Adding Color to Old Carleton Photos

Carleton College’s digital archives contain a vast library of photos taken around campus since the institution’s founding. Almost all of the photos, however, are in black and white, owing to the limitations of older photographic materials. I asked the AI image generator and editor “Deep AI” to colorize a photograph I selected from Carleton’s archive. The photo shows a busy path of students walking in the fall by Carleton’s “Bald Spot,” near the Sayles building. I take this path every single day, so I wanted to see whether AI could properly recreate the scene’s real-life colors. After artificially coloring the image, I uploaded it to the recolored image database on Omeka.

Overall, the newly colored photo looks pretty good. The colors clearly recreate a fall day, and the bare trees confirm that this is the right time of year. The more you look at the image, however, the more faults you can observe. In the back of the image, for instance, there is a patch of green grass that feels out of place against the rest of the scene. Similarly, if you zoom in on the students, their faces are blotched with red and green rather than rendered in realistic skin tones. This points to a key problem with AI image generation: it can never be fully precise and will always have faults. As the writer Ted Chiang puts it, when an AI-edited image is just an “approximation of the original, the compression is described as lossy: some information has been discarded and is now unrecoverable.”

AI color generators simply “fill in the gaps” of what they recognize and struggle to properly recreate the specific colors that exist in the real world. For instance, when we attempted to artificially add color to a photograph of the Golden Gate Bridge, the AI generator failed to recreate the bridge’s distinctive red. The same concept applies to AI tools more broadly. ChatGPT and Gemini, for instance, are both helpful, but they are never as reliable as they let on. As Chiang puts it, ChatGPT “retains much of the information on the Web, in the same way that a JPEG retains much of the information of a higher-resolution image, but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation.” In the same way, Deep AI’s colorized photo is just an approximation of the original black-and-white photo’s colors. While it looks accurate at first glance, the cracks appear the longer you observe the AI’s recreation.
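Chiang’s lossy-compression analogy can be made concrete with a toy sketch in Python. This is purely an illustration, not Deep AI’s actual algorithm: quantizing grayscale pixel values into coarser buckets discards information, and no decompression step can recover the exact originals afterward.

```python
# Toy "lossy compression": collapse 0-255 grayscale values into 16 buckets.
# The exact original values are discarded; decompression can only
# reconstruct an approximation, which is Chiang's point about JPEGs.

def compress(pixels, levels=16):
    """Map each 0-255 value onto one of `levels` coarse buckets (the lossy step)."""
    step = 256 // levels
    return [p // step for p in pixels]

def decompress(codes, levels=16):
    """Reconstruct each bucket as its midpoint; the exact originals are gone."""
    step = 256 // levels
    return [c * step + step // 2 for c in codes]

original = [0, 37, 91, 128, 200, 255]
restored = decompress(compress(original))

# `restored` is close to `original` but not identical: some information
# has been discarded and is now unrecoverable.
```

The restored values land near the originals, yet no algorithm could tell from the compressed codes whether a pixel started as 37 or 40; that lost distinction is exactly what “lossy” means.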

Is it ethical to use AI to look at photos of the past? Recreated photos will never be truly accurate and will always have faults. Still, AI can be a useful tool for viewing the past through a new lens. I believe it is ethically fine to use AI for purposes such as recoloring images, but it is crucial to remember that the result will never be a true recreation.

2 thoughts on “Adding Color to Old Carleton Photos”

  1. This is a thoughtful exploration of AI’s role in historical imaging. I completely agree that AI-generated images are merely approximations rather than accurate recreations. Your examples of the discolored faces and grass are really compelling. Your connection to Ted Chiang’s lossy-compression analogy is particularly insightful, offering a powerful framework for understanding how AI tools inevitably discard nuance while creating compelling but imperfect representations of reality.

  2. Your idea of bringing life back into historical images by coloring them is inspiring, and the photos you chose really make the effect pop. Seeing those older, monochrome scenes turn into more vivid versions draws me into the past in a new way, making the history feel more immediate. Great job picking images that respond well to color and making the whole project feel engaging and meaningful!

