Colorizing Carleton

Carleton’s Digital Archives contain many images of life at Carleton over the years, but the photos are black and white. I tasked an AI model from DeepAI with colorizing a picture of two friends living in Gridley Hall, circa 1894. My Omeka entry can be found here, and images of the results are inserted below.

In my opinion, the image is still not as lively as the real color scene would have been. In class, we talked a lot about how AI models predict the average of a bell-curve distribution of possible colors rather than the actual color. Intense, striking tones are less likely to be correct, so the machine settles for dull, brown tones. Although the colorization does add some character to the photo, I find it hard to believe that the result is accurate.
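The averaging effect can be illustrated with a bit of arithmetic. This is a toy sketch, not an actual colorization model: the candidate colors below are hypothetical, chosen only to show that when a model hedges between several vivid possibilities by predicting their per-channel mean, the result is a muted mid-tone less saturated than any of the candidates.

```python
# Hypothetical vivid colors a model might consider equally plausible
# for the same gray region of a black-and-white photo.
candidates = [
    (200, 40, 40),   # vivid red
    (40, 160, 80),   # vivid green
    (60, 100, 200),  # vivid blue
]

# Predicting the per-channel average of those possibilities:
avg = tuple(
    round(sum(color[ch] for color in candidates) / len(candidates))
    for ch in range(3)
)
print(avg)  # (100, 100, 107) — a dull gray-brown, unlike any candidate
```

None of the individual guesses was gray, yet the "safest" prediction is: averaging over uncertainty washes out exactly the intense tones the original scene may have had.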

This problem leads me to my biggest issue with AI and image manipulation: accuracy. As we saw in class, AI struggled to accurately recreate an image of the Golden Gate Bridge, and in a different image it significantly lightened a woman’s skin tone. With current models trained simply to fill in gaps as best they can, we cannot trust recolorized images to be accurate. Ted Chiang’s article in the New Yorker offered a metaphor for ChatGPT as a blurry picture of the whole internet – you can roughly make out what is going on, but not with full accuracy. Chiang writes, “[ChatGPT] retains much of the information on the Web…but, if you’re looking for an exact sequence of bits, you won’t find it; all you will ever get is an approximation.” As AI image manipulation becomes more prevalent, we have to remember that everything it creates is based on algorithms, and those averages can never be 100 percent accurate.

This doesn’t just affect recolorization. The article about AI’s effect on the field of art history describes a company that uses X-ray imaging to view what is underneath a painting and, together with an artist’s other works, generates “new” works of art in that artist’s style. Drimmer argues that “These recreations don’t teach us anything we didn’t know about the artists and their methods,” which is what the field is truly interested in. From her perspective, AI isn’t adding anything useful to art history; rather, it is creating potentially falsified images that alter our true picture of art history. I agree – I’m not convinced that the benefits of AI image manipulation outweigh the costs.

4 thoughts on “Colorizing Carleton”

  1. Megan, your post raises excellent points about the limits of AI image colorization and the tension between authenticity, recreation and artificial image enhancement. I especially like how you connect your own experiment with the broader ethical and historical implications of AI in art. I similarly used Ted Chiang’s “blurry picture” metaphor, which I believe perfectly captures the idea that AI offers an approximation but not accuracy. I agree that while these tools can make archival materials more engaging and can help us observe the past in a new light, they risk distorting our understanding of the past if we mistake artificial enhancements for historical accuracy.

  2. Megan, I really liked your thoughts about the types of colors this technology chooses and how they fall in the middle of the bell curve. I think that Ted Chiang’s “blurry picture of the whole internet” is the perfect quote to illustrate your point. While the colors aren’t terribly inaccurate, they are also not true, nor do they add anything new that couldn’t be found in the original black and white images. After reading through a few blog posts, this skepticism seems to be a theme. I struggle to imagine a legitimate application for this technology, outside of, perhaps, artistic or experimental purposes.

  3. Hi Megan! I agree with what you said about how AI tools can’t really add anything new to the world, and we should recognize that colorized photos are not accurate. When I processed my photo with DeOldify, I also noticed that the color tone is not very realistic, no matter how you adjust the render factor. Your point about falsification also rings true, and bias in the training data of AI tools should be mentioned whenever we use such models.

  4. Megan, I really enjoyed the quotes you pulled for your argument, particularly the one that references AI as an “approximation”, as I found this to be incredibly relevant in image colorization. Approximations are not necessarily inaccurate, but they lack detail and context that can root an argument, or in this case, an image. I agree that the colorized image does not in fact add enough dimension for it to seem revolutionary, though it is at least somewhat interesting.
