
DeepAI Colorized Image
For this week’s lab, I worked with an image from the Carleton Digital Collections: an archival photograph of three Carleton students sitting on a bed, studying and looking directly at the camera. I converted the original file into a IIIF-compatible format so it could be processed by DeepAI, then used DeepAI’s colorizer to generate a colorized version of the photograph. The results were both fascinating and unsettling. Some elements look reasonably realistic, such as the patterns on the bedspread, the floorboards, and the clothing, but the faces are poorly rendered, and the student in the back appears almost ghostlike. The bulletin board in the background turned an artificial bright red, a historically improbable color that DeepAI appears to have invented.

These imperfections raise important ethical questions about AI image manipulation, especially in a digital humanities context that values authenticity, documentation, and critical engagement with historical materials. As Sonja Drimmer reminds us, “This effort to ‘bring events back to life’ routinely mistakes representations for reality. Adding color does not show things as they were but recreates what is already a recreation.” DeepAI’s output illustrates this perfectly: the image becomes more visually “alive,” but it also becomes less historically trustworthy.

A second quote from Drimmer highlights the broader stakes: “When AI gets attention for recovering lost works of art, it makes the technology sound a lot less scary than when it garners headlines for creating deep fakes that falsify politicians’ speech.” Even a seemingly harmless colorization tool participates in the same ecosystem of AI manipulation technologies. If an algorithm can fabricate plausible color, texture, and lighting for archival images, the same mechanism can create, or convincingly distort, images in ways that mislead viewers, shape narratives, or rewrite historical memory.
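The conversion-and-colorization workflow described above can be sketched in Python. This is a minimal illustration rather than a record of my exact steps: the IIIF base URL and image identifier below are hypothetical placeholders, the URL pattern follows the IIIF Image API convention ({identifier}/{region}/{size}/{rotation}/{quality}.{format}), and the request assumes DeepAI’s public colorizer endpoint with an api-key header.

```python
# Hedged sketch of the lab workflow: build a IIIF Image API URL for an
# archival photograph, then send it to DeepAI's colorizer. The hostname,
# identifier, and API key are placeholders, not real collection values.
import urllib.parse
import urllib.request


def iiif_image_url(base, identifier, region="full", size="full",
                   rotation="0", quality="default", fmt="jpg"):
    """Assemble a IIIF Image API request URL from its standard components."""
    return f"{base}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"


def colorize(image_url, api_key):
    """POST an image URL to DeepAI's colorizer (requires a DeepAI API key)."""
    data = urllib.parse.urlencode({"image": image_url}).encode()
    req = urllib.request.Request(
        "https://api.deepai.org/api/colorizer",
        data=data,
        headers={"api-key": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()  # JSON response containing the colorized image URL


# Hypothetical identifier standing in for the Carleton Digital Collections item.
url = iiif_image_url("https://example.edu/iiif/2", "carleton_students_photo")
print(url)
```

Requesting the image through a IIIF URL like this keeps the derivation documented: anyone can see exactly which region, size, and quality of the archival original was handed to the colorizer.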
Using DeepAI made me aware of how easily the past can be misrepresented, even unintentionally. DH practitioners need to be transparent about what AI-generated images represent and what they do not. Colorization can be a useful interpretive layer, but it must be clearly labeled as speculative, not authoritative. The convenience of these tools should not overshadow the responsibility to avoid fabricating historical accuracy where none exists.