There are a handful of ethical questions we need to ask ourselves about using AI (Artificial Intelligence) to manipulate images. The first thing that comes to mind is The Terapixel Panorama (linked here). Using technology to display things exactly as they are, down to the brush stroke, is very different from manipulating an image. What was done for The Terapixel Panorama would not be considered AI-based modification.
Modifications made with AI use a probabilistic model to determine the most likely outcome. In text, it will say what you want to hear. For colorization, the AI takes a guess by looking at the exact shade of gray in the black-and-white image and inferring the likely color, often ignoring context. That means strong, bold colors are very often absent, leading to muted colorizations. In the case of the Golden Gate Bridge, the AI was wrong, as verified by experts (from what we discussed in class).
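The mutedness has a simple statistical explanation, which can be sketched with a toy example (this is not a real colorization model, just an illustration): if several colors are equally plausible for one gray value, a model that minimizes average error will predict something close to their mean, and the mean of vivid colors is a washed-out color.

```python
# Toy sketch (hypothetical, not an actual colorization algorithm):
# averaging equally plausible RGB candidates for a single gray value.

def guess_color(gray, candidates):
    """Return the mean of the RGB candidates plausible for this gray value."""
    n = len(candidates)
    return tuple(sum(c[i] for c in candidates) / n for i in range(3))

# A mid-gray pixel could plausibly be a bright red or a bright blue jacket...
plausible = [(200, 30, 30), (30, 30, 200)]
print(guess_color(128, plausible))  # -> (115.0, 30.0, 115.0), a muted purple
```

Neither bold candidate survives the averaging, which matches the muted results described above.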
My Selected Image

Interior of the chapel during a meeting of the men students

My zoomed in, colorized version done using AI.
Immediate Thoughts
The background looks very convincing, and the wooden brown color looks realistic, although the changes in shade do seem a bit extreme. It's as if more than just lighting affected it; the shade of brown really does seem to change. In real life, it was most likely the same color. To verify the accuracy of the colorized version, it would be important to ask someone familiar with the Carleton Archive and its history to assist.
The AI doesn’t recognize that the American flag is the American flag, so it selected colors that were clearly incorrect. In the black and white image, it looks like the people sitting down are not all wearing the exact same shade of a color, but we cannot know what the colors specifically were. The AI essentially takes leaps when guessing colors. I wouldn’t blindly trust the AI's determination that two of the men were wearing bright red.
My Omeka item link will be listed here.
Compressing a file requires two steps: first, the encoding, during which the file is converted into a more compact format, and then the decoding, whereby the process is reversed. If the restored file is identical to the original, then the compression process is described as lossless: no information has been discarded. By contrast, if the restored file is only an approximation of the original, the compression is described as lossy: some information has been discarded and is now unrecoverable. Lossless compression is what’s typically used for text files and computer programs, because those are domains in which even a single incorrect character has the potential to be disastrous.
https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web
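The lossless-versus-lossy distinction Chiang describes can be demonstrated in a few lines, using Python's built-in zlib for the lossless case and crude bit-dropping as a stand-in for lossy compression (the bit-dropping is just a toy illustration, not a real lossy codec):

```python
import zlib

original = b"The same sentence, repeated. " * 10

# Lossless: decoding the encoded file restores the original exactly.
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)
assert restored == original          # no information was discarded

# Lossy (toy version): zero out the low bits of every byte.
# The result only approximates the original, and the dropped
# bits are unrecoverable -- there is no way to decode them back.
lossy = bytes(b & 0b11110000 for b in original)
assert lossy != original
```

This is why a single flipped character is tolerable in a lossy photo but disastrous in a lossless domain like program code.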
This can read like technical jargon if you aren’t knowledgeable about compression. I don’t fully understand the process myself, but I do notice when things are particularly poorly done. I have used AI before to artificially enhance images I wanted to zoom in on. The results vary, but I find it works well with digital renders of fantasy settings, where tiny imperfections in the enhancement of something like a tree don’t matter.
However, in the case of enhancing the image I selected:

The zoomed-in image is not of particularly high quality, since I zoomed in on a very small portion of the original. As a result, you get a lot of very creepy faces. An artificial-looking background is tolerable, but faces really need to be rendered accurately. The fidelity of the faces in this picture was not high enough for image enhancement to be a good idea at all.
Ethical Issues
A lot of care and time needs to go into understanding what exactly is done with data. Careful verification is needed to make sure that what is made and posted is reliable. If misinformation starts spreading, it can set off a chain of incorrect information. As with the Golden Gate Bridge, if the colorized image with the white and red were spread without the truth behind it being known, many people could come to believe the image is fact when it is incorrect.
I’d like to extend her caution to the metadata generated by artificial intelligence that increasingly undergirds source collection and discovery. As artificial intelligence generates data about data, guided by schemas, ontologies, and ways of seeing built into algorithms that will guide our search and aggregation, attention to the metadata and rethinking how artificial intelligence generates it will likely become a key part of historical research.
https://academic.oup.com/ahr/article/128/3/1354/7282256?login=true
Staying ahead of problems and being careful before things spiral is critical to ensuring that Artificial Intelligence is used responsibly. It really shouldn’t be an afterthought. Once sources can’t be trusted, you can’t reliably use them and you lose faith in them. If that happens to something you are writing, the damage is very difficult to undo, so it is best to be mindful of how AI is used.