I used DeOldify to colorize a picture from the Carleton Archives and uploaded it to Omeka, Carleton’s digital exhibit space. Here is my colorized photo added to Omeka. Just going through this process of colorization revealed many potential ethical issues in the field. For one, as we found in class, the tool DeOldify had been updated. The newer version was much easier to use because it was simplified, but the underlying code and the description of what was happening behind the scenes disappeared. As a result, when adding the Omeka description for my item, I wasn’t sure what to put. For example, it was not clear to me what the render factor was. The ambiguity surrounding what artificial intelligence actually does can raise a range of ethical issues, from authorship and citation to potential manipulation and discrimination within the code.
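For anyone puzzled by the same setting: the DeOldify README does explain it, even if the simplified interface no longer does. The render factor controls the resolution at which the color portion of the image is computed — the model colorizes a square copy of the image whose side length is the render factor times 16 pixels, then blends the result back onto the full-resolution original. A minimal sketch of that arithmetic (the helper function name below is my own, not part of DeOldify):

```python
def render_size(render_factor: int) -> int:
    # Per the DeOldify README, the model colorizes a square copy of
    # the image with side length render_factor * 16 pixels; lower
    # values run faster but capture less color detail.
    return render_factor * 16

# The notebook's default render factor of 35 means the color pass
# happens on a 560 x 560 version of the photo.
print(render_size(35))  # 560
```

This is why raising the render factor slows colorization down: the color pass runs on a larger intermediate image, not on the original pixels directly.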


My Biggest Concern
As a prospective political science major, my biggest concern about the combination of artificial intelligence and image manipulation is the potential for the spread of misinformation. Fake images created with malicious intent can have drastic effects on efforts to maintain peace. Especially given the number of people who have access to technology, misinformation has the potential to cause widespread panic. For example, if an adversary of the US promoted a misinformation campaign with image manipulation, it could be hard to determine what is real.

In Sonja Drimmer’s piece on AI hijacking art history, she points out that “this effort to ‘bring events back to life’ routinely mistakes representations for reality.” In her argument, she is generally referencing actions that are not intentional. However, with the tools available today, if someone had the intention to create a new reality, it is very possible. Similarly, in Lauren Tilton’s “Relating to Historical Sources,” she references a quote by Jessica Marie Johnson, who warns that “the drive for data and the building of databases for historical research are not outside of histories of commodification, exploitation, and violence.” These databases are prone to human biases, and therefore exploitation and violence can be ingrained in the information that artificial intelligence spews out.

Once again, someone with malicious intent could easily utilize the underlying techniques that current artificial intelligence relies upon to further increase levels of exploitation and violence. If we are not careful in our understanding of the processes of artificial intelligence, there could be costly consequences. For now, these ethical considerations will remain at the center of the AI conversation, which is why I still believe that widespread AI literacy is extremely important. I am sure it is already happening, but I could see governments continuing to enlist advanced tools to combat the spread of misinformation through image manipulation.
I like how you tied your concern back to your potential major. I think malicious intent with AI usage can have different impacts within different fields, and in political science it is especially important to combat the spread of false information because of the broader implications it can have on public opinion. I’m curious about ways we can educate the public about recognizing misinformation and AI-manipulated images.
Dylan, I think that you make a very strong and thoughtful argument about the dangers of misinformation and the ethical concerns regarding AI image manipulation. Your point about how easily fake images could disturb trust and peace is especially powerful. I also like how you bring in Drimmer and Johnson to emphasize that bias and exploitation are already embedded in our systems. I agree that AI literacy and transparency are essential to prevent these technologies from being weaponized.
Hey Dylan, I totally agree with your ethical concerns about how this tool could be used in political misinformation. As the technology continues to improve, everyone will need to develop much better technical literacy (and platforms will need stronger fact-checking). How do you think the launch of SORA AI will impact political misinformation?