My process of experimenting with text analysis revolved around comparing Voyant Tools and Google's Gemini. First, following instructions from Salter, I fed Gemini numerous prompts to help me analyze text from Histology of Medicinal Plants, which I found through Project Gutenberg. The prompts asked Gemini to identify, count, and visualize the words or phrases that recurred the most. At the start of my prompting process, I was pretty impressed by the speed and depth of the information Gemini was able to spit back at me. However, when I asked it to generate images, more issues emerged. For one, I kept getting word-only responses to prompts asking for images. That issue was easily solved once I realized there was an image-creation mode I was supposed to be using. I was able to generate three different text analysis visualizations, but the words in the images were frequently misspelled. Overall, though, I thought Gemini was doing a good enough job until I compared it to Voyant's analysis.
Voyant's results made me realize not only that Gemini's visualizations were unappealing, but also that its images and analysis were inaccurate. Most of the words Gemini identified as the most frequent did not match Voyant's corpus, and even when they did, Gemini's frequency tallies were lower by large margins.
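For anyone who wants to cross-check these tallies without relying on either tool, a basic word-frequency count is easy to script. Here is a minimal sketch in Python (the function name, sample text, and stopword list are my own illustrations, not anything from Voyant or Gemini, and real tools apply much longer stopword lists and smarter tokenization):

```python
import re
from collections import Counter

def top_words(text, n=10, stopwords=None):
    """Return the n most frequent words in text, ignoring case and punctuation."""
    stopwords = stopwords or set()
    # Lowercase, then pull out runs of letters (and apostrophes) as words
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in stopwords)
    return counts.most_common(n)

# Tiny made-up sample, not a passage from the actual book
sample = "The leaf and the stem: the leaf protects the stem."
print(top_words(sample, n=3, stopwords={"the", "and"}))
# → [('leaf', 2), ('stem', 2), ('protects', 1)]
```

Running something like this on the Project Gutenberg plain-text file gives an independent tally to compare against both tools' numbers.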




This process did leave me with some concerns about the power of AI. I think we need to be careful about fully believing AI responses without some sort of cross-referencing; the inaccuracy of Gemini's responses was pretty disappointing. Additionally, Gemini presents its analysis without any context or explanation of the analytical choices being made, which makes the results hard to trust. I'm curious how other AI platforms would perform when it comes to analyzing text.