I chose the King James Version of the Bible from Project Gutenberg because I thought it would be fascinating to see what patterns might emerge in such a large and widely studied text.
Attempt 1: Google Gemini

I started by trying to follow the AI instructions using Gemini, but that turned out to be pretty frustrating. The model didn’t want to generate any visualizations, which made it difficult to get a sense of the text beyond raw numbers. It did manage to produce a table of the 20 most common words, which was interesting to look at, but left me wanting a more visual and interactive way to explore the text.
Attempt 2: Voyant Tools
This led me to try out Voyant Tools. Uploading the Bible there gave me a clearer and more engaging overview of the text. The word cloud of the top 50 words was pretty neat; however, when I tried exploring some of the other visualizations, the data started to feel overcrowded and the graphs were difficult to interpret.

Attempt 3: RStudio
Out of curiosity, I decided to move over to the tried-and-true RStudio and do some of my own text analysis (you can view the code if you're curious, but please know it was a pretty hasty endeavor). After cleaning up the data, I created two new visualizations. The first was a bar chart of the most common words, which provided a clearer visual representation than a simple table of counts. The second was a sentiment analysis showing the top ten positive and top ten negative words, based on a predefined sentiment dictionary.
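For anyone curious what that workflow looks like in code, here is a minimal sketch of the same two steps, counting words and scoring them against a sentiment dictionary, written in Python rather than R so it's easy to follow; the one-verse text and the tiny lexicon below are stand-in examples, not the dataset or dictionary I actually used:

```python
import re
from collections import Counter

# Stand-in text: a single verse instead of the full Gutenberg file.
text = ("And God saw the light, that it was good: "
        "and God divided the light from the darkness.")

# Clean up the data: lowercase and pull out word tokens.
words = re.findall(r"[a-z']+", text.lower())
counts = Counter(words)

# Step 1: most common words (what the bar chart visualized).
top = counts.most_common(3)

# Step 2: dictionary-based sentiment. This lexicon is a toy example;
# a real analysis would load a predefined sentiment dictionary.
lexicon = {"good": "positive", "light": "positive", "darkness": "negative"}
sentiment = {w: (lexicon[w], n) for w, n in counts.items() if w in lexicon}

print(top)        # most frequent words with counts
print(sentiment)  # sentiment-tagged words with counts
```

In R I did this with a tidy-text-style pipeline, but the idea is identical: tokenize, count, then join the counts against a sentiment lexicon.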


Reflections
In the end, this project showed me both the possibilities and the limits of computational text analysis. Automated tools and AI models can produce results quickly, but they can also mislead if the context is ignored, obscure biases in their training data, or oversimplify complex texts into metrics and charts. Tools like Voyant and R can definitely make massive texts more approachable, but human interpretation still matters. We need to approach computational analysis critically, always questioning the methods and verifying findings, rather than taking AI-generated outputs at face value.
I’ve done a lot of close reading and manual textual analysis in other classes, and those experiences often lead to deeper insights and more meaningful discussions than simply seeing which words appear most often. The data is useful for noticing patterns, but I believe that it’s our interpretation that brings the text to life.