Gutenberg GPT
A dispatch on AI and religious reform
“Typography is not only a technology but is in itself a natural resource or staple, like cotton or timber or radio; and, like any staple, it shapes not only private sense ratios but also patterns of communal interdependence.”
—Marshall McLuhan, The Gutenberg Galaxy
“The Internet gave us access to everything; but it also gave everything access to us.”
—James Veitch
It’s been a few months since I released my episode, “Pandemic: A Plague Coda”, in which I reflected on the massive changes made possible by the forces the Black Death of the 14th century set loose. I’ve since discussed those changes on a couple of awesome podcasts, along with how they might relate to the changes that could come around thanks to the COVID-19 pandemic, and the thing that has stuck with me most is the rise of Protestantism. I’ve made no secret that my favorite Hardcore History episode is the 2013 masterpiece “Prophets of Doom”, in which Dan Carlin discussed the Anabaptist Rebellion in Muenster. I referenced it in “Pandemic: A Plague Coda”, along with the role the printing press and the subsequent translations of the Bible played in that upheaval. Thanks to all of this, and to hearing for what is now decades that the internet is the biggest information technology breakthrough since Johannes Gutenberg’s revelatory invention (a perfection of pre-existing Chinese technology, it must be noted), I’ve started wondering—and speculating—about what the 21st-century equivalent of translating the Bible into local languages is going to be.
During my appearance on the Reckless Musecast with Joe Garza and Ben D’Alessio, we were spitballing some ideas on this front, and, off the top of my head, I speculated that this revelatory change might possibly—or likely?—have to do with the rise of what we’re calling “artificial intelligence” (and even though I don’t think ChatGPT and the like constitute “real” artificial intelligence, that’s just my sci-fi nerd brain; we’ll stick with AI for the purposes of this essay). I won’t try to speculate specifically about what AI will do or what form it will take, but because I like speculating about the future as much as I like discussing the past, I want to try and use the past to untangle what kinds of effects AI could have, akin to those of the translations of the Bible into local languages in the 15th and 16th centuries.
When it comes down to it, the rise of AI in the modern era and the mass translations of the Bible represent two profoundly transformative moments in history: social, political, and cultural. While seemingly disparate, the two phenomena share remarkable parallels in the context of their respective emerging technologies. Let’s break this down as best we can.
As we know, the internet has revolutionized how we access information and how we connect with other people on both local and global scales. This has provided the necessary infrastructure for AI to thrive. Similarly, the Gutenberg printing press facilitated the mass dissemination of Bible translations, which let far, far more people access and interpret religious texts independently of the clergy. This also likely helped incentivize people to become more literate, spreading individual interpretation—and with it individualism—ever wider and paving the road for the individualistic philosophies of the Enlightenment to thrive in the 17th and 18th centuries. How this might relate to AI is difficult to say, but it’s hard not to imagine AI creating similar incentives toward learning new skills, with similarly profound impacts on the wider culture.
This leads us to the theme of accessibility and democratization, which we know the translations of the Bible advanced and which will likely result from the improvement and proliferation of AI. Because the Gutenberg printing press empowered people to access and interpret religious texts in their local languages, it became easier to challenge the authority of the Catholic Church. And as I covered in “Pandemic: A Plague Coda”, the Black Death had already set this trend in motion, both thanks to the rhetoric of groups like the Flagellants and to regular people having the eyes and ears to see and hear what the more nefarious among the clergy were doing. If we’re to assume AI will further democratize knowledge—arguably the broadest revolutionary characteristic of the internet—this could grant even more widespread access to information and expertise once limited to a privileged few. The implications of this are so broad that the question du jour—misinformation—doesn’t even really matter, especially if we trust a well-programmed AI not to fall prey to biased interpretations of reality.
The potential for challenging all existing power structures is just as profound now as it was in the 16th century. When Church authority—already in crisis by the time the Bible was being translated and mass-produced—was challenged, a vacuum was created, and that vacuum allowed for the rise of new institutions (i.e. radical sects and cults, Protestant churches, and, eventually, more secular statehood). We’ve already seen this occur in recent years with the aforementioned issues regarding misinformation (which has certainly become something of a moral panic, but that’s a different conversation). Whether AI is able to nip that issue in the bud is almost irrelevant, since it has the potential to disrupt existing power dynamics in domains like healthcare, finance, and governance, challenging the traditional hierarchical structures of today. As we know, the Church, while still a cultural institution with a lot of global power, is not a particularly important institution to the functioning of modern society, or at least isn’t seen as such. We’ve seen trust in some pretty fundamental institutions—such as the press or the public health sector—take blow after blow in polls, showing no sign of recovering anytime soon. This isn’t to say that AI will provide a magic replacement, but it certainly could, like a Biblical translation, facilitate and incentivize people to create their own. This is neither inherently positive nor negative—just as the translation of the Bible, at the time and not in hindsight, was neither positive nor negative—but it is significant, or at least has the possibility of being significant, and, like the translation of the Bible, it will probably produce conflict, as anything that transforms culture does.
The translations of the Bible brought about linguistic and cultural shifts, contributing to the development of local languages and shaping cultural identities. Similarly, AI, with its ability to process vast amounts of data so quickly, can shape cultural trends, influence artistic expression, and even redefine creative processes; we haven’t seen exactly how these things will change, but we know people in the creative sectors of our society are already voicing concern, if not outright panicking. There’s a reason that, if you look at the Writers Guild of America’s demand document, you’ll see AI featured prominently. I personally try not to pass too much judgment on AI, but it’s no mystery to me why people—particularly creatives—have concerns.
But those concerns go (and in some ways already have gone) well beyond the creative arts. What AI can do—essentially automate several industries once thought human-centric—is exactly what Andrew Yang was warning us about. If you thought unemployment during COVID-19 was bad, you haven’t looked at the bigger picture. Ad agencies, eLearning companies, and manufacturing are all at risk, but they were already at risk. Banking, financial analysis, healthcare administration, insurance companies, and, once automated cars work out the kinks, commercial and public transportation are all facing various levels of risk. And while it’s best not to get too excited, things like media and even education as we know it could find themselves increasingly irrelevant depending on how AI develops and pervades our lives. It wouldn’t be a world of the poor 99% versus the rich 1%; it would be a world of the consuming 99% versus the programming 1%. And while this might seem to pull us away from the comparison with post-printing press, post-Biblical translation society, it’s important to remember that none of these industries, or anything really like them, existed 500 years ago. It was the monarch, the Church, the guilds, and the peasants. We arguably got to where we are now—and to what we may well lose—because of what the printing press started. Much was lost to chaos after the decentralization of faith and authority, and much was also gained. That so few think something like that could happen again is perhaps the most troubling trend, especially since the stakes—as demonstrated by the violence and conflict that dominated Europe for centuries after the Reformation—are so damn high.
As I covered in “Pandemic: A Plague Coda”, the Anabaptist Muenster Rebellion and the French Wars of Religion exemplified the conflicts that emerged from the mass translations of the Bible, but that wasn’t all. The insanely destructive Thirty Years’ War of 1618-1648—up to eight million dead—was yet another conflict enabled by the changes the printing press and the translations of the Bible set in motion. Silly as that might sound, it isn’t once you realize that without the rise of Protestantism—made possible by those technological and cultural breakthroughs—you wouldn’t have had any of those conflicts, including the Thirty Years’ War. While nowhere near as murderous or totalizing, a schism is already beginning between what we can call pro-AI and anti-AI factions. When that schism becomes dominated by more than just material concerns—that is, by ethical concerns—that is when the risk of existential conflict starts showing up in the tea leaves.
It is by no means overstating it to say that the mass translations of the Bible raised ethical and moral questions. They introduced new interpretations and challenged existing religious doctrines, leading to debates, conflicts, and, as mentioned, war. AI is not exempt from this pattern just because we live in a modern society. AI presents ethical dilemmas related to privacy, algorithmic bias, and the consequences of delegating critical decisions to machines, and addressing these considerations will be essential to navigating the conflicts and disruptions that could arise from AI. We already saw conservatives accusing ChatGPT of having a left-wing bias, and while it’s certainly true that the responses it generates have a certain milquetoast progressive stank to them (at least in my opinion), a virtual intelligence can’t really have a bias. But that’s not the point; the point is that this is one of the schisms already showing its face regarding AI—a political schism. If you have one group of people who believe that the AI looks at the world in a non-objective way or, more threateningly, in the way their enemies look at it, you have the recipe for AI-centric factions to brew. We could see clashes between defenders of AI embracing its potential and skeptics worried about the erosion of human agency and the ethical implications of AI systems—and that likely would be the core philosophical difference—but, as always with human beings, it would manifest in far cruder ways. Remember the story Christopher Hitchens told about telling his Irish interlocutors that he was an “Atheist Jew”, and how they paused before asking, “Protestant Atheist Jew or Catholic Atheist Jew?” Call me cynical (and it’s been suggested), but I see the embrace or rejection of AI manifesting in no less silly a way.
We’ll likely see this shake out in the political realm in an official capacity as well, and in the not-too-distant future. Conflicts may arise around the governance and regulation of AI technologies: pro-AI factions may argue for limited restrictions to foster innovation and progress, while anti-AI factions may call for comprehensive oversight and control to mitigate risks, and debates over algorithmic transparency, accountability, and the potential for AI systems to perpetuate biases or behave unethically could fuel those conflicts. These kinds of debates were constant in the 16th and 17th centuries and were at the core of all the aforementioned religious wars and conflicts. The established Church authorities held significant power and influence over religious doctrine, interpretation, and societal norms, and they defended that position staunchly against challenges from reformers and individuals advocating alternative interpretations of religious texts. They represented the status quo, defended their traditional roles and power structures, and resisted changes that challenged their authority, fearing disruptions to the existing social, political, and cultural order. They often presented their positions as necessary for the salvation of souls and the greater good of society, arguing that their interpretation of religious texts and their authority were crucial for maintaining moral order and guiding individuals on the correct path to salvation. In their own medieval way, by clamping down on doctrinal heresy and dismissing reformers, they were engaging in institutional regulation.
The conflicts and wars fought between Protestants and Catholics were never about anything more complex than two different ways of looking at existence. One could arguably say that that’s the nature of all conflicts and wars, but let’s stay focused. The same must be said about AI. The strife, conflicts, and possibly even wars (never rule it out) that could erupt in the coming years won’t really be about AI itself; they’ll be about competing ways of looking at existence, with AI as the catalyst. But given what we know about what new information technologies have facilitated in the past, especially when combined with the right social, psychological, and economic forces at the right time, we would do well not to pretend nothing could happen. It may not turn out that artificial intelligence is the linchpin on which the internet truly changes everything we recognize about modern society. For all we know, this whole thing could come crashing down when we get hit with a powerful enough solar flare, and we won’t have to worry about our coming technological overlord. But of all the things history does, it best reveals how our species responds to the trends and forces so many of us love discussing, and that response always seems to have something to do with division and conflict, often propelled by some kind of technological advancement. It is without question that in the field of information technology, nothing even came close to Johannes Gutenberg’s magical device from 1440 until Tim Berners-Lee’s World Wide Web burst out of CERN in 1991. And the idea that an invention so momentous, one that had such grand-scale effects on civilization almost six centuries ago, couldn’t happen again with an invention just as momentous in our future strikes me as the height of modern arrogance.



