With all the hype around ChatGPT, large language models (LLMs), and AI, and the possibilities of that technology, we rarely stop to think about what it all means. Sure, there are people hyping the topic on YouTube, on Twitter, and everywhere else they find an audience. Yet, we lack reflection.

Language

The discussions about ChatGPT and LLMs are very much focused on the Western world and Western world-views, with most of the training data being in English. If the training data roughly follows the general distribution of texts on the internet, English will account for more than 50 %, with Russian, Spanish, French, and German as runners-up at roughly 4–5 % each.

Language isn't just about communication; it is deeply linked to culture, identity, and belonging. Monolingualism is the exception, multilingualism the norm. In some areas, people use a vernacular, a standard language, and maybe even a lingua franca to communicate across linguistic divides. Most of this nuance is lost in the current discussions, as LLMs further promote English as the global lingua franca. That's not without precedent: until not too long ago, a large majority of European literature was written in Latin. Maybe English will meet the same fate.

Hume's Guillotine

One significant concern I see regarding AI is the is-ought problem. As articulated by the philosopher David Hume in the 18th century, the is-ought problem highlights the fallacy of attempting to derive normative statements (what «ought» to be) from purely descriptive statements (what «is»).
But here, the problems only start! What «ought» to be? This question is surprisingly difficult to answer: apart from prohibitions of slavery, torture, and genocide, there is not a lot of consensus on what's «just». I think the discourse on AI and LLMs lacks humility. How can we expect a system we barely understand to solve issues we as humanity have been thinking about for millennia?

Knowledge

In recent weeks, I've encountered numerous historical comparisons related to AI, ranging from word processors and the discovery of fire and electricity to more peculiar examples. The comparison that resonates most with me is Johannes Gutenberg's invention of modern printing. Before the advent of letterpress printing, knowledge was considerably more centralised. Similarly, one of the pillars of the Reformation was making the Bible available in the vernacular, rather than just Latin, so that it could be understood by the masses.
Before this era, scholars and clerics held the knowledge and passed it on to the general public; they functioned as gatekeepers.

Those of us who remember the days before smartphones grew up in an era in which gatekeepers (editors, scholars, etc.) controlled access to information.
With the rise of the Internet, sites like Wikipedia, and now AI, the traditional systems of knowledge and their gatekeepers are once again being challenged. These technological advancements are gradually reshaping how information is disseminated. AI is just one more, admittedly rather big, step in a process that has been going on in cycles for a long time. Just as we adapted before, we’ll adapt again.

Overreliance on LLMs could lead to a loop in which the creation of new ideas becomes rare. Logical thinking is replaced by retrieving information and reproducing pre-existing content. What seem to be new and innovative concepts and ideas are, in fact, nothing more than a fractal: the same pattern repeated over and over.

I think the discourse on AI and LLMs lacks perspective. Yes, LLMs are a huge technological leap and have countless applications with the potential to better our lives. But let's not forget what we don't know: we're all just on our pale blue dot in the vast nothingness of space.
We're at a crossroads. The direction we take depends on how we view the human condition. Are we, deep down, good or bad? Hobbes or Rousseau?