In November 2022, OpenAI released ChatGPT, a conversational interface to GPT, its Large Language Model (LLM). ChatGPT took the world by storm, becoming the fastest-growing consumer application in history. Millions of people opened accounts and began chatting with it, and the LLM’s answers astounded everyone: sometimes because of the full-paragraph, conversational answers, sometimes because it could summarize and outline. It retained context within the same “conversation”, which let users refine their questions and get better answers.
It also created the illusion that you were talking with someone, that there was some kind of mind behind the LLM.
Emily Bender, a professor at the University of Washington and director of its Computational Linguistics Laboratory, coined the term “stochastic parrots” to refer to Large Language Models. You are not a parrot. And a chatbot is not human. In Bender’s view, a Large Language Model is a construct that generates words based on probability distributions.
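That idea can be made concrete with a toy sketch. The dictionary below is a hypothetical, hand-made table of next-word probabilities (real models compute distributions over tens of thousands of tokens with neural networks); the point is only that generation is repeated sampling, with no understanding involved.

```python
import random

# Toy next-word distributions: each two-word context maps to candidate
# continuations with probabilities. (Illustrative numbers, invented for
# this sketch — not taken from any real model.)
model = {
    "the cat": {"sat": 0.6, "ran": 0.4},
    "cat sat": {"on": 1.0},
    "cat ran": {"away": 1.0},
    "sat on": {"the": 1.0},
    "ran away": {"fast": 1.0},
    "on the": {"mat": 0.7, "roof": 0.3},
}

def generate(context, steps, rng):
    """Repeatedly sample the next word from the current context's distribution."""
    words = context.split()
    for _ in range(steps):
        key = " ".join(words[-2:])
        dist = model.get(key)
        if dist is None:  # no known continuation: stop
            break
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the cat", 4, random.Random(0)))
```

Each word is drawn at random according to the table, which is why the same prompt can yield different continuations on different runs: the “parrot” is stochastic.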
Bender argues at length about the problems of the computational metaphor, specifically about the metaphor that the human brain is a computer, and a computer is a human brain. This notion, she says, affords “the human mind less complexity than is owed, and the computer more wisdom than is due.”
Part of the problem is the widespread use of anthropomorphic terms for functions of Large Language Models¹. When we speak of a ChatGPT “hallucination”, our minds link the vast repertory of science-fiction plots about sentient machines with the future evolution of current AI systems.
For example, the term “intelligence” in Artificial Intelligence generates confusion, because we unconsciously associate it with human intelligence. The term AGI is equally equivocal. AGI, which stands for Artificial General Intelligence, refers to a hypothetical, as-yet-nonexistent AI that is not merely predictive but genuinely intelligent, makes its own decisions, and is perhaps sentient. Some doomsday critics of Artificial Intelligence are also afraid that these systems may be hiding something or lying to us, verbs that imply intention and free will.
We don’t know if we are near the Singularity
The myth is that machine superintelligence is inevitable and near, while the fact is that it may happen in decades, in centuries, or never. AI experts disagree, and we simply don’t know. In an interview with Lex Fridman, OpenAI CEO Sam Altman acknowledged that there is no clear indication that Large Language Models are the path to Artificial General Intelligence. Yet he keeps saying that he is a little bit scared, or suggesting that anyone who knows something about current AI should be somewhat scared.
As Cory Doctorow explains, “AI isn’t ‘artificial’ and it’s not ‘intelligent.’ ‘Machine learning’ doesn’t learn. (…) ChatGPT is best understood as a sophisticated form of autocomplete – not our new robot overlord.” Let’s try to use more precise terms when referring to technology, or at least make it clear from the context what we are referring to.
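Doctorow’s autocomplete framing can also be illustrated with a toy sketch. A hypothetical bare-bones autocomplete just counts which word follows which in some text, then always extends with the most frequent continuation; nothing here “learns” in any human sense, it only tabulates and replays frequencies.

```python
from collections import Counter, defaultdict

# A toy autocomplete: count which word follows each word in a tiny
# made-up corpus, then always extend with the most frequent continuation.
corpus = (
    "the model predicts the next word "
    "the model predicts text "
    "the next word is chosen by frequency"
).split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def complete(word, steps):
    """Greedily append the most common continuation, up to `steps` words."""
    out = [word]
    for _ in range(steps):
        counts = following.get(out[-1])
        if not counts:  # word never seen mid-sentence: stop
            break
        out.append(counts.most_common(1)[0][0])
    return " ".join(out)

print(complete("the", 4))
```

The greedy version quickly loops back over its most frequent phrases, which is a caricature of what a real LLM does, but it makes the underlying mechanism visible: continuation by statistics, not by thought.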
¹ In the same article, Doctorow even wonders if this has been done intentionally by companies like OpenAI to create a hype cycle. ↩︎