Kevin Kelly on Artificial Intelligence (AI), from an interview by Noah Smith:
[R]ight now machine learning is overhyped. It is not sentient, and not as smart as it seems. What we are discovering is that many of the cognitive tasks we have been doing as humans are dumber than they seem. Playing chess was more mechanical than we thought. Playing the game Go is more mechanical than we thought. Painting a picture and being creative was more mechanical than we thought. And even writing a paragraph with words turns out to be more mechanical than we thought. So far, out of the perhaps dozen of cognitive modes operating in our minds, we have managed to synthesize two of them: perception and pattern matching. Everything we’ve seen so far in AI is because we can produce those two modes. We have not made any real progress in synthesizing symbolic logic and deductive reasoning and other modes of thinking. It is those “others” that are so important because as we inch along we are slowly realizing we still have NO IDEA how our own intelligences really work, or even what intelligence is. A major byproduct of AI is that it will tell us more about our minds than centuries of psychology and neuroscience have.
While ChatGPT and its peers are impressive, they are not AI in the full sense. They are a specialized branch of machine learning called Large Language Models (LLMs). They generate their answers by repeatedly predicting the most likely next token (roughly, a word or word fragment) given the text so far.
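To make the idea of next-token prediction concrete, here is a toy sketch using a bigram model: it counts which word tends to follow which in a tiny corpus and predicts the most frequent successor. This is vastly simpler than a real LLM, which uses a neural network trained on enormous text collections, but the core move is the same: pick the next word from learned statistics.

```python
from collections import defaultdict, Counter

# A tiny illustrative corpus (made up for this example).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — it follows "the" more often than "mat" or "fish"
```

An LLM does this same step billions of times per conversation, but over long contexts rather than a single preceding word, and with learned weights rather than raw counts.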
As Kevin Kelly says in the interview above, "we are slowly realizing we still have NO IDEA how our own intelligences really work, or even what intelligence is". So, while we are witnessing what may seem like incredible progress in AI, it is likely nothing compared with what is yet to come.