On A.I.
Voltaire joked that the Holy Roman Empire was “neither holy, nor Roman, nor an empire.”
It’s unfortunate that we call this family of technologies “artificial intelligence”. While they are certainly artificial, they are nowhere near intelligent.
The latest of these technologies is something called a Large Language Model, or LLM. An LLM is… a pattern matcher. It is a way to assemble sentences and paragraphs that look like answers in response to sentences and paragraphs that look like questions. Its output depends entirely on its input data, known as its training data.
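The “pattern matcher” point can be illustrated with a deliberately tiny sketch: a bigram Markov generator that records which word followed which in its training text, then emits plausible-looking sequences by sampling from those records. This is not how an LLM is actually implemented (real models use learned neural networks over tokens, not lookup tables), but it makes the dependence on training data concrete: the program can only ever recombine what it has seen.

```python
import random

def train(corpus):
    """Build a bigram table: each word maps to the list of words seen after it."""
    table = {}
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length=8):
    """Emit text by repeatedly sampling a word that followed the previous one."""
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break  # dead end: the training data never continued past this word
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the model predicts the next word and the model repeats the pattern"
table = train(corpus)
print(generate(table, "the"))
```

Every word the generator produces was already in the corpus; nothing resembling understanding is involved, only statistics over the input. The same is true, at vastly greater scale and sophistication, of an LLM.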
Michael Crichton described a phenomenon that he called “Gell-Mann Amnesia”, after his friend, the physicist Murray Gell-Mann, with whom he once discussed it. It is the phenomenon where you read an article on a subject with which you are familiar and think, “The writer doesn’t know what they’re talking about.” You then go on to the next article and accept it uncritically. I think something similar happens when people read the outputs of these models.
I have read claims that these answers would be acceptable from undergraduates, but not from graduate students. Either these professors are not very good teachers, or the quality of undergraduates has declined and the professors are unwilling to fail most of the class.
The software does not return answers. It returns strings of characters which resemble answers.
The biggest danger with the current technology is that stupid, credulous, or lazy humans will blindly accept the “answers” these models return.

