When OpenAI launched their new software, ChatGPT, in late 2022, they included a warning that the software had a tendency to “hallucinate”, by which they meant, well, that it makes mistakes. Hence the warning label (often ignored): use it with caution, check the quality of the output, etc.
What OpenAI was doing at the same time, consciously or not, was anthropomorphising their software, giving it the attributes and appearance of being human, of being somehow alive.
Clever marketing.
What their software developers knew was that, unless coded to act differently (e.g. as with LLMs from Chinese sources, which are restricted from politically sensitive topics), their LLM will always make a choice as to the next token. Even if the probability of that choice is very low, vanishingly close to zero. They knew that the everyday user would read the output as fact even when it was not accurate. So they gave this software quirk a name (first used in Germany?) borrowed from human behaviour, “hallucination”. And began to refer to ChatGPT and other models as if they were people.
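The mechanism behind that “choice” is just sampling from a probability distribution. Here is a minimal, hypothetical sketch in Python (a toy vocabulary and made-up scores, not OpenAI’s actual code): the sampler always returns some token, however improbable, because the model only ranks likelihoods, it never checks facts.

```python
import numpy as np

rng = np.random.default_rng()

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Pick the index of the next token from a vector of model scores."""
    scaled = logits / temperature
    scaled -= scaled.max()                       # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return int(rng.choice(len(probs), p=probs))  # a choice is always made

# Toy vocabulary and made-up scores, purely for illustration.
vocab = ["Paris", "London", "Berlin", "banana"]
logits = np.array([3.0, 1.5, 1.0, -2.0])

print(vocab[sample_next_token(logits)])          # usually "Paris"...
print(vocab[sample_next_token(logits, 2.0)])     # ...but "banana" can appear
```

Nothing in that loop distinguishes a true continuation from a false one; an unlikely token is still a legal output, which is the whole “hallucination” in a nutshell.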
We humans like to anthropomorphise things, whether living (our pets) or not (our possessions). To make the inanimate animate. And to inflate what is simply a mathematical error into something that, in our subconscious minds, is so big, so unusual, that we fear it and revere it.
Funny, that.