
AI is not hallucinating

We say that AI is hallucinating when it generates half-truths, or things that are plainly wrong.

When our kids come home telling us something they have heard, read, or thought up on their own, and we know it to be wrong, do we tell them that they are hallucinating?

Hallucination is defined as “to affect with visions or imaginary perceptions”, to “perceive what is not there”. There is a big gap between hallucinating and presenting something based on either misinformation or a lack of knowledge and information.

With kids, it’s the latter. Their minds have not fully developed, they have not learned everything there is to know, and it is in their nature to explore, think and come up with explanations for the world they see around them.

We would never say that our kids are hallucinating.

Saying LLMs are hallucinating grants the technology properties it does not have. An LLM does not have a sentient mind that can go off on its own and produce thoughts or visual perceptions. The models are just incorrect. The algorithms fail to put the data together correctly, and they need further training and development to produce correct results. That’s it. There is no hallucination; there is an incorrect representation of the available data.

Let’s not assign LLMs sentient properties to excuse the current state of the models. They are not perfect, and that is fine. It means we need to develop and refine them further. The last thing we should do is excuse their errors as hallucinations and treat that as acceptable.