15. Hallucinations, or AI don’t do facts
Language models are completely unsuitable for use as information systems because of their pervasive ‘hallucinations’.
![This one feels pensive. Only a few rectangles imply the outline of the body; she stares off into the distance.](https://assets.steadyhq.com/production/post/82c4832c-db44-4717-88b5-8270db6b4a57/uploads/images/bdcgxcmkx7/uncertain.jpg?auto=compress&w=800&fit=max&dpr=2&fm=webp)
Summary
Large Language Models have a tendency to hallucinate and fabricate nonsense in ways that are hard to detect. This makes fact-checking and correcting the output very expensive and labour-intensive. These hallucinations appear in AI-generated summaries as well, which makes these systems unsuitable for many knowledge- and information-management tasks.
Date: 25/06/2024
Topic: Intelligence Illusion