
15. Hallucinations, or AI don’t do facts

Language models are completely unsuitable for use as information systems because of their pervasive ‘hallucinations’.

This one feels pensive. Only a few rectangles imply the outline of the body as she stares off into the distance.

Summary

Large Language Models have a tendency to hallucinate and fabricate nonsense in ways that are hard to detect. This makes fact-checking and correcting the output very expensive and labour-intensive. These hallucinations appear in AI-generated summaries as well, which makes these systems unsuitable for many knowledge- and information-management tasks.

