15. Hallucinations, or AI don’t do facts

Language models are completely unsuitable for use as information systems because of their pervasive ‘hallucinations’.

This one feels pensive. Only a few rectangles imply the outline of the body as she stares off into the distance.

Summary

Large Language Models have a tendency to hallucinate and fabricate nonsense in ways that are hard to detect. This makes fact-checking and correcting the output very expensive and labour-intensive. These hallucinations appear in AI-generated summaries as well, which makes these systems unsuitable for many knowledge- and information-management tasks.
