15. Hallucinations, or AI don’t do facts
Language models are completely unsuitable for use as information systems because of their pervasive ‘hallucinations’.
Summary
Large Language Models have a tendency to hallucinate, fabricating nonsense in ways that are hard to detect. This makes fact-checking and correcting their output expensive and labour-intensive. These hallucinations appear in AI-generated summaries as well, which makes these systems unsuitable for many knowledge- and information-management tasks.
Date: 25.06.2024
Category: Intelligence Illusion