15. Hallucinations, or AI don’t do facts
Language models are completely unsuitable for use as information systems because of their pervasive ‘hallucinations’.
Summary
Large Language Models have a tendency to hallucinate and fabricate nonsense in ways that are hard to detect. This makes fact-checking and correcting the output very expensive and labour-intensive. These hallucinations appear in AI-generated summaries as well, which makes these systems unsuitable for many knowledge- and information-management tasks.
Date: June 25, 2024
Topic: Intelligence Illusion