In this video, we explore the concept of hallucination in the context of natural language processing and generation.
We go through examples of ChatGPT Hallucinations.
ChatGPT hallucinations are critical because many people take its responses at face value, treating them as facts without checking whether they are actually correct.
In OpenAI's human evaluations on their API prompt distribution, InstructGPT makes up facts ("hallucinates") less often than GPT-3 and generates more appropriate outputs.
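Since the video's point is "don't take the response at face value," here is a minimal sketch of what a first-pass check might look like in practice. It assumes the legacy openai Python client (pre-1.0 ChatCompletion API); the model name, the example question, and the digit-based heuristic are illustrative placeholders, not a real fact-checking system.

import re

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; in practice, load from an environment variable

def ask_chatgpt(question: str) -> str:
    """Send a single-turn question to the chat completions endpoint."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
        temperature=0,  # more deterministic output, but does NOT prevent hallucination
    )
    return response["choices"][0]["message"]["content"]

def needs_verification(answer: str) -> bool:
    """Crude illustrative heuristic: flag any answer containing digits
    (dates, quantities, counts), since quantities are a common hallucination
    target (see the Zhao et al. reference below). A real pipeline would
    cross-check claims against a trusted source instead."""
    return bool(re.search(r"\d", answer))

if __name__ == "__main__":
    answer = ask_chatgpt("How many parameters does GPT-3 have?")
    print(answer)
    if needs_verification(answer):
        print("WARNING: response contains specific figures; verify against a primary source.")

The heuristic deliberately targets numbers because quantity errors (dates, counts, measurements) are among the easiest hallucinations to spot and verify, which is exactly the failure mode the second reference below studies.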
References:
Aligning Language Models to Follow Instructions (OpenAI blog): https://openai.com/blog/instruction-following/
Reducing Quantity Hallucinations in Abstractive Summarization (Findings of EMNLP 2020): https://aclanthology.org/2020.findings-emnlp.203/