Definition
Erroneous output from an AI system (especially an LLM) in which the model generates convincing-sounding but factually incorrect information. It is an important risk of generative AI that users must be informed about as part of AI literacy.