Artificial intelligence models are becoming increasingly sophisticated, capable of generating text that is at times indistinguishable from writing authored by humans. However, these powerful systems are not infallible. One recurring issue is the "AI hallucination," in which a model produces output that is factually incorrect while sounding confident and fluent. This can occur