A System for Minimizing LLM Hallucinations
June 5, 3:05 PM - 3:25 PM
Grand Ballroom Salon B
When it comes to LLMs, hallucinations are a fact of life. To develop trustworthy AI, teams need to proactively find and fix hallucinations at scale. This starts with evaluations.
Join Galileo's Atin Sanyal as he discusses hallucinations: how they occur, how to monitor for them, and how teams can set up an end-to-end system for minimizing GenAI hallucinations.
About the speaker
Atin Sanyal
Co-Founder and Chief Technology Officer, Galileo