A System for Minimizing LLM Hallucinations

June 5, 3:15 PM - 3:45 PM
Grand Ballroom Salon B

When it comes to LLMs, hallucinations are a fact of life. To develop trustworthy AI, teams need to proactively find and fix hallucinations at scale, and that starts with evaluations.

Join Galileo's Atin Sanyal as he discusses LLM hallucinations: how they occur, how to monitor for them, and how teams can set up an end-to-end system for minimizing hallucinations in GenAI applications.

About the speaker

Atin Sanyal

Co-Founder and Chief Technology Officer, Galileo