Ofer Mendelevitch

Developer Relations
Vectara

Presentation Title:

Why do LLMs hallucinate?

Presentation Summary:

Since ChatGPT took the industry by storm, everyone has been working to implement LLM-powered applications and to understand how best to apply this incredible innovation within their own business and for the benefit of their customers. One of the key issues with LLMs is hallucination: the tendency of large language models to make up responses that may be inaccurate or even completely incorrect.

In this talk I will discuss why hallucination occurs and some of the ways to address it, such as retrieval-augmented generation (also known as grounded generation). Finally, I'll show a demo of an LLM-powered application for asking questions about recent news, which uses Vectara Grounded Generation to mitigate hallucinations.
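To make the idea concrete, here is a minimal sketch of the retrieval-augmented generation pattern the summary describes: retrieve passages relevant to the question, then constrain the LLM to answer only from them. The keyword-overlap retriever and prompt wording below are illustrative assumptions, not Vectara's actual API or implementation.

```python
# Sketch of retrieval-augmented (grounded) generation.
# The retriever and prompt format are toy illustrations, not Vectara's API.

def retrieve(query, documents, top_k=2):
    """Rank documents by word overlap with the query (toy retriever;
    a real system would use semantic/vector search)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def grounded_prompt(query, documents):
    """Build a prompt that instructs the LLM to answer only from the
    retrieved passages, reducing the chance of hallucination."""
    passages = retrieve(query, documents)
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using ONLY the passages below. "
        "If the answer is not in the passages, say you don't know.\n\n"
        f"{context}\n\nQuestion: {query}\nAnswer:"
    )

docs = [
    "Vectara provides grounded generation to reduce LLM hallucinations.",
    "The Technion is located in Haifa, Israel.",
    "Large language models predict the next token in a sequence.",
]
print(grounded_prompt("How does Vectara reduce hallucinations?", docs))
```

The key design point is that the final prompt carries both the retrieved evidence and an explicit instruction to stay within it, so the model's answer is grounded in the supplied facts rather than its parametric memory.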

ABOUT | Ofer Mendelevitch

Ofer Mendelevitch leads developer relations at Vectara. He has extensive hands-on experience in machine learning, data science, and big data systems across multiple industries, and has focused on developing products using large language models since 2019. Prior to Vectara, he built and led data science teams at Syntegra, Helix, LendUp, Hortonworks, and Yahoo!. Ofer holds a B.Sc. in computer science from the Technion and an M.Sc. in EE from Tel Aviv University, and is the author of "Practical Data Science with Hadoop" (Addison-Wesley).