Join our panel of experts as they explore advanced RAG (Retrieval-Augmented Generation) techniques.
Discover how the integration of information retrieval and generative models is enabling AI systems to generate contextually rich and coherent responses and be truly useful in production applications.
Since ChatGPT took the industry by storm, everyone has been working to implement LLM-powered applications and to understand how best to apply this incredible innovation within their own business and for the benefit of their customers. One of the key issues with LLMs is hallucination: the tendency of large language models to make up responses that are inaccurate or even completely incorrect.
In this talk I will discuss why hallucination occurs and some of the ways to address it, such as retrieval-augmented generation (aka grounded generation). Finally, I'll show a demo of an LLM-powered application for asking questions about recent news that uses Vectara Grounded Generation to mitigate hallucinations.
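To make the pattern concrete, here is a minimal sketch of the retrieval-augmented generation flow the talk describes: retrieve the passages most relevant to a question, then constrain the generation step to those passages. Everything here is illustrative and invented for this sketch — the toy corpus, the keyword-overlap relevance score, and the prompt wording are assumptions; a production system such as Vectara uses neural retrieval and its own API, and the actual LLM call is left out.

```python
# Illustrative sketch of retrieval-augmented (grounded) generation.
# Retrieval below is naive keyword overlap, standing in for a real
# neural retrieval system; the LLM call itself is omitted.

def score(query: str, doc: str) -> int:
    """Toy relevance score: count query words appearing in the document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_grounded_prompt(query: str, corpus: list[str]) -> str:
    """Ground the prompt in retrieved text to discourage hallucination."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return (
        "Answer using ONLY the facts below; say 'I don't know' otherwise.\n"
        f"Facts:\n{context}\n"
        f"Question: {query}\nAnswer:"
    )

# Hypothetical mini-corpus for demonstration.
corpus = [
    "Large language models can hallucinate plausible but false facts.",
    "Grounded generation constrains answers to retrieved source text.",
    "Retrieval systems rank documents by relevance to a query.",
]
prompt = build_grounded_prompt("What is grounded generation?", corpus)
print(prompt)
```

The key design point is the last step: instead of asking the model to answer from its parametric memory, the prompt instructs it to answer only from the retrieved facts, which is what makes the generation "grounded".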
Ofer Mendelevitch leads developer relations at Vectara. He has extensive hands-on experience in machine learning, data science, and big data systems across multiple industries, and has focused on developing products using large language models since 2019.
Prior to Vectara, he built and led data science teams at Syntegra, Helix, Lendup, Hortonworks, and Yahoo! Ofer holds a B.Sc. in computer science from Technion and an M.Sc. in EE from Tel Aviv University, and is the author of "Practical Data Science with Hadoop" (Addison Wesley).