In the short time since large language models went mainstream, we’ve seen several patterns emerge that AI researchers and developers use to build their products. These include prompt engineering, prompt templating, chain-of-thought reasoning, vectorized memory and embeddings, and more. They’ve enabled new forms of natural-language-based apps and have truly opened people’s imaginations to what is possible with AI.
This presentation will cover a few of these recipes and how to build them using Semantic Kernel, Microsoft’s latest open-source framework for developing AI and large language model applications. Semantic Kernel incorporates design patterns from the latest AI research and provides a multilingual SDK that developers can use to empower their apps with recursive reasoning, contextual memory, semantic indexing, planning, retrieval-augmented generation, and more. Come cook with us!
Currently, I’m building next-generation AI products at Microsoft through the Office of the CTO with an emphasis on multimodal and large language models.
Over the last decade, I’ve successfully launched applied ML products across industries including health, gaming, finance, defense, energy, autonomous vehicles, semiconductors, and enterprise tech.
I’ve held product management roles at SambaNova Systems and C3.ai, building out ML platforms and deep learning offerings for enterprise customers. I continue to be an active ML practitioner and was the first data science hire at Uber’s Advanced Technologies Group, where I worked on building large-scale computer vision and sensor fusion algorithms for self-driving cars.
I’m motivated to help raise the next generation of data scientists and product leaders and have taught and mentored at places like General Assembly, Springboard, and Product School to equip students for their future success. Proud alum of UC Berkeley! Go Bears!