Talks

Observability is the ability to measure the current state of a system. The rapid emergence of LLMs and GenAI applications in production scenarios means we need tools that capture not only logs from our applications, but also traces and metrics that help us understand the usage of, and errors returned by, LLMs within our application ecosystem.

Join me as I dive into best practices for observing production applications that use LLMs. I'll walk through an example of instrumenting an AI agent application written in TypeScript with OpenLit to generate OpenTelemetry signals, and the data we can capture to help identify and remediate common issues in production GenAI applications.
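
As a taste of the kind of instrumentation the talk covers, here is a minimal sketch of initialising OpenLit in a TypeScript application so that LLM calls emit OpenTelemetry traces and metrics. The option names (otlpEndpoint, applicationName, environment), the endpoint, and the OpenAI call are illustrative assumptions based on OpenLit's documented initialisation pattern, not an excerpt from the talk itself.

```typescript
// Minimal sketch (assumed API): initialise OpenLit before the rest of the app
// so that supported LLM client libraries are auto-instrumented with OpenTelemetry.
// Option names are assumptions; check the OpenLit docs for your SDK version.
import Openlit from "openlit";
import OpenAI from "openai";

Openlit.init({
  otlpEndpoint: "http://localhost:4318", // OTLP/HTTP collector endpoint (assumed option name)
  applicationName: "ai-agent-demo",      // shows up as service metadata on emitted signals
  environment: "production",
});

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function answer(question: string): Promise<string> {
  // With instrumentation in place, each chat completion should produce a span
  // carrying GenAI attributes such as model name, token usage, and latency.
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: question }],
  });
  return completion.choices[0]?.message?.content ?? "";
}

answer("What can I see in the traces for this request?").then(console.log);
```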
Carly Richmond
Elastic
Carly is Developer Advocate Lead at Elastic, based in London, UK. Before joining Elastic in 2022, she spent over 10 years as a software engineer at a large investment bank, specialising in front-end web development and agility. She is a UI developer who dabbles in writing back-end services, a speaker, and a regular blogger.

She enjoys cooking, photography, drinking tea, and chasing after her young son in her spare time.