Talks

Deploying Large Language Models (LLMs) in enterprise environments demands more than just frontier models: it requires robust guardrails to ensure safety, compliance, and ethical AI usage. Without proper safeguards, LLMs can generate harmful content, bypass security constraints, or introduce regulatory risks.

Join us as we explore how to integrate AI safety frameworks into your applications using tools like Granite Guardian, Llama Guard, Safety Checker, IBM Risk Atlas, TrustyAI, and others. We'll live demo how these open source solutions detect and mitigate risks, ensuring that AI systems remain trustworthy and aligned with enterprise requirements. From filtering harmful prompts before they reach the LLM, to blocking unauthorized agentic behavior, to detecting risks, you'll learn how to integrate these safeguards within Kubernetes, creating scalable, policy-driven protections that adapt to evolving AI risks.
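The "filter before it reaches the LLM" pattern can be sketched as a simple gate in front of the serving endpoint. This is a minimal illustration, not the API of Llama Guard, Granite Guardian, or any tool named above: the blocklist classifier, `guard_check`, and `handle_request` are all hypothetical placeholders, and a real deployment would call a dedicated guard model instead.

```python
# Minimal sketch of a pre-LLM guardrail: classify each incoming prompt
# before forwarding it to the model. The substring blocklist below is a
# hypothetical stand-in for a real guard model (e.g. Llama Guard or
# Granite Guardian), which would be invoked over a serving endpoint.

BLOCKED_TOPICS = ("build a weapon", "bypass authentication")  # illustrative only

def guard_check(prompt: str) -> bool:
    """Return True when the prompt is considered safe to forward."""
    lowered = prompt.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def handle_request(prompt: str) -> str:
    """Gate the prompt; only safe prompts reach the (stubbed) LLM call."""
    if not guard_check(prompt):
        return "Request blocked by safety policy."
    # A real deployment would call the model server here,
    # e.g. an inference route exposed inside the Kubernetes cluster.
    return f"LLM response to: {prompt}"
```

In a cluster, the same gate would typically run as a sidecar or gateway in front of the model service, so the policy can be updated independently of the model.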
Roberto Carratala
Red Hat
Roberto is a Principal AI Architect in the AI Business Unit, specializing in container orchestration platforms (OpenShift & Kubernetes), AI/ML, DevSecOps, and CI/CD. With over 10 years of experience in system administration, cloud infrastructure, and AI/ML, he holds two MSc degrees, in Telco Engineering and in AI/ML.