Talks

In this session I will share best practices for using MCP setups and AI Agents in real development workflows 🤖. The goal is to improve both quality and security, since even with all the excitement around these tools, it is still very easy to end up with confusing interactions, accidental data exposure, or Agents that trigger actions we never intended.

I will also present reports comparing how different LLMs generate code, showing clear differences in stability, output consistency, and the amount of guidance each model needs to avoid mistakes. Along with that, I'll share recent figures showing the surprisingly low productivity gains many teams report when adopting AI tools: often just small improvements rather than the big jumps everyone expects. We will look at why this happens: unclear prompts, weak tool design, loose permissions, and wrong assumptions about what the models can really handle.

The talk focuses on practical habits that developers can apply right away: tightening tool access, simplifying and focusing context, adding lightweight validation steps, and using safety patterns that prevent the most common failures. My goal is to help teams build more reliable, productive, and secure AI workflows, making AI code generation a real benefit rather than a risky experiment 😅.
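As a small taste of the kind of pattern we will cover, here is a minimal Java sketch of combining an explicit tool allowlist with a lightweight validation step before an agent's tool call is executed. The class, record, and tool names are purely illustrative and not taken from any specific MCP SDK.

```java
import java.util.Set;
import java.util.function.Predicate;

// Minimal sketch: gate agent tool calls behind an explicit allowlist
// and a cheap validation step before anything runs.
// ToolCall and the tool names here are hypothetical, not a real MCP API.
public class GuardedToolRunner {

    record ToolCall(String toolName, String argument) {}

    // Only the tools we deliberately expose to the agent.
    private static final Set<String> ALLOWED_TOOLS = Set.of("read_file", "run_tests");

    // Lightweight validation: reject arguments that escape the workspace.
    private static final Predicate<String> SAFE_ARGUMENT =
            arg -> !arg.contains("..") && !arg.startsWith("/");

    static void execute(ToolCall call) {
        if (!ALLOWED_TOOLS.contains(call.toolName())) {
            throw new IllegalArgumentException("Tool not allowed: " + call.toolName());
        }
        if (!SAFE_ARGUMENT.test(call.argument())) {
            throw new IllegalArgumentException("Rejected argument: " + call.argument());
        }
        // ...dispatch to the real tool implementation here...
        System.out.println("Running " + call.toolName() + " with " + call.argument());
    }

    public static void main(String[] args) {
        execute(new ToolCall("read_file", "src/Main.java")); // passes both checks
        execute(new ToolCall("delete_repo", "everything"));  // throws: not allowlisted
    }
}
```

The point of the pattern is that both checks run outside the model: even a confused or manipulated Agent can only reach the tools and arguments we have deliberately allowed.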
Jonathan Vila López
Sonar
International Speaker, Java Champion, Co-founder of the JBCNConf and DevBcn conferences in Barcelona, and the AI4Devs conference in Amsterdam.

Currently working as a Staff Developer Advocate in Java at Sonar (SonarQube), focused on Code Quality, Dev Productivity, AI & Security.

I have worked as a (paid) developer for more than 30 years, using multiple languages, but for the last 15 I have focused on Java. Although I actually started when I was 14 with my Amstrad CPC 6128 🙂

I am very interested in simulated reality, psychology, philosophy, and Java.