Staff Developer Advocate for Snyk, Java Champion, Oracle Ace Pro, and Software Engineer with over a decade of hands-on experience in creating and maintaining software. He is passionate about Java, (Pure) Functional Programming and Cybersecurity. Brian is a JUG leader for the Virtual JUG and the NLJUG. He also co-leads the DevSecCon community and is a community manager for Foojay. He is a regular international speaker on mostly Java-related conferences like JavaOne, Devnexus, Devoxx, Jfokus, JavaZone and many more. Besides all that, Brian is a military reserve for the Royal Netherlands Air Force and a Taekwondo Master / Teacher.
As developers, we’re embracing AI and large language models (LLMs) in our applications more than ever. However, there’s an increasing concern we need to be aware of: prompt injection. This sneaky attack can undermine our AI systems by manipulating the input to produce unintended outputs.
In this session, we’ll break down what prompt injection really means and look at some common techniques attackers use, like instruction overrides and hidden prompts. But we won't stop there; we’ll also explore advanced challenges, including escalation techniques that can exacerbate the risks.
Most importantly, we won’t just identify the problem. We’ll dive into practical steps you can take to mitigate these risks and keep your AI interactions secure. Join us at Devoxx UK to gain insights that will help you stay ahead in AI security and ensure your applications remain robust against these emerging threats.
Searching for speaker images...
