Talks

Making GraalVM work with the entire JVM ecosystem has historically meant one thing: manual, painful configuration of reflection metadata. We decided to automate this by using AI to generate comprehensive test suites for over 1,000 key JVM libraries, collecting the reflection metadata along the way. We learned that "asking the LLM" is not enough.
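
To make the pain concrete, here is a minimal sketch of what that manual configuration looks like when done programmatically with GraalVM's Feature API (the alternative is hand-written reflect-config.json files). The JsonMapper class name is a placeholder for any library type that is reached via reflection; this is an illustration of the general mechanism, not the speakers' pipeline.

    import org.graalvm.nativeimage.hosted.Feature;
    import org.graalvm.nativeimage.hosted.RuntimeReflection;

    // Built into the image with: native-image --features=ReflectionRegistrationFeature ...
    public class ReflectionRegistrationFeature implements Feature {
        @Override
        public void beforeAnalysis(BeforeAnalysisAccess access) {
            // Every class a library touches reflectively must be declared up front;
            // anything missing makes reflective lookups fail at run time in the native image.
            Class<?> mapper = access.findClassByName("com.example.json.JsonMapper"); // placeholder class
            RuntimeReflection.register(mapper);
            RuntimeReflection.register(mapper.getDeclaredConstructors());
            RuntimeReflection.register(mapper.getDeclaredMethods());
        }
    }

Multiply this by every reflective access path in over 1,000 libraries and the case for automation makes itself.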

In this session, we dive deep into the architecture of an autonomous test-generation pipeline. We move beyond simple prompting and demonstrate a feedback loop in which GraalVM analysis and coverage data directly guide AI agents to uncover hidden code paths and edge cases.
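
The description above can be read as a loop: generate tests, compile and run them under instrumentation, then feed the resulting coverage and analysis signals back into the next prompt. The sketch below illustrates that shape only; LlmClient, CoverageReport, and the helper methods are hypothetical stand-ins, not the actual pipeline presented in the talk.

    import java.util.List;

    // Illustrative only: the types and methods below are hypothetical stand-ins.
    public class FeedbackLoop {

        interface LlmClient {
            String generateTests(String prompt);
        }

        record CoverageReport(double lineCoverage, List<String> uncoveredMethods) {}

        private final LlmClient llm;

        FeedbackLoop(LlmClient llm) {
            this.llm = llm;
        }

        String generateUntilCovered(String libraryDescription, double targetCoverage, int maxRounds) {
            String tests = llm.generateTests("Write JUnit tests for: " + libraryDescription);
            for (int round = 0; round < maxRounds; round++) {
                CoverageReport report = compileRunAndMeasure(tests);
                if (report.lineCoverage() >= targetCoverage) break;
                // The key idea: concrete compile-time and runtime metrics go back into the
                // context window, pointing the model at code paths it has not exercised yet.
                String feedback = "Line coverage is " + report.lineCoverage()
                        + ". These methods are still uncovered: " + report.uncoveredMethods()
                        + ". Extend the test suite to reach them.";
                tests = llm.generateTests(feedback + "\n\nCurrent tests:\n" + tests);
            }
            return tests;
        }

        private CoverageReport compileRunAndMeasure(String tests) {
            // Placeholder: a real pipeline would compile the tests, run them under a coverage
            // agent, and parse the report together with the GraalVM reachability metadata.
            throw new UnsupportedOperationException("wire up your build and coverage tooling here");
        }
    }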

You will learn:
  • The Architecture: How to orchestrate agents to minimize token costs while maximizing coverage.
  • The Feedback Loop: Techniques for injecting compile-time and runtime metrics back into the context window to increase coverage.
  • The Benchmark Results: A transparent look at the cost (USD), time, and code coverage of different techniques and state-of-the-art models (GPT-5 vs. open-source models) when applied to real-world Java libraries.

Join us for a data-driven journey into the future of automated compatibility testing.
Vojin Jovanovic
Oracle
Vojin is an engineer at Oracle, where he leads projects in GraalVM developer experience and ML-based static profiling. Beyond his core work, he explores orchestration of AI agent teams to automate complex tasks. He holds a PhD in embedded domain-specific languages from the Scala Lab at EPFL.