Cedric Clyburn (@cedricclyburn), Senior Developer Advocate at Red Hat, is an enthusiastic software developer with a background in Kubernetes, DevOps, and container tools. Focused on open-source software, he both contributes (e.g., Podman, vLLM) and enjoys speaking, with prior appearances at Devoxx, WeAreDevelopers, The Linux Foundation, and more. Cedric also spends (too much) time creating video and written content that helps developers learn emerging technologies, with over 2M views online. He’s based in New York City and is an organizer of the local Kubernetes Community Day.
Most of us like to do local development. It means we’re in control of our dependencies, network issues and latency, configuration, and cost. And we love to say “It works on my machine,” don’t we? 🙂
Now, with the advent of AI-driven development, doing purely local development has become more challenging than ever.
In this session we’ll take a look at various ways to keep doing local, network-optional development, even with AI:
* Run AI models locally
* Evaluate different code assistants that can work with these locally running models
* Explore how to infuse AI capabilities from local models into our code
* Compare models from different vendors, and at different sizes, to find the right balance between performance and accuracy
* Weigh the pros and cons of using local vs. remote models
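To make the "infuse AI capabilities from local models into our code" point concrete, here is a minimal sketch of calling a locally running model from plain Java. It assumes a local runtime (such as Ollama, llama.cpp, or vLLM) exposing an OpenAI-compatible chat endpoint; the URL, port, and model name are illustrative assumptions, not specifics from the session.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LocalModelClient {

    // Builds the JSON body for an OpenAI-compatible /v1/chat/completions call.
    static String buildChatRequest(String model, String prompt) {
        return """
               {"model": "%s", "messages": [{"role": "user", "content": "%s"}]}
               """.formatted(model, prompt).strip();
    }

    public static void main(String[] args) {
        // Model name is an assumption; use whatever you have pulled locally.
        String body = buildChatRequest("llama3.1", "Write a haiku about Kubernetes.");
        System.out.println(body);

        // Ollama's default port is assumed here; other runtimes differ.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/v1/chat/completions"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // Uncomment once a local model server is actually running:
        // HttpResponse<String> response = HttpClient.newHttpClient()
        //         .send(request, HttpResponse.BodyHandlers.ofString());
        // System.out.println(response.body());
    }
}
```

Because the endpoint speaks the same protocol as hosted APIs, switching between a local and a remote model is largely a matter of changing the base URL.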
Come to this session to learn how to keep development local while leveraging AI to optimize our development flow, our code, and the functionality we ship!
Thanks to open source, the past year has brought a fundamental change: developers and enterprises are moving away from proprietary, closed-source models. To save costs, prioritize privacy, and allow for customization, they are building, testing, and deploying their own open models. However, this journey can feel overwhelming. Which foundation model should I use? How do I connect my model to existing data sources, or build agentic capabilities, to start seeing real value from AI, especially in an already existing Java application?
The key to navigating this emerging path is adopting the flexibility, transparency, and collaboration of open source that many of us are already familiar with. We'll walk through the critical aspects of implementing AI features with LangChain4J, along with observability (OpenTelemetry), testing (Promptfoo), CI/CD (Tekton), and more. Join us as we get hands-on with language models and use open technologies to take control of our own AI journey!
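As a taste of the testing side, prompt-level checks of the kind Promptfoo supports can be declared in a small YAML config. This is a minimal sketch, not from the session itself; the provider id, prompt, and assertion values are assumptions.

```yaml
# promptfooconfig.yaml (sketch): evaluate a prompt against a local Ollama model
prompts:
  - "Summarize the following release note in one sentence: {{note}}"

providers:
  - ollama:llama3.1   # assumes the model has been pulled locally

tests:
  - vars:
      note: "Version 2.4 adds OpenTelemetry tracing to all HTTP endpoints."
    assert:
      - type: contains
        value: "OpenTelemetry"
```

Declarative configs like this let prompt regressions be caught in CI, alongside the Tekton pipelines mentioned above.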
