Speaker

Alexey Naiden
CloudLinux

Alexey is a Staff Engineer at CloudLinux, building CLI tools and infrastructure that help teams ship across Java, Python, and .NET codebases. His background spans platform security at Apple, data engineering at Amazon, and ML infrastructure at Neu.ro — a thread that led naturally to his current work on LLM-powered developer tools. He's particularly interested in the messy engineering problems that emerge when you put AI agents into real production pipelines: liveness, failure containment, and treating prompts like code. Outside of work, Alexey runs an unreasonable number of self-hosted services and is always one Zigbee device away from home automation perfection.

Patterns for Developer Tools That Think
Byte Size (BEGINNER level)
Room B

We built a CLI that hands an unfamiliar Java project to an AI agent and tells it to figure out the build. It actually works — but getting there meant solving problems nobody warned us about. The agent runs a Maven build that takes two hours and the SDK gives you total silence the entire time. Your user thinks the process is hung. When the agent crashes halfway through, you've got a staging repository full of half-deployed artifacts to clean up. And the prompts that teach the agent what to do turn out to need the same versioning and review discipline as any other code in your repo.

This talk covers three patterns we extracted from building LLM-powered developer tools in production: wrapping AI agents in step-based pipelines so failures stay contained, adding heartbeat liveness so your users don't kill a process that's quietly working, and treating prompt "skills" as engineered artifacts rather than magic strings.
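The first two patterns can be sketched together: each pipeline step runs with its own failure cleanup, while a background heartbeat reports liveness during long-running work. This is a minimal illustration, not the speaker's actual CLI; names like `Step`, `run_pipeline`, and the heartbeat interval are hypothetical.

```python
import threading
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[], None]
    cleanup: Callable[[], None] = lambda: None  # undo this step's partial work on failure

def heartbeat(stop: threading.Event, interval: float = 30.0) -> None:
    """Periodically signal liveness so users don't kill a quietly working process."""
    elapsed = 0.0
    while not stop.wait(interval):  # wait() returns True once stop is set
        elapsed += interval
        print(f"  ... still working ({elapsed:.0f}s elapsed)")

def run_pipeline(steps: list[Step]) -> bool:
    """Run steps in order; on failure, clean up only the failing step and halt."""
    for step in steps:
        print(f"[{step.name}] starting")
        stop = threading.Event()
        beat = threading.Thread(target=heartbeat, args=(stop,), daemon=True)
        beat.start()
        try:
            step.run()
        except Exception as exc:
            print(f"[{step.name}] failed: {exc}; running cleanup")
            step.cleanup()  # containment: later steps never start, earlier steps stand
            return False
        finally:
            stop.set()
            beat.join()
        print(f"[{step.name}] done")
    return True
```

The key design point is that cleanup is attached per step, so a crash mid-build leaves only one step's artifacts to undo rather than an opaque half-finished deployment.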

