Is Java suitable for low-latency programming where every microsecond counts? This deep-dive session proves that Java can compete with C/C++ in the high-performance arena when properly optimized.
Drawing from real-world experience building financial trading systems and high-frequency applications, we'll explore battle-tested techniques for achieving sub-10 microsecond response times in Java applications. You'll learn how the JVM actually works under the hood - from JIT compilation phases to memory layouts - and how to leverage this knowledge for maximum performance.
Key topics covered include:
*) Understanding and optimizing Java's memory layout (heap, stack, metaspace) for minimal GC impact
*) Advanced heap management strategies: object pooling, canonical objects, lazy initialization, and using primitive collections
*) Thread synchronization techniques that avoid performance penalties: lock-free programming with CAS operations, avoiding false sharing, and thread affinity (a minimal sketch follows this list)
*) Leveraging specialized libraries like LMAX Disruptor for inter-thread messaging, Chronicle Map for ultra-fast key-value storage, and Chronicle Queue for microsecond-latency message passing
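For a flavour of the lock-free item above, here is a minimal sketch using only the JDK's java.util.concurrent.atomic package; the PaddedCounter class and its field names are illustrative, not code from the session.

    import java.util.concurrent.atomic.AtomicLongFieldUpdater;

    // Sketch: a lock-free counter updated via a CAS retry loop, with manual
    // padding so that two counters placed next to each other are unlikely to
    // share a 64-byte cache line. Names are illustrative only.
    public class PaddedCounter {
        // Padding before the hot field (a typical cache line is 64 bytes).
        long p1, p2, p3, p4, p5, p6, p7;

        private volatile long value;

        // Padding after the hot field.
        long q1, q2, q3, q4, q5, q6, q7;

        private static final AtomicLongFieldUpdater<PaddedCounter> UPDATER =
                AtomicLongFieldUpdater.newUpdater(PaddedCounter.class, "value");

        // Classic CAS loop: read, compute, attempt to swap, retry on contention.
        public long increment() {
            for (;;) {
                long current = UPDATER.get(this);
                long next = current + 1;
                if (UPDATER.compareAndSet(this, current, next)) {
                    return next;
                }
            }
        }

        public long get() {
            return UPDATER.get(this);
        }
    }

Hand-rolled padding like this is fragile because HotSpot may reorder fields; the @Contended annotation (enabled for application classes with -XX:-RestrictContended) or java.util.concurrent.atomic.LongAdder are usually the more reliable routes.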
Through practical examples and code demonstrations, you'll discover how to reduce object allocation, minimize synchronization overhead, and structure your applications for consistent low-latency performance.
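To make the allocation-reduction theme concrete, a self-contained sketch of object pooling combined with a primitive array (instead of a boxed collection) might look like the following; the Order and OrderPool names are invented for illustration and are not the session's code.

    import java.util.ArrayDeque;

    // Sketch of two allocation-avoidance ideas from the abstract:
    // 1) reuse mutable objects through a pool instead of allocating per message,
    // 2) keep numeric data in primitive arrays rather than boxed collections.
    public class OrderPool {
        static final class Order {
            long id;
            double price;
            int quantity;

            void clear() {                 // reset state before returning to the pool
                id = 0; price = 0.0; quantity = 0;
            }
        }

        private final ArrayDeque<Order> free = new ArrayDeque<>();

        OrderPool(int size) {
            for (int i = 0; i < size; i++) {
                free.push(new Order());    // pre-allocate up front, outside the hot path
            }
        }

        Order acquire() {
            Order o = free.poll();
            return o != null ? o : new Order(); // fall back to allocation if exhausted
        }

        void release(Order o) {
            o.clear();
            free.push(o);
        }

        public static void main(String[] args) {
            OrderPool pool = new OrderPool(1024);

            // Primitive array instead of List<Double>: no boxing, contiguous memory.
            double[] prices = {101.5, 101.7, 101.6};

            for (double p : prices) {
                Order o = pool.acquire();
                o.id = 1; o.price = p; o.quantity = 100;
                // ... hand the order to the processing pipeline ...
                pool.release(o);
            }
        }
    }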
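And as a pointer toward the libraries named in the last bullet, a bare-bones LMAX Disruptor publish/consume loop, assuming the Disruptor DSL (3.x/4.x) is on the classpath, might look roughly like this; PriceEvent is a made-up event type, not the session's code.

    import com.lmax.disruptor.RingBuffer;
    import com.lmax.disruptor.dsl.Disruptor;
    import com.lmax.disruptor.util.DaemonThreadFactory;

    // Rough sketch of single-producer publishing through the LMAX Disruptor.
    public class DisruptorSketch {
        static final class PriceEvent {
            long instrumentId;
            double price;
        }

        public static void main(String[] args) {
            int bufferSize = 1024; // ring buffer size must be a power of two

            Disruptor<PriceEvent> disruptor = new Disruptor<>(
                    PriceEvent::new, bufferSize, DaemonThreadFactory.INSTANCE);

            // Consumer: runs on its own thread, processes events in sequence order.
            disruptor.handleEventsWith((event, sequence, endOfBatch) ->
                    System.out.printf("id=%d price=%.2f%n", event.instrumentId, event.price));

            RingBuffer<PriceEvent> ringBuffer = disruptor.start();

            // Producer: claim a slot, mutate the pre-allocated event, publish.
            long seq = ringBuffer.next();
            try {
                PriceEvent e = ringBuffer.get(seq);
                e.instrumentId = 42;
                e.price = 101.25;
            } finally {
                ringBuffer.publish(seq);
            }

            disruptor.shutdown();
        }
    }

The ring buffer's pre-allocated events are the library's own answer to the allocation problem above: producers mutate existing slots rather than allocating a new object per message.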
Stefan Angelov
Tradu
Stefan Angelov is an Engineering Manager at Tradu and a seasoned Software Architect with over 10 years of experience building high-performance, scalable systems in the fintech and gaming industries. He has held architecture and leadership roles at companies including Trading 212, Nexo, and PokerStars, where he designed distributed systems handling millions of transactions.
Stefan is passionate about Java, Spring Boot, Apache Kafka, and cloud-native architectures on AWS. Beyond his day job, he co-founded Hacker4e, an academy teaching kids to code through Scratch and web development, proof that he believes great software starts with great fundamentals.
A member of the FinOps Foundation community and a regular on the tech conference circuit, Stefan enjoys sharing hard-won lessons on system design, event-driven architecture, and engineering leadership.
