Talk

AI technologies, and particularly large language models (LLMs), have been popping up like mushrooms lately. But how can you use them in your applications? 
  
In this workshop, we will use a chatbot to interact with GPT-4 and implement the Retrieval Augmented Generation (RAG) pattern. Using a vector database, the model will be able to answer questions in natural language and generate complete, sourced responses from your own documents. To do this, we will create a Quarkus service based on the open-source LangChain4J and ChatBootAI frameworks to test our chatbot. Finally, we will deploy everything to the Cloud. 
  
After a short introduction to language models (how they work and their limitations) and to prompt engineering, you will:
  • Create a knowledge base: local HuggingFace LLMs, embeddings, a vector database, and semantic search 
  • Use LangChain4J to implement the RAG (Retrieval Augmented Generation) pattern (a minimal sketch follows this list) 
  • Create a Quarkus API to interact with the LLM: OpenAI / Azure OpenAI (a Quarkus sketch follows the abstract)
  • Use ChatBootAI to interact with the Quarkus API
  • Improve performance thanks to prompt engineering
  • Containerize the application
  • Deploy the containerized application to the Cloud
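
To give you a feel for the hands-on part, here is a minimal, self-contained sketch of the RAG pattern with plain LangChain4J (no Quarkus yet): it splits and embeds a local document into an in-memory vector store, then wires GPT-4 and a retriever so that relevant segments are injected into the prompt. Class and builder names follow a 0.3x-era LangChain4J API and may differ in other versions; the file path, question and system prompt are illustrative assumptions, not the workshop's exact code.

// Minimal RAG sketch with plain LangChain4J. API names are version-dependent.
import dev.langchain4j.data.document.Document;
import dev.langchain4j.data.document.loader.FileSystemDocumentLoader;
import dev.langchain4j.data.document.splitter.DocumentSplitters;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.embedding.AllMiniLmL6V2EmbeddingModel; // local embedding model (langchain4j-embeddings-all-minilm-l6-v2 artifact; package varies by version)
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.rag.content.retriever.EmbeddingStoreContentRetriever;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.SystemMessage;
import dev.langchain4j.store.embedding.EmbeddingStoreIngestor;
import dev.langchain4j.store.embedding.inmemory.InMemoryEmbeddingStore;

import java.nio.file.Path;

public class RagSketch {

    // The system message is a small example of prompt engineering:
    // it instructs the model to answer only from the retrieved documents.
    interface Assistant {
        @SystemMessage("Answer only from the provided documents and cite your sources.")
        String answer(String question);
    }

    public static void main(String[] args) {
        // 1. Build the knowledge base: split a local document, embed the segments
        //    and store them (in memory here; the workshop uses a real vector database).
        EmbeddingModel embeddingModel = new AllMiniLmL6V2EmbeddingModel();
        InMemoryEmbeddingStore<TextSegment> store = new InMemoryEmbeddingStore<>();

        Document document = FileSystemDocumentLoader.loadDocument(Path.of("docs/my-content.txt"));
        EmbeddingStoreIngestor.builder()
                .documentSplitter(DocumentSplitters.recursive(300, 30))
                .embeddingModel(embeddingModel)
                .embeddingStore(store)
                .build()
                .ingest(document);

        // 2. Wire GPT-4 (OpenAI here; Azure OpenAI works the same way) and a retriever
        //    that performs the semantic search over the vector store.
        OpenAiChatModel chatModel = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .modelName("gpt-4")
                .build();

        Assistant assistant = AiServices.builder(Assistant.class)
                .chatLanguageModel(chatModel)
                .contentRetriever(EmbeddingStoreContentRetriever.builder()
                        .embeddingStore(store)
                        .embeddingModel(embeddingModel)
                        .maxResults(3)
                        .build())
                .build();

        // 3. Ask a question: relevant segments are retrieved and injected into the
        //    prompt before the LLM generates its answer.
        System.out.println(assistant.answer("What does the document say about pricing?"));
    }
}

In the workshop, the in-memory store is replaced by a real vector database and the assistant is exposed through a Quarkus API (see the sketch after the abstract).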
  
At the end of the workshop, you will have a clearer understanding of large language models and how they work, as well as ideas for using them in your applications. You will also know how to create a functional knowledge base and chatbot, and how to deploy them in the cloud.
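As an illustration of the "Create a Quarkus API" step, here is a hedged sketch of how such an assistant could be exposed over HTTP with the Quarkus LangChain4J extension, so that ChatBootAI (or curl) can talk to it. The @RegisterAiService annotation comes from the quarkus-langchain4j extension; the endpoint path, method names and wiring are illustrative assumptions, not the workshop's exact code.

// Hypothetical Quarkus resource exposing the chatbot over HTTP.
import io.quarkiverse.langchain4j.RegisterAiService;
import jakarta.inject.Inject;
import jakarta.ws.rs.Consumes;
import jakarta.ws.rs.POST;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;

@RegisterAiService // the extension builds the LLM client from application.properties
interface ChatService {
    String chat(String question);
}

@Path("/chat")
public class ChatResource {

    @Inject
    ChatService chatService; // CDI injects the generated AI service implementation

    @POST
    @Consumes(MediaType.TEXT_PLAIN)
    @Produces(MediaType.TEXT_PLAIN)
    public String ask(String question) {
        return chatService.chat(question);
    }
}

The LLM configuration (OpenAI or Azure OpenAI endpoint, key and model name) would then go into application.properties, and the same service can be containerized and deployed to the cloud unchanged.
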
Antonio Goncalves
Microsoft
Antonio is a Principal Software Engineer at Microsoft, based in Paris. He works across the Jakarta EE, Spring, Micronaut, Quarkus and, more recently, AI landscapes. From distributed systems to microservices and the cloud, he now helps his customers develop the architecture that suits them best.
Aside from working for his customers, Antonio writes books (on Java EE and Quarkus), creates online courses, speaks at international conferences (Devoxx, JavaOne, GeeCon…), writes technical papers and articles, and co-hosts the French technical podcast Les Cast Codeurs. He also co-created the Paris JUG, Voxxed Microservices and Devoxx France. For all his work for the community, Antonio has been named a Java Champion.
Yohan Lasorsa
Microsoft
An open-source enthusiast and software craftsman, Yohan sees the web as the ultimate playground. With a background of 15+ years in fields such as applied research on mobile and IoT, architecture consulting and cloud application development, he worked all the way down the low-level stacks before diving into web development. As a full-stack engineer and DIY hobbyist, he now enjoys pushing bits of JavaScript everywhere he can while sharing his passion with others.
Sandra Ahlgrimm
Microsoft
Sandra Ahlgrimm is a Senior Cloud Developer Advocate at Microsoft who specializes in Java and AI. She is passionate about helping developers deploy their Java workloads on Azure with ease and efficiency. Sandra and her team, the Java Advocates, work closely with product teams and developers to ensure that Azure services are tested and optimized for developers’ needs. They also drive awareness and provide education to the community on the capabilities of these services.
Sandra’s work is focused on enabling developers to take full advantage of the platform’s capabilities. She is an expert in deploying Java workloads on Azure, whether it’s through App Service, AKS, Azure Spring Apps, Azure Functions, or Azure Container Apps. Her expertise in AI and machine learning helps developers to build intelligent applications that can scale with ease on Azure.
Sandra is a passionate advocate for the developer community and is always happy to chat about tech-related news and issues. You can connect with her on LinkedIn or Twitter.