Speaker Details

Yohan Lasorsa

An open-source enthusiast and software craftsman, Yohan sees the web as the ultimate playground. With 15+ years of experience in fields such as applied research on mobile and IoT, architecture consulting, and cloud application development, he worked all the way down the low-level stacks before diving into web development. As a full-stack engineer and DIY hobbyist, he now enjoys pushing bits of JavaScript everywhere he can while sharing his passion with others.

Build your own enterprise ChatGPT with open source
Mini Lab (INTERMEDIATE level)
AI technologies, and large language models (LLMs) in particular, have been popping up like mushrooms lately. But how can you use them in your own applications?
In this workshop, we will build a chatbot that interacts with GPT-4 and implements the Retrieval Augmented Generation (RAG) pattern. Backed by a vector database, the model will answer questions in natural language and generate complete, sourced responses from your own documents. To do this, we will create a Quarkus service based on the open-source LangChain4J framework and use ChatBootAI to test our chatbot. Finally, we will deploy everything to the cloud.
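The retrieval half of the RAG pattern boils down to a nearest-neighbor search over embedding vectors. The sketch below is a simplified, stdlib-only stand-in for what the vector database and embedding model do in the workshop; the tiny hand-made vectors and sample sentences are purely illustrative, not part of the workshop material.

```java
// Simplified sketch of RAG retrieval: an in-memory "vector store" ranked by
// cosine similarity. In the workshop, a real vector database and a
// HuggingFace embedding model play these roles.
import java.util.Comparator;
import java.util.List;

public class VectorSearch {
    // One stored document chunk and its (illustrative) embedding vector.
    record Chunk(String text, double[] embedding) {}

    // Cosine similarity: 1.0 means same direction, 0.0 means unrelated.
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Return the k chunks whose embeddings are closest to the query embedding.
    static List<Chunk> topK(List<Chunk> store, double[] query, int k) {
        return store.stream()
                .sorted(Comparator.comparingDouble((Chunk c) -> -cosine(c.embedding(), query)))
                .limit(k)
                .toList();
    }

    public static void main(String[] args) {
        List<Chunk> store = List.of(
                new Chunk("Invoices are due within 30 days.", new double[]{0.9, 0.1, 0.0}),
                new Chunk("Our office is closed on Fridays.", new double[]{0.1, 0.8, 0.2}),
                new Chunk("Late payments incur a 2% fee.",    new double[]{0.8, 0.2, 0.1}));

        double[] query = {0.85, 0.15, 0.05}; // pretend embedding of "When do I pay?"
        for (Chunk c : topK(store, query, 2)) {
            System.out.println(c.text());
        }
    }
}
```

A real embedding model maps text to vectors with hundreds of dimensions, and a vector database replaces the linear scan with an approximate index, but the ranking idea is the same.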
After a short introduction to language models (how they work and their limitations) and to prompt engineering, you will:
  • Create a knowledge base: local HuggingFace LLMs, embeddings, a vector database, and semantic search 
  • Use LangChain4J to implement the RAG (Retrieval Augmented Generation) pattern 
  • Create a Quarkus API to interact with the LLM: OpenAI / Azure OpenAI
  • Use ChatBootAI to interact with the Quarkus API
  • Improve performance through prompt engineering
  • Containerize the application
  • Deploy the containerized application to the Cloud
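The "augmentation" half of the pattern, touched on in the RAG and prompt-engineering steps above, consists of stuffing the retrieved chunks into the prompt so the model answers only from your documents and can cite its sources. LangChain4J automates this wiring; the hand-rolled template below is only an illustration of the idea, and its instruction wording is an assumption, not the workshop's actual prompt.

```java
// Hand-rolled illustration of prompt "stuffing" for RAG: retrieved chunks
// become numbered sources the model is instructed to cite.
import java.util.List;

public class RagPrompt {
    static String buildPrompt(String question, List<String> chunks) {
        StringBuilder sb = new StringBuilder();
        sb.append("Answer the question using ONLY the sources below. ")
          .append("Cite sources as [1], [2], ... ")
          .append("If the answer is not in the sources, say you don't know.\n\n");
        for (int i = 0; i < chunks.size(); i++) {
            sb.append("[").append(i + 1).append("] ").append(chunks.get(i)).append("\n");
        }
        sb.append("\nQuestion: ").append(question).append("\nAnswer:");
        return sb.toString();
    }

    public static void main(String[] args) {
        String prompt = buildPrompt(
                "When are invoices due?",
                List.of("Invoices are due within 30 days.",
                        "Late payments incur a 2% fee."));
        System.out.println(prompt);
    }
}
```

Tuning this template (instructions, source formatting, refusal behavior) is exactly the kind of prompt engineering the workshop uses to improve answer quality.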
At the end of the workshop, you will have a clearer understanding of large language models and how they work, as well as ideas for using them in your own applications. You will also know how to build a working knowledge base and chatbot, and how to deploy them to the cloud.