Large Language Models (LLMs) have changed how we design software: instead of clicks and GUIs, natural language now dominates, not just via keyboard and text but also by voice. In this session, you will learn how to integrate voice-enabled AI models directly into your application and control them in real time with your voice. Thanks to Gemini Live and tool calling, new possibilities for natural-language interfaces are emerging: low-latency, bidirectional, and multilingual.
In hands-on demos, Christian Liebel from Thinktecture will show you how to address selected LLMs by voice, connect your application's functionality to them, and build smart, conversational interfaces. This is not science fiction.
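To give a flavor of the kind of integration the demos revolve around, here is a minimal sketch assuming the @google/genai JavaScript SDK and its Live API; the model name, the turnOnLights tool, and its handler are illustrative placeholders, not the session's actual demo code.

```ts
import { GoogleGenAI, Modality, Type } from '@google/genai';

// Hypothetical tool: lets the model switch on the lights in a given room.
const turnOnLights = {
  name: 'turnOnLights',
  description: 'Turns on the lights in the given room.',
  parameters: {
    type: Type.OBJECT,
    properties: {
      room: { type: Type.STRING, description: 'Name of the room' },
    },
    required: ['room'],
  },
};

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// Open a bidirectional, low-latency Live session with spoken responses and tool calling.
const session = await ai.live.connect({
  model: 'gemini-2.0-flash-live-001', // assumed model name; check the current docs
  config: {
    responseModalities: [Modality.AUDIO],
    tools: [{ functionDeclarations: [turnOnLights] }],
  },
  callbacks: {
    onmessage: (message) => {
      // When the model decides to call the tool, run your app logic and answer it.
      if (message.toolCall?.functionCalls) {
        const functionResponses = message.toolCall.functionCalls.map((fc) => ({
          id: fc.id,
          name: fc.name,
          response: { result: 'ok' }, // invoke your real functionality here
        }));
        session.sendToolResponse({ functionResponses });
      }
      // Audio chunks for playback arrive via message.serverContent (omitted here).
    },
    onerror: (e) => console.error('Live session error:', e),
    onclose: () => console.log('Live session closed'),
  },
});

// Microphone capture and streaming via session.sendRealtimeInput(...) is omitted here.
```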
Caution: Interactive voice AIs can be addictive.