Lab) is Lablup's new AI-native application • Integration with Backend.AI Continuum, working alongside models powered by Backend.AI • Easy access to AI infrastructure, making AI models simple and accessible for all users to utilize effortlessly
with AI capabilities natively • AI as a first-class citizen at every layer • Natural language and visual inputs as the primary user input • The application consists of • Frontend: Real-time AI feedback • APIs: Request routing to inference services • LLM: Inference & model serving
of generated content created by the AI, presented outside the main chat flow • Concrete, executable deliverable objects like documents, photos, video, audio, code, and more • Claude Artifacts, Google AI Studio, and AI:DOL
used in building AI:DOL • Next.js • Frontend web app and API Routes for direct communication with LLMs • AI SDK v5 • Standardized communication with LLM models • Support for streaming, token management, error handling • AI Elements • React components designed for AI applications • Provides chat UI, Artifacts rendering, etc. • Backend.AI Continuum • Unified endpoint for multiple models • Zero-downtime service, load balancing, traffic distribution
best way of communicating detailed requirements using natural language instead of technical parameters • Express Complex Ideas Simply: state complex requirements in natural language to generate artifacts • Direct AI Access: chat provides the most intuitive way to test and utilize AI model capabilities • Instant Interaction: get immediate responses and iterate on ideas through conversational commands
AI SDK v5 to render messages with React • Use `messages` in the useChat hook • Represents the full application state needed for UI rendering • `parts` property allows rich content like text, reasoning, tools • TextUIPart, ReasoningUIPart, SourceUrlUIPart, SourceDocumentUIPart, FileUIPart, DataUIPart, StepStartUIPart • Custom user messages
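The part-based rendering above can be sketched as a switch over `part.type`. The `UIPart` union below is a simplified assumption of AI SDK v5's `UIMessage.parts` shape, not the library's full type, and plain strings stand in for React elements to keep the sketch self-contained.

```typescript
// Simplified assumption of AI SDK v5's UIMessage parts (not the full union).
type UIPart =
  | { type: 'text'; text: string }
  | { type: 'reasoning'; text: string }
  | { type: 'file'; url: string; mediaType: string };

interface UIMessage {
  id: string;
  role: 'user' | 'assistant';
  parts: UIPart[];
}

// In a real app this switch returns React elements; strings keep it runnable here.
function renderPart(part: UIPart): string {
  switch (part.type) {
    case 'text':
      return part.text;
    case 'reasoning':
      return `[reasoning: ${part.text}]`;
    case 'file':
      return `[file ${part.mediaType}: ${part.url}]`;
  }
}

function renderMessage(msg: UIMessage): string {
  return msg.parts.map(renderPart).join('\n');
}

const example: UIMessage = {
  id: 'm1',
  role: 'assistant',
  parts: [
    { type: 'reasoning', text: 'Plan the answer' },
    { type: 'text', text: 'Here is the document.' },
  ],
};
console.log(renderMessage(example));
```

Dispatching on `part.type` is what lets one message carry text, reasoning, files, and tool output side by side instead of a single string.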
transform the UX from "loading spinner for 30+ seconds" to "watching the response in real time" • Responses should appear natural to the user • Tune AI SDK options for streaming • StreamTextTransform (line, word) • experimental_throttle • Be ready to render incomplete markdown • Repeated re-rendering occurs during code highlighting • Streamdown (used by AI Elements)
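The word-level chunking idea can be illustrated with a hand-rolled transform. This is a sketch of the behavior, not AI SDK v5's implementation of `smoothStream({ chunking: 'word' })`: buffer incoming token chunks and only emit up to the last whitespace, so the UI never renders half a word.

```typescript
// Hand-rolled word-chunking transform (illustrative; not the AI SDK internals).
async function* smoothByWord(chunks: AsyncIterable<string>): AsyncGenerator<string> {
  let buffer = '';
  for await (const chunk of chunks) {
    buffer += chunk;
    // Emit everything up to the last whitespace; keep the partial word buffered.
    const m = buffer.match(/^([\s\S]*\s)(\S*)$/);
    if (m) {
      yield m[1];
      buffer = m[2];
    }
  }
  if (buffer) yield buffer; // flush the final word
}

// Helper: turn an array of raw model chunks into an async stream for the demo.
async function* toChunks(pieces: string[]): AsyncGenerator<string> {
  for (const p of pieces) yield p;
}

(async () => {
  const words: string[] = [];
  for await (const w of smoothByWord(toChunks(['Hel', 'lo wor', 'ld!']))) words.push(w);
  console.log(JSON.stringify(words));
})();
```

Emitting on word boundaries is what makes streamed text look naturally typed rather than jittering as partial tokens arrive.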
conversation flowing with the user, while the tool stream handles tool results • Main LLM Stream (User-Facing) • User communication only • Decides when to call tools; describes tool usage • Tool Stream Responsibilities • Executing the tool properly (with other models) • Managing separate data streams via onData • Use clear prompts for both models • "A document has been created and is now visible to the user"
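The split above can be simulated schematically. This is not AI:DOL's actual code: the event shapes and the `onData` callback name are assumptions loosely modeled on AI SDK v5 data parts, shown only to make the two-channel idea concrete.

```typescript
// Schematic: chat tokens and tool results travel on separate channels.
type MainEvent =
  | { kind: 'text'; text: string }
  | { kind: 'tool-call'; tool: string; input: unknown };

type ToolEvent = { kind: 'tool-result'; tool: string; output: unknown };

async function runTurn(
  mainEvents: MainEvent[],
  tools: Record<string, (input: unknown) => Promise<unknown>>,
  onText: (t: string) => void, // chat stream: user communication only
  onData: (e: ToolEvent) => void, // data stream: tool results only
): Promise<void> {
  for (const ev of mainEvents) {
    if (ev.kind === 'text') {
      onText(ev.text);
    } else {
      // Tool work (possibly a second model) runs off the chat stream;
      // only its result is published, on the data channel.
      const output = await tools[ev.tool](ev.input);
      onData({ kind: 'tool-result', tool: ev.tool, output });
    }
  }
}

(async () => {
  const chat: string[] = [];
  const data: ToolEvent[] = [];
  await runTurn(
    [
      { kind: 'tool-call', tool: 'createDocument', input: { title: 'Draft' } },
      { kind: 'text', text: 'A document has been created and is now visible to the user.' },
    ],
    { createDocument: async () => ({ id: 'doc-1' }) },
    (t) => chat.push(t),
    (e) => data.push(e),
  );
  console.log(chat.length, data.length);
})();
```

The point of the separation is that tool payloads never interleave with chat tokens, so the main model's prompt can stay focused on narrating ("a document has been created…") while the tool channel carries the artifact itself.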
for code generation • Transpiling and executing JavaScript/TypeScript code generated by AI models • Monaco Editor and `importmap` • Running JS code in a Node.js environment and Python with external packages • CodeContainer (WebContainer and Pyodide) • Updating code directly in AI:DOL
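At its smallest, executing model-generated JavaScript means evaluating a string with a controlled environment. The sketch below is illustrative only: AI:DOL's real sandbox uses WebContainer/Pyodide as described above, and `new Function` here is a scoping trick, not a security boundary.

```typescript
// Illustrative only: run generated JS with a captured console (NOT a sandbox).
function runGeneratedJs(code: string): string[] {
  const lines: string[] = [];
  const capturedConsole = {
    log: (...args: unknown[]) => lines.push(args.map(String).join(' ')),
  };
  // The generated code sees our console binding instead of the global one.
  new Function('console', code)(capturedConsole);
  return lines;
}

console.log(runGeneratedJs('const xs = [1, 2, 3]; console.log(xs.reduce((a, b) => a + b, 0));'));
```

A real implementation isolates the evaluation in a WebContainer (Node.js) or Pyodide (Python) runtime precisely because in-process evaluation like this gives generated code full access to the host.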
also generate images via prompts • The image model allows us to generate images from a prompt • OpenAI DALL-E, Google Gemini 2.5 Flash Image, and more • Saving blob image data on the server and displaying it • Consider image rendering with partial images
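Image model calls (for example via AI SDK v5's `experimental_generateImage`) typically return base64-encoded data. The helpers below (illustrative names, not AI:DOL's code) show the two sides mentioned above: decoding to bytes for server-side storage, and building a data URL for immediate client rendering.

```typescript
// Decode base64 image data to raw bytes for persisting on the server.
function imageToBytes(base64: string): Uint8Array {
  return Uint8Array.from(Buffer.from(base64, 'base64'));
}

// Build a data URL so the client can render the image without a round trip.
function imageToDataUrl(base64: string, mediaType: string = 'image/png'): string {
  return `data:${mediaType};base64,${base64}`;
}

// 'AQID' is the base64 encoding of the bytes 1, 2, 3.
console.log(imageToBytes('AQID'), imageToDataUrl('AQID'));
```

For partial-image rendering, the same data-URL path can be re-run on each progressively larger base64 payload the provider streams, replacing the previous preview.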
content, e.g. video, audio • Tool-calling with human in the loop • MCP in a sandbox • Code sandbox on the server side • Context compacting • Quota management • Workflow pipelines • Working and testing with more open models