Infuse Intelligence into your Apps with Foundry Local
This deck is part of the presentation and live demo delivered at the Melb.Net July meetup in Melbourne on 1 July 2025. It covers the capabilities of Foundry Local through a fun app that impersonates celebrities.
Topics covered:
- Hardware-optimized models: models are optimized with hardware vendors; on Mac this means GPU acceleration on Apple Silicon.
- Foundry Local Management Service: download and run models at runtime.
- Foundry CLI & SDK: the CLI manages models, tools and agents; the SDK integrates and interacts with model management and local inference (a minimal sketch follows this list).
- Local AI agents using MCP: call local tools for smart automations.
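To make the SDK side concrete, here is a minimal C# sketch of runtime model management, assuming the Microsoft.AI.Foundry.Local NuGet package and its FoundryLocalManager type as shown in the published quickstart; treat the method names and the phi-3.5-mini alias as illustrative rather than a definitive API reference.

```csharp
using Microsoft.AI.Foundry.Local;

// Start (or attach to) the local Foundry service; this is the layer the
// Management Service exposes programmatically.
var manager = new FoundryLocalManager();
await manager.StartServiceAsync();

// Browse the catalog of models available for download.
var catalog = await manager.ListCatalogModelsAsync();
foreach (var model in catalog)
{
    Console.WriteLine(model.ModelId);
}

// Download and load a model at runtime ("phi-3.5-mini" is only an example alias).
await manager.DownloadModelAsync("phi-3.5-mini");
await manager.LoadModelAsync("phi-3.5-mini");

// Inspect what is already in the local cache.
var cached = await manager.ListCachedModelsAsync();
Console.WriteLine($"{cached.Count} model(s) cached locally.");
```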
The demo app wires three pieces together (a minimal chat completion sketch follows this list):
- C# SDK: manages the Foundry model.
- OpenAI client: connects to the actual model and performs chat completion operations.
- Foundry Local Service: programmatic access to the model cache and catalog through the SDK, API endpoints, or the CLI.
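The celebrity-impersonation demo boils down to the pattern below: let the Foundry Local SDK start the model, then hand its local endpoint and key to the standard OpenAI .NET client. This is a sketch assuming the Microsoft.AI.Foundry.Local and OpenAI NuGet packages; the model alias and the impersonation prompt are illustrative, not the demo's exact code.

```csharp
using System.ClientModel;
using Microsoft.AI.Foundry.Local;
using OpenAI;
using OpenAI.Chat;

// Download (if needed), load, and start the model via the Foundry Local SDK.
// "phi-3.5-mini" is an example alias; any catalog model works the same way.
var manager = await FoundryLocalManager.StartModelAsync("phi-3.5-mini");
var modelInfo = await manager.GetModelInfoAsync("phi-3.5-mini");

// Point the regular OpenAI client at the local, OpenAI-compatible endpoint.
var client = new OpenAIClient(
    new ApiKeyCredential(manager.ApiKey),
    new OpenAIClientOptions { Endpoint = manager.Endpoint });

var chat = client.GetChatClient(modelInfo!.ModelId);

// The "impersonation" is just a system prompt (illustrative only).
ChatCompletion completion = chat.CompleteChat(
    new SystemChatMessage("You are a famous celebrity. Stay in character."),
    new UserChatMessage("What did you have for breakfast?"));

Console.WriteLine(completion.Content[0].Text);
```

Because the local service speaks an OpenAI-compatible REST API, the same endpoint can also be called directly over HTTP or driven from the Foundry CLI.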
Key benefits:
- On-device inference: run models locally on your own hardware, reducing costs while keeping all your data on your device.
- Model customization: use preset models or your own models to meet specific requirements.
- Cost efficiency: make AI more accessible by eliminating cloud service costs.
When to use Foundry Local:
- Limited or no internet connectivity.
- Reducing cloud inference costs.
- Low-latency AI responses for real-time applications.
- Experimentation: experiment with AI models before deploying to cloud environments!