Locations: Xiamen (P. R. China), Beijing (P. R. China), Changchun (P. R. China), Bremen (Germany), Frankfurt am Main (Germany), Düsseldorf (Germany), Tokyo (Japan)
Contact: linkedin.com/in/xiaolishen/ | [email protected]
❖ 2024/03 - present: Sr. AI/ML Specialist, AI Global Black Belt, Microsoft, Tokyo, Japan
❖ 2021 - 2024: Solutions Architect (focus area: Machine Learning), Amazon Web Services, Tokyo, Japan
❖ 2017 - 2021: Tech Lead / Software Architect / Sr. Software Engineer, Fast Retailing, Tokyo, Japan
❖ 2011 - 2016: Fullstack Application Developer / Creative Technologist, various companies in Germany
Hobbies: Cello, travel, movies, languages (CN, EN, JP, DE, FR)
Phi-3 is available on the Azure AI Model Catalog, Hugging Face, Ollama, NVIDIA NIM, and ONNX Runtime.
- Groundbreaking quality/cost ratio at scale
- Runs everywhere: GPUs, CPUs, and edge devices
- Long-context and image support (Phi-3-vision, 4.2B)
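As a minimal illustration of the "runs everywhere" point, the sketch below loads Phi-3-mini locally with the Hugging Face transformers library. The checkpoint name microsoft/Phi-3-mini-4k-instruct is real, but the prompt and generation settings are example values only; the same weights can equally be pulled through Ollama, ONNX Runtime, or the Azure AI Model Catalog.

```python
# Minimal local-inference sketch with Hugging Face transformers.
# Assumes transformers, torch, and (for device_map="auto") accelerate are installed.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "microsoft/Phi-3-mini-4k-instruct"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",      # fp16/bf16 on GPU, fp32 on CPU
    device_map="auto",       # place layers on GPU if available, otherwise CPU
    trust_remote_code=True,  # older transformers releases need Phi-3's custom code
)

generator = pipeline("text-generation", model=model, tokenizer=tokenizer)

messages = [
    {"role": "user",
     "content": "In one sentence, why are small language models useful on-device?"}
]
# Build the Phi-3 chat prompt from the tokenizer's own chat template.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
output = generator(prompt, max_new_tokens=128, do_sample=False, return_full_text=False)
print(output[0]["generated_text"])
```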
Figure: model quality (MMLU average, roughly 60-85 on the y-axis) versus inference cost (1K tokens per $, retail, log10 on the x-axis); higher is better, further right is cheaper. Plotted models: Phi-3 Mini, Phi-3 Small, Phi-3 14B, GPT-4, GPT-4 Turbo, GPT-4o, Claude-3 Opus, Claude-3 Sonnet, Claude-3 Haiku, Gemini Pro, Mistral Tiny, Mixtral Small, Llama-2 13B, and Llama-2 70B.
Phi-1 (1.3B), 2023/06
- Focused on coding: > 50% on HumanEval and MBPP
- Paper: Textbooks Are All You Need
Phi-1.5 (1.3B), 2023/09
- Added commonsense reasoning in natural language
- On-par performance on NLP tasks with models 5x larger (e.g., Llama 2-7B, Vicuna-13B)
- Paper: Textbooks Are All You Need II: phi-1.5 technical report
Phi-2 (2.7B), 2023/12
- Augmented data sources
- Near-SOTA performance among models smaller than 13B (e.g., Llama 2-13B, Mistral-7B)
- Blog: Phi-2: The surprising power of small language models
Phi-3 family, 2024/04 (updated 2024/06)
- Sizes: 3.8B, 7B, 14B, and 4.2B (Vision)
- Context lengths: 4K and 128K
- SOTA open SLMs with multi-modality
- Paper: Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone
Figure: Phi models (from left to right: phi-1.5, phi-2, phi-3-mini, phi-3-small) versus the Llama-2 family of models (7B, 13B, 34B, 70B) that were trained on the same fixed data; log of MMLU error plotted against log of model size.
Textbooks are (still) all you need: high-quality training data improves SLMs and deviates from standard scaling laws.
- Phi-1: 7B unique tokens of textbook-quality code-language data
  • 6B tokens of deduplicated, GPT-4-filtered code data from The Stack and StackOverflow
  • 1B tokens of GPT-3.5-generated Python textbook data
- Phi-1.5: Phi-1's data + 20B tokens of synthetic, textbook-like common-sense and general-knowledge data
  • Seeded with 20K carefully selected topics
  • Used web samples in prompts for diversity
- Phi-2
  • Synthetic data specifically created to teach common-sense reasoning and general knowledge
  • Carefully selected web data, filtered by educational value and content quality
Data-optimal regime: focus on the quality of data for a given scale. (An illustrative quality-filtering sketch follows below.)
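The actual Phi filtering pipeline is not public, so the following is only an illustrative sketch of classifier-based quality filtering: the Document type, score_fn, and the 0.8 threshold are hypothetical names standing in for however educational value was actually scored.

```python
# Illustrative sketch of "educational value" filtering for pre-training data.
# The scoring function, threshold, and data model are assumptions for
# demonstration only; the real Phi data pipeline is not public.
from dataclasses import dataclass
from typing import Callable, Iterable, List


@dataclass
class Document:
    text: str
    source: str  # e.g. "the-stack", "stackoverflow", "web"


def filter_by_educational_value(
    docs: Iterable[Document],
    score_fn: Callable[[str], float],  # e.g. an LLM judge or a small distilled classifier
    threshold: float = 0.8,
) -> List[Document]:
    """Keep only documents whose quality score clears the threshold."""
    kept = []
    for doc in docs:
        # score_fn returns a value in [0, 1]; in practice a cheap classifier
        # distilled from LLM judgments keeps the cost of scoring billions of
        # tokens manageable.
        if score_fn(doc.text) >= threshold:
            kept.append(doc)
    return kept


if __name__ == "__main__":
    # Toy scorer: favor documents that explain rather than merely state.
    toy_score = lambda text: 0.9 if "because" in text else 0.2
    docs = [
        Document("x = 1", "web"),
        Document("We sort first because binary search needs ordered input.", "web"),
    ]
    print([d.text for d in filter_by_educational_value(docs, toy_score)])
```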
Training data: publicly available data filtered according to educational level, plus synthetic, LLM-generated data.
Two-phase pre-training
- Phase 1: General knowledge & language understanding
  • Data: primarily web-based, heavily filtered towards textbook-quality data
  • Goal: teach general knowledge and language skills
- Phase 2: Logical reasoning & niche skills
  • Data: filtered web data (a subset of Phase 1) and synthetic data
  • Goal: enhance logical reasoning, math, coding, and specialized skills
Two-stage post-training
- Stage 1: Supervised fine-tuning (SFT) for instruction following
  • Data: curated high-quality data across domains (math, coding, reasoning, conversation, safety)
  • Goal: improve domain-specific knowledge and the ability to follow user instructions across use cases
- Stage 2: Direct Preference Optimization (DPO)
  • Data: preference data in chat format, plus reasoning and Responsible AI (RAI) data
  • Goal: steer the model away from unwanted behavior, enhance robustness and safety, and turn it into an efficient AI assistant (see the DPO loss sketch below)
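To make Stage 2 concrete, here is a minimal sketch of the DPO objective itself, computed from per-sequence log-probabilities under the trainable policy and a frozen reference model. The tensor names and beta value are illustrative, and a real run would use a preference-optimization trainer (e.g. TRL's DPOTrainer) rather than this hand-rolled loss.

```python
# Minimal DPO loss sketch (tensor names and beta are illustrative assumptions).
# Inputs are summed log-probabilities of the chosen/rejected responses under
# the trainable policy and a frozen reference model.
import torch
import torch.nn.functional as F


def dpo_loss(
    policy_chosen_logps: torch.Tensor,    # shape (batch,)
    policy_rejected_logps: torch.Tensor,  # shape (batch,)
    ref_chosen_logps: torch.Tensor,       # shape (batch,)
    ref_rejected_logps: torch.Tensor,     # shape (batch,)
    beta: float = 0.1,
) -> torch.Tensor:
    """-log sigmoid(beta * ((pi_w - ref_w) - (pi_l - ref_l))), averaged over the batch."""
    chosen_rewards = policy_chosen_logps - ref_chosen_logps
    rejected_rewards = policy_rejected_logps - ref_rejected_logps
    logits = beta * (chosen_rewards - rejected_rewards)
    # Pushes the policy to prefer the chosen response more strongly than the
    # reference model does, without an explicit reward model.
    return -F.logsigmoid(logits).mean()


if __name__ == "__main__":
    b = 4
    loss = dpo_loss(torch.randn(b), torch.randn(b), torch.randn(b), torch.randn(b))
    print(loss.item())
```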
Where SLMs fit: they can perform well at simple tasks, and are a good match for
- Offline environments, on-device or on-prem, where local inference is needed
- Latency-bound scenarios where fast response times are critical
- Cost-constrained tasks and use cases, particularly simpler ones
- Resource-constrained environments
- Select tasks whose performance can be improved via fine-tuning (vs. a large model out of the box); see the LoRA sketch below
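On the last point, parameter-efficient fine-tuning is the usual route for adapting a small model to a narrow task. Below is a minimal LoRA setup sketch using the peft library with the microsoft/Phi-3-mini-4k-instruct checkpoint; the rank, alpha, and target module names are illustrative defaults rather than a tuned recipe, and the actual training loop (e.g. transformers.Trainer or TRL's SFTTrainer) is omitted.

```python
# Minimal LoRA fine-tuning setup sketch (hyperparameters are illustrative).
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto", trust_remote_code=True
)

lora_config = LoraConfig(
    r=16,             # adapter rank (assumption; tune per task)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["qkv_proj", "o_proj"],  # Phi-3 attention projections; verify against the checkpoint
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total weights

# From here, hand the model to a standard trainer (e.g. transformers.Trainer or
# TRL's SFTTrainer) together with task-specific instruction data.
```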
Phi-3 models outperform language models of the same size and larger. Phi-3-vision outperforms larger models such as Claude-3 Haiku and Gemini 1.0 Pro V across general visual reasoning, OCR, and table and chart understanding tasks.
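For completeness, here is a hedged inference sketch for Phi-3-vision on a chart-understanding style question. The <|image_1|> placeholder and processor call pattern follow the public Hugging Face model card as best recalled and should be verified against the current card; the image URL, question, and generation settings are assumptions for demonstration.

```python
# Illustrative Phi-3-vision inference sketch; verify prompt format and
# processor usage against the microsoft/Phi-3-vision-128k-instruct model card.
import requests
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Phi-3-vision-128k-instruct"
device = "cuda" if torch.cuda.is_available() else "cpu"

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", trust_remote_code=True
).to(device)

# Placeholder image URL: swap in a real chart, table, or document scan.
url = "https://example.com/sales_chart.png"
image = Image.open(requests.get(url, stream=True).raw)

messages = [
    {"role": "user",
     "content": "<|image_1|>\nWhat trend does this chart show? Answer briefly."}
]
prompt = processor.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
inputs = processor(prompt, [image], return_tensors="pt").to(device)

output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Strip the prompt tokens before decoding the model's answer.
new_tokens = output_ids[:, inputs["input_ids"].shape[1]:]
print(processor.batch_decode(new_tokens, skip_special_tokens=True)[0])
```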