Video: https://www.youtube.com/watch?v=K_Y9wvGjNKw
Large Language Models (LLMs) and in-context learning have introduced a new paradigm for developing natural language understanding systems: prompts are all you need! Prototyping has never been easier, but not all prototypes give a smooth path to production. In this talk, I'll share the most important lessons we've learned from solving real-world information extraction problems in industry, and show you a new approach and mindset for designing robust and modular NLP pipelines in the age of Generative AI.
Breaking down larger business problems into actionable machine learning tasks is one of the central challenges of applied natural language processing. I will walk you through example applications and practical solutions, and show you how to use LLMs to their fullest potential, how and where to integrate your custom business logic, and how to maximize efficiency, transparency, and data privacy.
https://explosion.ai/blog/human-in-the-loop-distillation
This blog post presents practical solutions for using the latest state-of-the-art models in real-world applications and distilling their knowledge into smaller and faster components that you can run and maintain in-house.
https://explosion.ai/blog/sp-global-commodities
A case study on S&P Global’s efficient information extraction pipelines for real-time commodities trading insights in a high-security environment using human-in-the-loop distillation.
https://explosion.ai/blog/gitlab-support-insights
A case study on GitLab’s large-scale NLP pipelines for extracting actionable insights from support tickets and usage questions.
https://explosion.ai/blog/applied-nlp-thinking
This blog post discusses some of the biggest challenges in applied NLP, including how to translate business problems into machine learning solutions and the distinction between utility and accuracy.
https://ines.io/blog/window-knocking-machine-test/
How will technology shape our world going forward? And what tools and products should we build? When imagining what the future could look like, it helps to look back in time and compare past visions to our reality today.
https://prodi.gy/docs/large-language-models
Prodigy comes with preconfigured workflows for using LLMs to speed up and automate annotation, and to create datasets for distilling large generative models into more accurate, smaller, faster, and fully private task-specific components.
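To make the workflow above concrete, here is a minimal sketch of a spacy-llm-style pipeline config that could drive such an annotation setup. The exact registry names and labels are assumptions for illustration (e.g. the task/model versions and the COMMODITY/LOCATION labels); check the spacy-llm and Prodigy documentation for the current values.

```ini
# config.cfg — hypothetical spacy-llm pipeline with an LLM-backed NER component
[nlp]
lang = "en"
pipeline = ["llm"]

[components]

[components.llm]
factory = "llm"

[components.llm.task]
# Task and label names are illustrative assumptions
@llm_tasks = "spacy.NER.v3"
labels = ["COMMODITY", "LOCATION"]

[components.llm.model]
# Model registry name is an assumption; pick one supported by your spacy-llm version
@llm_models = "spacy.GPT-4.v2"
```

A config like this can then back a human-in-the-loop recipe, where the LLM pre-annotates examples and the annotator corrects them; the corrected dataset is what you later use to train the smaller, fully private task-specific model.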