When introducing large language models (LLMs) in the enterprise, operating without appropriate guardrails is like driving at high speed without a seatbelt: most of the time nothing seems wrong, but once an incident occurs, recovery is difficult. This presentation draws on common risks the speaker has observed in enterprise consulting practice, including sensitive information leakage, uncontrolled model outputs, and hallucinations. It then explores how to design and implement scalable validator structures using open-source tools such as Python, Guardrails.ai, and LiteLLM. Beyond the basic implementation details, it also covers how to balance risk control against cost-effectiveness when taking guardrails to production, helping enterprises build LLM application architectures that are secure, resilient, and scalable. Participants are encouraged to have some experience with LLM applications and Python development so they can follow the context and topics more quickly.
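As a taste of the validator pattern the talk covers, here is a minimal sketch assuming the Guardrails 0.x custom-validator API combined with LiteLLM's provider-agnostic `completion` call; the validator name `no-email-pii`, the class `NoEmailPII`, the regex, and the model choice are illustrative placeholders, not the speaker's actual implementation.

```python
import re
from typing import Any, Dict

from guardrails import Guard
from guardrails.validators import (
    FailResult,
    PassResult,
    ValidationResult,
    Validator,
    register_validator,
)
from litellm import completion


# A custom validator that fails when the model output leaks an email address.
# The regex is deliberately simple and only illustrative.
@register_validator(name="no-email-pii", data_type="string")
class NoEmailPII(Validator):
    def validate(self, value: Any, metadata: Dict) -> ValidationResult:
        if re.search(r"[\w.+-]+@[\w-]+\.[\w.-]+", str(value)):
            return FailResult(error_message="Output contains an email address.")
        return PassResult()


# Attach the validator to a Guard; on_fail="exception" surfaces violations loudly.
guard = Guard.from_string(validators=[NoEmailPII(on_fail="exception")])

# LiteLLM exposes one completion interface across providers, so the same
# guard can wrap whichever backing model the enterprise chooses.
response = completion(
    model="gpt-4o-mini",  # hypothetical choice; any LiteLLM-supported model works
    messages=[{"role": "user", "content": "Summarize our support policy."}],
)
guard.parse(response.choices[0].message.content)  # raises if validation fails
```

Because the validator is decoupled from the model call, the same guard can be reused across endpoints and composed with additional validators as risk requirements grow, which is the scalability angle the talk emphasizes.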
By Nero Un
A developer from Macao, currently serving as a Consultant at IBM, with practical expertise in data science, data engineering, and artificial intelligence. He graduated from Kaohsiung Medical University, holds a master’s degree in Medical Informatics from National Cheng Kung University, and previously served as an R&D engineer and TPM at a biomedical startup. Passionate about exploring technology that drives transformative change, he firmly believes in the power of technology to reshape the world.