[CNCF Q1 2024] Agentic Installer LLMs Helm Charts by Chris Gruel @Akeyless

cncf-canada-meetups

April 17, 2024

Transcript

  1. AI DevOps Workshop: Using LLMs to Deploy Kubernetes Helm Charts

    Chris Gruel, Senior Solution Architect, Akeyless
  2. Agenda
     01 The Challenge
     02 Our Goal
     03 The How
     04 Tool Selection: Langchain, LangGraph & Chainlit
     05 Integrating Human Insights
     06 Developing Conversational Agents
     07 Demo: Agentic Installer in Action
     08 Lessons Learned
     09 Future Directions
     10 Q&A
  3. The Challenge: Helm Chart Dependencies and the Akeyless Gateway
     • What is the Gateway?
     • Deployment requirements:
       • Connectivity
       • Identify the cloud provider
       • Create the appropriate auth method
       • Create the gateway!
     • Currently, a manual process
     (Architecture diagram: applications and platforms connect through the Gateway inside the customer environment.)
  4. Our Goal: Use Function-Calling in LLMs to Create Agentic Behavior
     • What is agentic behavior?
     • Agent = an autonomous entity that can perform tasks on its own or with human supervision
     • Agentic behavior = the agent does the work for us!
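Function-calling, as used here, can be sketched in a few lines: the model receives JSON tool schemas, picks a tool, and a dispatcher runs the matching Python function. The helper names below (`detect_cloud_provider`, `create_auth_method`) are illustrative stand-ins for the installer's real steps, not code from the talk.

```python
import json

# Hypothetical deployment helpers the agent can call.
def detect_cloud_provider() -> str:
    """Stand-in for real cloud metadata-endpoint probing."""
    return "aws"

def create_auth_method(provider: str) -> str:
    """Stand-in for creating an Akeyless auth method for the provider."""
    return f"auth-method-for-{provider}"

# Tool schemas in the JSON shape function-calling APIs expect.
TOOLS = [
    {"type": "function", "function": {
        "name": "detect_cloud_provider",
        "description": "Identify which cloud the cluster runs in.",
        "parameters": {"type": "object", "properties": {}}}},
    {"type": "function", "function": {
        "name": "create_auth_method",
        "description": "Create an auth method for the given provider.",
        "parameters": {
            "type": "object",
            "properties": {"provider": {"type": "string"}},
            "required": ["provider"]}}},
]

REGISTRY = {
    "detect_cloud_provider": detect_cloud_provider,
    "create_auth_method": create_auth_method,
}

def dispatch(tool_name: str, arguments_json: str) -> str:
    """Run the function the model asked for and return its result."""
    return REGISTRY[tool_name](**json.loads(arguments_json))
```

The agentic loop is then just: send `TOOLS` with the prompt, and whenever the model emits a tool call, feed it through `dispatch` and return the result to the model.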
  5. The How
     1. Create the OpenAI agent as an expert pair programmer
     2. Use the agent to generate the structure of the code
     3. Use the Cursor IDE to fill out the details
  6. Tool Selection: Langchain and LangGraph
     • Langchain: built to abstract LLM complexities and to connect to data sources
     • Langchain benefits:
       • Abstraction
       • Flexibility
       • Monitoring
     • LangGraph: for cyclical workflows
     • => Create a human-in-the-loop chat interface
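The cyclical, human-in-the-loop workflow that LangGraph models as a graph of nodes with interrupt points can be sketched without the library. The node names and routing below are hypothetical; in real LangGraph code this would be a `StateGraph` whose edges route between an agent node and a human node.

```python
# Dependency-free sketch of the agent <-> human cycle (illustrative only).
def agent_node(state: dict) -> str:
    """Pretend LLM step: either ask the human or finish."""
    if "provider" not in state:
        state["question"] = "Which cloud provider are you on?"
        return "human"          # route to the human node
    state["result"] = f"gateway deployed on {state['provider']}"
    return "end"

def human_node(state: dict, answer_fn) -> str:
    """Pause the graph and resume with the human's answer."""
    state["provider"] = answer_fn(state.pop("question"))
    return "agent"              # cycle back to the agent node

def run_graph(answer_fn) -> str:
    """Run nodes until the graph reaches its end state."""
    state, node = {}, "agent"
    while node != "end":
        node = agent_node(state) if node == "agent" else human_node(state, answer_fn)
    return state["result"]
```

The key property, and the reason the talk picks LangGraph over a linear chain, is the cycle: control can return from the human back to the agent as many times as needed.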
  7. Choosing Chainlit for the UI
     • Streamlit: difficult to get full interactivity for chat
     • Chainlit is built for this specific use case
     • Lesson: it's a fast-moving industry. If there's a better solution, switch!
  8. Integrating Human Insights
     Give the agent the ability to ask questions:
     1. Gather information
     2. Handle exceptions
     3. Return aggregated data based on its questions and the answers it receives
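The three capabilities above can be packaged as a single "ask the human" tool that also records what it learns. This class and its names are hypothetical; in the real app the `answer_fn` would be a Chainlit prompt rather than a plain callable.

```python
# Hypothetical "ask the human" tool: gathers information on demand and
# aggregates the question/answer pairs it has collected.
class HumanInsights:
    def __init__(self, answer_fn):
        self.answer_fn = answer_fn   # e.g. a chat-UI prompt in the real app
        self.answers: dict[str, str] = {}

    def ask(self, question: str) -> str:
        """Gather one piece of information from the human (point 1)."""
        answer = self.answer_fn(question)
        self.answers[question] = answer
        return answer

    def summary(self) -> dict[str, str]:
        """Aggregated data from questions and answers (point 3)."""
        return dict(self.answers)
```

Exception handling (point 2) fits the same shape: when a tool fails, the agent calls `ask` with a question describing the failure instead of aborting.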
  9. Developing Conversational Agents
     1. Build asynchronous graphs: the list of tools at the LLM's disposal
     2. Purpose-built functions: minimize complexity for the agent
     3. The Akeyless SDK is easy to use, but we don't want the agent to worry about the token
     4. Minimize the number of inputs the agent must supply
     5. We don't want to send a token to OpenAI: the LLM should never hold actual credentials
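Points 3-5 above amount to one pattern: capture the token inside the tool so it never appears in the tool schema or the model's context. A minimal sketch, assuming a hypothetical `akeyless_create_secret` stand-in rather than the real Akeyless SDK call:

```python
import os

def akeyless_create_secret(name: str, value: str, token: str) -> str:
    """Stand-in for the real SDK call; the token stays server-side."""
    return f"created {name}"

def make_create_secret_tool():
    """Build a purpose-built tool with the credential closed over."""
    # Read the token once, locally; it is never part of the tool's inputs.
    token = os.environ.get("AKEYLESS_TOKEN", "t-demo")

    def create_secret(name: str, value: str) -> str:
        # The agent supplies only name/value, minimizing its inputs;
        # the token is injected here, outside the LLM's view.
        return akeyless_create_secret(name, value, token=token)

    return create_secret
```

Only `create_secret(name, value)` is exposed to the model, so the credential never transits OpenAI.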
  10. Lessons Learned
     1. Embrace an AI-powered IDE
     2. Build an OpenAI assistant, load the relevant knowledge, and use the assistant to output pseudo-code as a start, to be filled out with the Cursor IDE. Benefit from the better context information built into the IDE itself!
     3. Don't waste time on a dead end not suited for the purpose!
  11. Future Directions
     1. Deploy the K8s Secret Injector
     2. Detect if the External Secrets Operator is already in the cluster >> install the integration
     3. Detect if the Nginx ingress is installed in the cluster, and offer to deploy with the Nginx ingress
     4. Self-service POC: deploy gateway, database, set up secrets…
     5. Incorporate ALL public Akeyless documentation and enable RAG for instant support help
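The detection steps (items 2 and 3) reduce to scanning deployment names from the Kubernetes API for known markers. A sketch under that assumption; `deployment_names` stands in for what a real client call (e.g. the Kubernetes Python client's `AppsV1Api.list_deployment_for_all_namespaces`) would return, and the marker strings are illustrative.

```python
# Illustrative markers for the add-ons the installer would look for.
NGINX_MARKER = "ingress-nginx"
ESO_MARKER = "external-secrets"

def detect_addons(deployment_names: list[str]) -> dict[str, bool]:
    """Report which known add-ons appear among cluster deployments."""
    return {
        "nginx_ingress": any(NGINX_MARKER in n for n in deployment_names),
        "external_secrets": any(ESO_MARKER in n for n in deployment_names),
    }
```

The agent could then branch on this result: offer the External Secrets integration only when `external_secrets` is true, and propose an Nginx-fronted deployment only when `nginx_ingress` is true.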