apidays
December 22, 2024

apidays Paris 2024 - The Power of Generative AI in API Product Testing and Validations, Anubha Gaur, Quest Diagnostics

The Power of Generative AI in API Product Testing and Validations
Anubha Gaur, Executive Director, DevSecOps and API Management at Quest Diagnostics

apidays Paris 2024 - The Future API Stack for Mass Innovation
December 3 - 5, 2024

------

Check out our conferences at https://www.apidays.global/

Do you want to sponsor or talk at one of our conferences?
https://apidays.typeform.com/to/ILJeAaV8

Learn more on APIscene, the global media made by the community for the community:
https://www.apiscene.io

Explore the API ecosystem with the API Landscape:
https://apilandscape.apiscene.io/


Transcript

  1. The Power of Generative AI in API Product Testing and Validations
     Anubha Gaur, Executive Director @Quest Diagnostics, Platform Engineering (API Management/DevSecOps/SRE/QE)
  2. Intro
     Executive Director, Platform Engineering, Quest Diagnostics
     • Driving cloud innovation through DevSecOps, SRE, and Quality Engineering for scalable, resilient API systems.
     • Passionate about API Products and cloud optimization to reduce operational overhead and unlock business potential.
     Whenever time permits, I enjoy reading and exploring new places. Connect with me on LinkedIn: [anubha-gaur]
  3. Quest is the leader in diagnostic testing, insights, and services
     • Serves 50% of US hospitals and physicians
     • Serves 1/3 of the US adult population annually, and 50% within 3 years
     • >60B patient data points
     • ~13,600 phlebotomists across ~7,300 access points
     • ~4,900 Med Techs
     • ~600 MDs and PhDs
     • ~9,400 health and wellness professionals
     • ~4,000 couriers and 20 aircraft
     • 26 regional and esoteric labs
  4. The Power of Generative AI in API Product Testing and Validations
     Agenda:
     1. API Product Lifecycle
     2. Challenges in API Product Testing
     3. Understanding Generative AI and adoption approach
     4. The Role of Generative AI in API Product Testing
     5. Challenges and Considerations
     6. Key Takeaways and Action Plan
  5. The Challenges in API Product Testing:
     1. Manual test creation is slow and error-prone.
     2. Difficult to identify edge cases.
     3. Increasing API complexity.
     4. Limited time for comprehensive testing.
     5. API performance validation.
     6. Gaps in test data coverage and documentation.
  6. Inefficiencies in the API Testing Process:
     • Test case generation is manual.
     • Missing test cases (positive and negative).
     • Extensive effort is needed for test data generation.
     • Inconsistencies in test execution.
     • No monitoring of test case coverage.
     • Lack of API test documentation; in some projects, there was no documentation at all.
  7. Understanding Generative AI
     • What is Generative AI: AI models that generate human-like content based on patterns in training data. In API testing, it automates test case generation, data creation, and validation (a small illustrative sketch follows this slide).
     • Key Technologies:
       • NLP: enables AI to understand and generate human-readable text.
       • ML: learns from data patterns for accurate predictions.
     • AI adoption approach: AI governance; identify use cases; implementation and tracking of outcomes.
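
     To make the "data creation" point above concrete, here is a minimal Python sketch of prompting a generative model for synthetic API test data. It assumes the openai client library; the model name, prompt wording, and the order schema are illustrative placeholders added for this write-up, not material from the talk.

        # Ask a generative model for synthetic test data that conforms to
        # (and deliberately violates) a JSON Schema. Schema and model name
        # are hypothetical placeholders.
        import json
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        order_schema = {  # hypothetical request-body schema
            "type": "object",
            "properties": {
                "orderId": {"type": "string"},
                "testCode": {"type": "string", "enum": ["CBC", "LIPID", "A1C"]},
                "priority": {"type": "string", "enum": ["ROUTINE", "STAT"]},
            },
            "required": ["orderId", "testCode"],
        }

        prompt = (
            "Generate 5 JSON objects of realistic synthetic test data that conform "
            "to this JSON Schema, plus 2 objects that intentionally violate it. "
            "Return a JSON array only.\n" + json.dumps(order_schema)
        )

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        print(response.choices[0].message.content)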
  8. AI Adoption Approach:
     1. AI Governance
        • AI governance helps the business make better decisions.
        • Define policies; develop SOPs.
        • Approval process for AI investment based on customer feedback and business strategy.
     2. Identify Use Cases
        • Repeatable tasks, where the major operational savings are.
        • Engineering productivity.
        • Identify customer pain points where GenAI can bring value.
        • Start with small use cases and the ones that are expected to be successful.
     3. Implementation
        • Build a core engineering team.
        • Timebox the implementation approach against an API testing baseline.
        • Track efficiency and cost avoidance against the baseline.
  9. AI Governance Council Structure:
     • Business Strategy: start with customer pain points and key features and differentiators.
     • AI Sponsor: ensures alignment with Quest’s overall strategy; secures funding and other needed resources.
     • Steering Committee: assesses and approves AI use cases to ensure conformity to Quest processes, policies, and requirements.
     • Policy Team: develops and promotes policies to foster innovation while aligning with Quest requirements.
     • Process Team: establishes, extends, and evaluates processes and tools to ensure effective and efficient utilization.
     • Enablement Team: fosters awareness and capabilities through training and communication.
     • Business Leads: lead AI use case identification, approvals, deployments, and monitoring.
     • Risk Management: assesses and monitors exposure and risk to the organization.
  10. The Role of Generative AI in API Testing
     • API Test Case Creation:
       • Generates diverse test cases from API specifications.
       • Covers functional tests, edge cases, and other test coverage.
     • Dynamic Data Generation:
       • TDM: generates realistic synthetic test data.
     • API Test Case Execution and Validation:
       • Integrates seamlessly into CI/CD pipelines.
       • Automatically validates API outputs against expected results.
     • Proactive Defect Detection and Fix:
       • Predicts failure points using historical data.
       • Generates preventive test scenarios to avoid recurring issues.
     Flow: API Spec → Test Cases & Data → API Production Validation → Testing Insights (a rough sketch of this flow follows below).
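
     The flow above can be sketched roughly in Python as follows. This is a hedged illustration, not Quest's actual pipeline: the spec file name, base URL, model name, and the JSON shape of the generated cases are all assumptions, and a production setup would review and persist the generated tests before letting them run in CI/CD.

        # Sketch of: API Spec -> generated test cases & data -> validation.
        # Assumed names: openapi.yaml, https://api.example.com, gpt-4o-mini.
        import json
        import requests                # pip install requests
        import yaml                    # pip install pyyaml
        from openai import OpenAI      # pip install openai

        client = OpenAI()

        # 1. The API specification is the source of truth for generation.
        with open("openapi.yaml") as f:
            spec = yaml.safe_load(f)

        # 2. Ask the model for functional, edge-case, and negative tests as JSON.
        prompt = (
            "From this OpenAPI spec, return a JSON array of test cases. Each item "
            "must have: method, path, params (object), expected_status (integer). "
            "Cover happy paths, edge cases, and invalid inputs.\n" + json.dumps(spec)
        )
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder
            messages=[{"role": "user", "content": prompt}],
        )
        # A real pipeline would parse defensively and hold these for human review.
        test_cases = json.loads(reply.choices[0].message.content)

        # 3. Execute against the running API, e.g. inside a CI/CD job.
        BASE_URL = "https://api.example.com"  # placeholder
        failures = []
        for case in test_cases:
            resp = requests.request(
                case["method"], BASE_URL + case["path"], params=case.get("params")
            )
            if resp.status_code != case["expected_status"]:
                failures.append((case, resp.status_code))

        print(f"{len(test_cases) - len(failures)}/{len(test_cases)} generated tests passed")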
  11. Challenges and Considerations:
     • Context Awareness Limitations
       Challenge: Copilot generates code based on prompts but lacks full understanding of the API context or specifications.
       Consideration: Provide detailed and accurate prompts, including API documentation links or schemas, to guide Copilot effectively (see the sketch below).
     • Test Case Quality and Coverage
       Challenge: Generated test cases lack complete coverage for complex APIs.
       Consideration: Manually review and enhance Copilot-generated test cases to ensure they meet quality and coverage standards.
     • Integration with Existing Testing Frameworks
       Challenge: Copilot's code suggestions were not perfectly aligned with the other testing tools or frameworks used (e.g., Postman).
       Consideration: Adapt Copilot-generated code to fit your existing frameworks or pipelines.
     • High Learning Curve for New Engineers
       Challenge: Engineers new to Copilot struggled to craft effective prompts.
       Consideration: Provide training and best practices for optimizing Copilot's use in API testing.
     • Over-Reliance on AI Suggestions
       Challenge: Teams over-rely on Copilot's outputs without validating the logic or its suitability for the API context.
       Consideration: Emphasize using Copilot as an assistant, not a replacement for expertise, and ensure code review for critical tests.
     • Security and Compliance
       Challenge: Generated code might inadvertently include insecure patterns or non-compliant practices.
       Consideration: Regular auditing and review to ensure security and compliance standards are followed.
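
     The first consideration above, providing prompts that include API documentation or schemas, is often applied by pasting the relevant spec excerpt next to the code an assistant such as Copilot is asked to complete. The sketch below illustrates that pattern with a hypothetical endpoint, payload, and status codes; none of it is a real Quest API, and per the other rows of this slide the generated assertions still need human review.

        # Pattern: give an in-editor assistant the API context it otherwise
        # lacks by pasting the relevant spec excerpt as a comment, then review
        # what it completes. Endpoint, fields, and status codes are hypothetical.
        #
        # --- Context for the assistant (excerpt from the API spec) ---
        # POST /v1/orders
        #   requestBody: orderId (string, required), testCode (enum: CBC, LIPID, A1C)
        #   responses: 201 created, 400 validation error, 401 unauthorized
        # --------------------------------------------------------------
        import requests

        BASE_URL = "https://api.example.com"  # placeholder


        def test_create_order_rejects_unknown_test_code():
            """Negative case that the schema excerpt above makes obvious."""
            payload = {"orderId": "ORD-123", "testCode": "NOT_A_REAL_CODE"}
            resp = requests.post(f"{BASE_URL}/v1/orders", json=payload, timeout=10)
            # Per the slide, generated assertions still get human code review.
            assert resp.status_code == 400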
  12. Benefits
     • 30% productivity increase for Quality Engineers.
     • Reduced testing cycle time with automation: automates repetitive tasks, shortening the testing cycle.
     • Enhanced test coverage for edge cases: simulates test cases and real-world scenarios effectively.
     • Improved API testing documentation.
     • Helps in making the switch from QA to QE.
     • Cost savings: minimizes resources needed for manual effort and debugging.
  13. Key Takeaways
     • AI governance is essential, with leadership involvement.
     • Build a security, legal, and compliance policy team with AI expertise; avoid shadow AI experiments within the organization.
     • Don’t over-rely on GenAI output: proper validation and review is crucial.
     • Start with small and impactful use cases to showcase the value.
     • Embrace a fail-fast approach to learn and adapt quickly.
     Generative AI is transforming API testing: start small, scale strategically, and deliver high-quality API products with confidence!