Test Automation and Isolations

- What is pipeline-driven development in DevOps?
- Three levels of test isolation:

1. Isolating test cases from themselves (repeatable)
2. Isolating test cases from each other
3. Isolating the system under test

- One major concept: TestContext with a default object
- Tools and frameworks: Cypress.io, Testcontainers, Pact, and service virtualization tools.

Bryan Liu

June 11, 2020
Transcript

  1. Status & Trends of Test Automation in DevOps

Chrome: in the past, the Chrome team hired lots of manual testers. Once developers owned the testing responsibility (with self-service testing tools, including production-like environments), they found that developers wrote fewer bugs, and thus fewer testers were needed.

Microsoft:
- 2010-2014: QA SDET headcount went from 10k to 0; most moved into dev teams
- Windows: 600 branches (2015), 400 branches (2017)

Facebook:
- Code review occupies a central position
- No separate group of testers or QA people
- Relies on automated testing
- Developers are responsible for writing unit tests, regression tests, and performance tests
- https://dzone.com/articles/how-facebook-develops-and (2013)
- http://www.zdnet.com/article/why-facebook-doesnt-have-or-need-testers/ (covering roughly 2007-2012)
- The whole company and its vendors are its testers
- The definition of "high-quality" software differs with company culture (a fun place to work) and with the category of product being made (users keep coming back until it works again)
- Facebook's product is a website, so it can fix things quickly. It has a process that permits rapid deployment of new code and rapid rollback of buggy changes, which reduces the cost of recovering from bugs.
- The Facebook Platform has gotten better

Google:
- Doesn't hire too many dedicated testers; otherwise developers rely on them, get lazier, and write more bugs
- Suggests testing at different levels: unit, integration (between components), UI testing, functional UI testing, E2E testing
- Tries to grow a generation of developers who know how to test their code at different levels, preferably with test automation

Spotify (from an SDET job description): "We are looking for an experienced Software Development Engineer in Test at Spotify to enable and encourage developers to design and implement tests in an efficient way by providing them with the right tools, frameworks, and infrastructure."
- Hires experienced QA to teach developers how to automate tests
- Spotify culture & release trains

Amazon (Fireside Chat: DevOps at Amazon with Ken Exner, GM of AWS Developer Tools, AWS Online Tech Talks: https://www.youtube.com/watch?v=FlZm3nFMIAM&feature=youtu.be):
- Architects, security engineers, and testers are there to teach and enable developers to think like them, not to perform tasks for them
  2. Pipeline Driven Development

Test stability: a 1% per-test failure rate across 200 test scripts gives an 86.6% chance of at least one failure per run. ~ LinkedIn

"The build team has a responsibility to maintain a 99% reliability SLA. We've noticed that when a build has <1% chance of failing, developer trust in the builds increases. Failures above the 1% level are very visible and frustrating, even in a group of a couple dozen developers." ~ Square

Tips:
- Always interact with the SUT like a user would (user journeys)
- Acceptance tests own their data
- Treat acceptance tests like production code
- Collective ownership of acceptance tests
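The 86.6% figure above follows directly from treating each test's failure as independent; a quick sketch of the arithmetic:

```javascript
// Chance that at least one of n independent tests fails,
// given a per-test failure (flake) rate p: 1 - (1 - p)^n
function suiteFailureRate(p, n) {
  return 1 - Math.pow(1 - p, n);
}

// 1% flake rate across 200 scripts, as in the LinkedIn quote:
console.log((suiteFailureRate(0.01, 200) * 100).toFixed(1) + '%'); // 86.6%
```

This is why driving per-test flakiness below 1% matters so much more than it sounds: suite-level reliability degrades exponentially with suite size.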
  3. Iso-I / Iso-II / Iso-III, TestContext, and Test Intentions

An isolated spec re-seeds its test data before every run:

```js
describe('Given all products have sufficient inventory', () => {
  context('When user purchases ONE item', () => {
    beforeEach(function () {
      // !! re-seed DB to restore testing data each time !!
      cy.exec('node ../seed/product-seeder')
      // login before each test
      cy.loginByForm('[email protected]', 'test1234')
      cy.server()
      cy.route('GET', '/prod').as('prodPage')
    })

    it('Then he can checkout the purchased item successfully', function () {
      // buy a game
      cy.visit('/')
      cy.contains('div', 'Inversion').find('a.cart').click()
      // checkout
      cy.checkout('QA5')
      // wait for processing & assert
      cy.wait('@prodPage')
      cy.get('div#success').contains('Successfully bought')
      cy.contains('div', 'Inversion').find('div.stock').should('contain', 'Stock: 4')
    })
  })
})
```

The same spec without the re-seeding step is NOT isolated from itself:

```js
describe('Given all products have sufficient inventory', () => {
  context('When user purchases ONE item', () => {
    beforeEach(function () {
      // login before each test (NO re-seeding here)
      cy.loginByForm('[email protected]', 'test1234')
      cy.server()
      cy.route('GET', '/prod').as('prodPage')
    })

    // ...same test as above...

    // !! running the same spec multiple times will fail
    // !! -> not isolated from itself
  })
})
```

Isolation from itself:
- Each test case is repeatable (can run multiple times)
- Can run multiple times against the same environment
- Easy debugging: no need to clean up / restore manually

Re-seeding also keeps a multi-item purchase repeatable:

```js
describe('Given all products have sufficient inventory', () => {
  context('When user purchases TWO items', () => {
    beforeEach(function () {
      cy.exec('node ../seed/product-seeder')
      // login before each test
      cy.loginByForm('[email protected]', 'test1234')
      // ...
    })

    it('can checkout all items successfully', function () {
      cy.visit('/')
      // buy one
      cy.contains('div', 'Inversion').find('a.cart').click()
      // buy a 2nd one
      cy.contains('div', 'Borderlands').find('a.cart').click()
      cy.visit('/shopping-cart')
      // checkout
      cy.checkout('QA5')
      // wait for processing & assert
      cy.wait('@prodPage')
      cy.get('div#success').contains('Successfully bought')
      cy.contains('div', 'Inversion').find('div.stock').should(($div) => {
        expect($div.text()).to.eq('Stock: 4')
      })
      cy.contains('div', 'Borderlands').find('div.stock').should(($div) => {
        expect($div.text()).to.eq('Stock: 4')
      })
    })
  })
})
```

Isolation from others:
- NO dependency between tests and their results
- Does NOT depend on the running order
- Tests against the same entity might impact each other
- Resetting the database seems to be a solution, but:
  - resetting data is destructive
  - it prohibits parallel execution
- Provision multiple sets of environments?
  - costs $$
  - need to merge test results

With a TestContext, every test case creates (and tears down) its own entities instead:

```js
describe('Given all products have sufficient inventory', () => {
  context('When user purchases TWO items', () => {
    const firstProduct = new Product('Test Product NO1')
    const secondProduct = new Product('Test Product NO2')
    let testContext = new TestContext()
    testContext.products.push(firstProduct, secondProduct)

    before(function () {
      testContext.products.forEach((prod) => {
        cy.task('createDocument', { collectionName: 'products', filter: prod })
      })
    })

    after(function () {
      // since each run creates different products, you can leave this out or not; it depends~
      cy.task('deleteDocuments', { collectionName: 'products', filter: { title: { $regex: '^Test Product' } } })
    })

    it('can checkout all items successfully', function () {
      cy.visit('/')
      cy.contains('div', testContext.products[0].title).find('a.cart').click()
      // buy the 2nd one
      cy.contains('div', testContext.products[1].title).find('a.cart').click()
      // checkout
      cy.checkout(testContext.userName)
      cy.get('div#success').contains('Successfully bought')
      cy.contains('div', testContext.products[0].title).find('div.stock').should(($div) => {
        expect($div.text()).to.eq('Stock: 49')
      })
      // ...
    })
  })
})
```

The supporting classes assign default values, so each test only states what matters:

```js
import ShortUniqueId from 'short-unique-id'

class Product {
  constructor(name) {
    this.title = `${name}_${new ShortUniqueId().randomUUID(6)}` // <== alias name
    this.description = `Test description for ${this.title} product!`
    this.imagePath = 'images/test_the_test.png'
    this.price = 22 // <== assign default values
    this.stock = 50 // <== assign default values
  }
}

class TestContext {
  constructor() {
    this.products = [] // <== assign default values
    // for user
    this.userName = 'defaultUser' // <== assign default values
    this.userEmail = '[email protected]' // <== assign default values
    this.userPassword = 'test1234' // <== assign default values
  }
}
```

TestContext:
- Find the 'functional entities' for every test case:
  - Shopping -> create a new product
  - GitHub -> create a new account & repo
  - Event -> create a new campaign
- Give the functional entity an alias name
- Each test case creates / tears down its own testing data
- Easy parallel execution

TestContext with a default object makes the intention of a test stand out:

```js
context('Test purchasing when short of inventory', () => {
  let testContext = new TestContext()
  let outOfStock = new Product('No Enough Inventory')
  outOfStock.stock = 0 // <== clearly shows the purpose of what you want to TEST!
  testContext.products.push(outOfStock)
  testContext.userName = 'ValidUser' // <== shows the important variation of this test case!
  testContext.userPassword = 'GoodPassword'
  // ...

  it('add to cart should show an error message', function () {
    // buy a game
    cy.visit('/')
    cy.contains('div', testContext.products[0].title).find('a.cart').click()
    // checkout
    // cy.checkout(testContext.userName)
    // wait for processing & assert
    cy.wait('@prodPage')
    cy.get('div#success').contains('short of inventory')
  })
})
```

TestContext with Default Object:
- Clearly shows test intentions (a test case showing its intention is important!)
- The team ends up with a similar coding pattern / structure
- Aliased entities avoid collisions, ex: account 'Johnny' ==> 'Johnny_4534031', book 'DevOps 101' ==> 'DevOps 101_1234567'

Stubbing responses (Iso-III: isolating the system under test):

```js
context('Stub Response Data', () => {
  it('cy.route() - route responses to matching requests', () => {
    // https://on.cypress.io/route
    cy.server()
    cy.fixture('example.json').as('fakeResp')
    // Stub a response to GET /prod
    cy.route({
      method: 'GET',
      url: '/prod',
      response: '@fakeResp'
    }).as('getComment')
    cy.visit('http://localhost:3000')
    cy.wait('@getComment')
    // UI displayed correctly according to the API response
    cy.get('div.price').should('have.length', 6)
    // link attributes are correctly set
    cy.get('a.cart').first().invoke('attr', 'href')
      .should('contain', '5c4a83c471d09c3125654816')
  })
})
```

BRYAN LIU | June 25, 2020 | Test Automation: Test Isolation and TestContext
Git repo: https://git.linecorp.com/TW-QA/test-isolation

Goals of Test Automation

3 levels of test isolation:
1. Isolating test cases from themselves (repeatable)
2. Isolating test cases from each other
3. Isolating the system under test

- Run 20 times before asking for a PR merge
  - (*) Stability first: 20 times in a row
- Run smoke / AC tests before manual regression (all environments)
  - Don't waste time testing immature deliveries
- Reduce manual regression efforts
  - Reduce the number of test cases to automate; focus on major functionalities and ROI (confidence)
  - Discuss what automation is needed for defects & bugs found
  - Build tools to speed up manual testing
  - (*) A single source of test cases
  - (*) Track the automated coverage rate of AC / high-priority test cases
- Daily / weekly runs for the remaining integration regression
  - Save time checking failed builds, or just complete this regression before release
- Run AC on each commit (a higher goal)
  - Run smoke tests before the full acceptance tests
- More process & monitoring automation
  - Deployment pipeline, flaky-test analysis, performance / stress testing in the delivery pipeline

[Diagram: run frequency per suite: smoke tests per commit, all automated AC tests daily/weekly, all test cases per release. Component-level isolation: the test suite sends prepared input messages of systems A & B to "My Service (SUT)" over an event bus, inserts test data, stubs the 3rd-party service, and asserts on the output. Browser-level isolation: Cypress drives "My Application" in an iframe and stubs the BFF/Express layer of the Vue app with cy.route() or a mock such as Talkback, backed by a full-set database.]

System / component level test isolation tools:
- In-browser mock/stub, ex: Polly.js; Cypress.io
- Service virtualization tools (cross-platform):
  - Ruby: VCR
  - Go: Hoverfly
  - JS: Talkback, mountebank
  - Java: WireMock, betamax, Moco
- Testcontainers (https://www.testcontainers.org/), ex:
  - DB: MySQL, Mongo
  - Message queue: Kafka
  - Test runtime: browser, Selenium
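The TestContext-with-default-object pattern above boils down to three moves: defaults for everything, a unique alias per functional entity, and each test overriding only the fields that express its intention. A minimal, framework-free sketch (class and field names follow the slides; the alias generator is a simplified stand-in for ShortUniqueId):

```javascript
// Minimal sketch of "TestContext with DefaultObject" (no Cypress needed).
// Each functional entity gets a unique alias so parallel runs never
// collide on the same record.
function uniqueAlias(name) {
  return `${name}_${Math.random().toString(36).slice(2, 8)}`;
}

class Product {
  constructor(name) {
    this.title = uniqueAlias(name); // <== alias name
    this.price = 22;                // <== default values
    this.stock = 50;
  }
}

class TestContext {
  constructor() {
    this.products = [];
    this.userName = 'defaultUser';  // <== default values
    this.userPassword = 'test1234';
  }
}

// A test states only its intention; everything else stays at the default.
const ctx = new TestContext();
const outOfStock = new Product('No Enough Inventory');
outOfStock.stock = 0; // <== the point of this test
ctx.products.push(outOfStock);

console.log(ctx.products[0].stock); // 0
console.log(ctx.userName);          // defaultUser
```

Because every run creates differently-aliased entities, specs stay repeatable (Iso-I), don't collide with each other (Iso-II), and can run in parallel without a destructive database reset.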
  4. API Testing, Monitoring, and Contract Testing

Observability is about intersecting multiple items (monitoring, central logging, tracing, analytics) to generate a deep understanding of the true health, the real issues, and what should be changed to improve your environment. API test automation for monitoring is NOT the same as observability.

Why NOT use automated tests to verify an API:
- The contract is between consumer & provider, but now you have to store it in the test repo too. Should the developer notify you when the API changes?
- Such tests can only be executed in an integrated environment (beta, rc, ...)
- Contract testing, by contrast, can be done in the pre-commit phase (before the PR merge)
- Dev & SDET work together to complement each other's coverage

Why automate API tests at all?
- To check request/response format and schema?
- Health checks?
- To verify calculation results from the provider (functional testing)?

Example (Jenkins API test job):
- Given "there is no user called Mary"
- When "creating a user with username Mary": POST /users {"username": "mary", "email": "...", ...}
- Then assert the response is {"data": {"id": "xyz01", "username": "mary", ...}}

[Diagram: consumer (frontend) and provider (backend) talk over an HTTP API layer; testing the API directly duplicates the contract in code. With Pact, the consumer project publishes the contract and the provider project fetches and verifies it: the client spec drives an HTTP client against a Pact stub server (expected request / minimal response), and the provider side replays the actual request against the real API endpoint, business logic, and DB.]

Too many mocks, no confidence? Either (1) add E2E tests, or (2) do contract testing.

"Contract testing" vs. API monitoring:
- CT runs in the unit-test phase, so feedback comes earlier, without an integrated environment
- CT is shift-left testing and makes the beta environment more stable
- Changes are verified to be backwards compatible
- An API monitor might fail due to other runtime problems (network, data, downstream services)

API integration / contract testing tips:
- Not for 3rd-party public APIs
- Focus on consumer user scenarios
- An API is about integration / interaction, NOT a request/response check
- Contract testing focuses on good & handled error paths (context and format)
- Contract testing is NOT functional testing (that belongs in the provider's own tests)
- Microservices: each component has its own release cycle, so check backward compatibility
- Pact follows Postel's law: be conservative in what you send (consumer) & be liberal in what you accept (provider)
- Best practice: Contract Tests vs Functional Tests; Backward Compatibility Check (https://docs.pact.io/)

Matching:
- Match based on type
- Match based on arrays
- Match by regular expression
- Match common formats

References:
- Pact framework: https://docs.pact.io
- Pact-js: https://github.com/pact-foundation/pact-js
- JS sample/tutorial: https://github.com/pact-foundation/pact-js#tutorial-60-minutes
- A Comprehensive Guide to Contract Testing APIs in a Service Oriented Architecture: https://medium.com/@liran.tal/a-comprehensive-guide-to-contract-testing-apis-in-a-service-oriented-architecture-5695ccf9ac5a
- Pact between mobile and BFF: https://medium.com/@DoorDash/contract-testing-with-pact-7cf108ced8c4
- 7 Reasons to Choose Consumer-Driven Contract Tests Over End-to-End Tests: https://reflectoring.io/7-reasons-for-consumer-driven-contracts/
- Pact Verifier to replace consumer unit tests?? https://github.com/morvader/ContractTesting_Pact/blob/master/api/test/apiPactSeveralClients.spec.js

Pact framework & CDCT notes:
- In CDCT, the consumer drives the contract's content & interactions for its own domain, producing a contract file and a mock server
- The client's HTTP invocations are recorded as interactions; the Pact framework handles the heavy lifting, ex: generating & publishing the contract file, creating the mock server & running tests for both consumer & provider sides, and implementing JSON body comparison (the rules and code are quite complicated)
- CDCT verifies the API at the unit-test phase, before anything reaches the beta environment

Setting up a Pact Broker server (10.231.199.42):

```shell
$ docker run --name pactbroker-db -e POSTGRES_PASSWORD=line1234 -e POSTGRES_USER=admin \
    -e PGDATA=/var/lib/postgresql/data/pgdata \
    -v /var/lib/postgresql/data:/var/lib/postgresql/data -d postgres

$ docker run -it --link pactbroker-db:postgres --rm postgres \
    sh -c 'exec psql -h "$POSTGRES_PORT_5432_TCP_ADDR" -p "$POSTGRES_PORT_5432_TCP_PORT" -U admin'

CREATE USER pactbrokeruser WITH PASSWORD 'line1234';
CREATE DATABASE pactbroker WITH OWNER pactbrokeruser;
GRANT ALL PRIVILEGES ON DATABASE pactbroker TO pactbrokeruser;

$ docker run --name pactbroker --link pactbroker-db:postgres \
    -e PACT_BROKER_DATABASE_USERNAME=pactbrokeruser -e PACT_BROKER_DATABASE_PASSWORD=line1234 \
    -e PACT_BROKER_DATABASE_HOST=postgres -e PACT_BROKER_DATABASE_NAME=pactbroker \
    -d -p 80:80 dius/pact-broker

[irteamsu@isolate001-twqa-jp2v-dev ~]$ docker ps -a
CONTAINER ID  IMAGE             COMMAND                 CREATED             STATUS             PORTS                        NAMES
43d4c1c1a440  dius/pact-broker  "/sbin/my_init"         About a minute ago  Up About a minute  0.0.0.0:80->80/tcp, 443/tcp  pactbroker
463d4a9cad70  postgres          "docker-entrypoint.s…"  5 minutes ago       Up 5 minutes       5432/tcp                     pactbroker-db
```

"Productivity is not everything, but in the long run, it is almost everything." ~ Paul Krugman

What kills our productivity?
- Clarity (roles, audits, processes)
- Accountability (someone to blame)
- Measurement (of a person, or of the team?)
  - ex: measuring an API's req/resp/responsiveness

What should we do?
- Get rid of rigid specs and detailed verification
- Get rid of code freezes and staging environments
- Embrace fault tolerance, fuzziness (requires collaboration), flexibility
- Deliver fast, recover fast (LaunchDarkly, canary releases)
- Fast feedback; test / monitor in production

Productivity is everything.

"Service-level testing is about testing the services of an application separately from its user interface." ~ Succeeding with Agile, Mike Cohn, 2009

Test as the consumer, not as the author. (Test the contract and behaviors, not implementation details.)
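Pact's "match based on type" rule, listed in the Matching section above, compares response shapes rather than exact values. A simplified illustration of the idea (not Pact's actual implementation, just a sketch of the principle):

```javascript
// Simplified type-based matcher in the spirit of Pact's "match based on
// type": the expected body pins the *shape*; actual values may differ.
function matchesByType(expected, actual) {
  if (Array.isArray(expected)) {
    // "match based on arrays": every actual element must match the
    // shape of the first expected element
    return Array.isArray(actual) &&
      actual.every((item) => matchesByType(expected[0], item));
  }
  if (expected !== null && typeof expected === 'object') {
    return actual !== null && typeof actual === 'object' &&
      Object.keys(expected).every((key) => matchesByType(expected[key], actual[key]));
  }
  return typeof expected === typeof actual;
}

// The provider may return any id/username, as long as the types line up:
const contract = { data: { id: 'xyz01', username: 'mary' } };
console.log(matchesByType(contract, { data: { id: 'abc99', username: 'bob' } })); // true
console.log(matchesByType(contract, { data: { id: 42, username: 'bob' } }));      // false
```

This is what lets contract tests survive data churn: the consumer asserts on structure and types, so the provider can evolve values freely while a removed or re-typed field still fails verification.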
  5. Demo (workshop):
- Testcontainers --> increase test coverage!
- Talkback (mock proxy server) / WireMock
- Pact
- Test isolation & parallel testing

Isolation works on multiple levels:
- Isolating test cases from each other (they do not depend on, and are not affected by, the results of other tests)
- Isolating test cases from themselves (repeatable: running twice should give the same result)
- Isolating the system under test (use stubs / mocks to replace external services)

Component/service level acceptance tests:
- It's easier to model component behaviors and verify them with Docker and stubs
- Automated tests at the component level might provide a better cost/performance ratio (in terms of confidence)

"It's not enough to have acceptance tests; acceptance tests have got to be created and maintained by developers, which will make the code more testable and maintainable." ~ Jez Humble, PipelineConf 2016

For acceptance tests to run against a production-like environment, the problems are:
- How to provision such an environment quickly (Rancher, K8s)
- Which versions of the other components / services to deploy
- For example: B 2.0.1, B 2.1, or B 2.2?

Solutions:
- Trunk-based development, with feature toggles on/off
- Use stubs / mocks or contract testing, ...
- Run UT & AT on every commit to master; otherwise you end up with an unstable pipeline (a DevOps anti-pattern)

[Book: Accelerate] Automated unit and acceptance tests should be run against every commit to version control to give developers fast feedback on their changes.

[DevOps Handbook] These typically test the application as a whole to provide assurance that a higher level of functionality operates as designed (e.g., the business acceptance criteria for a user story, the correctness of an API). The objective of acceptance tests is to prove that our application does what the customer meant it to, not that it works the way its programmers think it should. Any build that passes our acceptance tests is then typically made available for manual testing.

[Diagram: three continuous delivery pipelines (Build -> UT -> AT -> Deploy) for components C-A (A: 1.1), C-B (B: 2.0), and C-C (C: 3.0), fed by Git pull requests, with production running A: 1.0, B: 2.0, C: 3.0.]

What if we DON'T run AT in the build pipeline? Problems:
- Last-minute integration -> bad quality
- More functions need more regression time
- QA becomes a bottleneck
- Need code freezes => long-lived branches
=> Makes NO sense for microservices, which release independently
=> Impossible to achieve high-frequency releases

Tips (in DevOps or high-frequency delivery):
- QA performs exploratory testing
- Regression checks should be done by automation
- Shift-left testing: verify AC (mainly good paths) for each commit
- This results in more stable integration test runs against the beta environment
- Now: add an 'AC' stage to the build pipeline, even if it only has ONE case!
- Long term: provision a full environment set + trunk-based development

[Diagram: per-component CD pipelines (Build -> UT) triggered by Git pull requests, followed by a regression pipeline (E2E -> Deploy) against A: 1.1, B: 2.0.1, C: 3.1, with production at A: 1.0, B: 2.0, C: 3.0.]

Test portfolio by frequency:
- Unit tests and acceptance tests: run on each commit, automatically executed
- Integration tests: run daily (acceptance and regression), automatically executed
- Other exploratory testing: manually executed, to find gaps and cover tests not practical to automate

Consumer-driven contract testing. Purpose:
- Communication & design first
- Run integration checks at the unit-test phase
- Verify API backward compatibility
- Up-to-date API documentation
- Error paths:
  - 404 (return null, or raise an error?)
  - 500 (handle & log the error message)
  - 401/403 if there is authorisation

Pact broker: http://10.231.199.42/

Too many E2E tests increase running time and make root causes hard to identify => replace them with API & contract tests.

Spotify (without contract testing):
- Microservices architecture
- Uses Testcontainers
- Some services even have no unit tests
- Started as a monolith application, following the test pyramid strategy
- As the system grew, creating many mocks just for unit tests became inconvenient => more UI & E2E automation
- Then reduced E2E again, keeping only a few 'smoke tests', with more 'shift-left' testing (more contract & API integration tests)

Automated acceptance tests:
- Why are automated acceptance tests so hard?
- Tips for now and the long term
- Test isolation workshop