Who does NOT have performance issues? :)
• Who clearly understands these issues and has insight into them?
• Under which circumstances do they occur?
• What is your traffic scenario?

Performance Testing
• Is business critical (better conversion, more $$$)
• Cuts across your whole organisation (business, product, test, development and operations people)
• Results should always be available (reports, dashboards, alerts, insights)
• Should be fully automated (load generator setup, metrics analysis, testing environment, testing data)

Performance Testing – Prerequisites
• Goal? What do you want to learn?
• Non-functional requirements?
• Performance budget? (a machine-checkable sketch follows below)
• Test and traffic scenario?
• Environment to test against?
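
A performance budget can start as nothing more than a machine-checkable set of thresholds. A minimal sketch in Python; all numbers are purely illustrative, not recommendations, and should be derived from your own non-functional requirements:

    # A hypothetical performance budget -- the keys and limits are examples only.
    BUDGET = {
        "p95_response_ms": 500,   # 95th-percentile server response time
        "error_rate": 0.01,       # maximum fraction of failed requests
        "page_weight_kb": 1024,   # total bytes for the landing page
    }

    def check_budget(measured: dict) -> list:
        """Return the budget items that the measured values violate."""
        return [key for key, limit in BUDGET.items()
                if measured.get(key, 0) > limit]

    # check_budget({"p95_response_ms": 620, "error_rate": 0.002, "page_weight_kb": 900})
    # -> ["p95_response_ms"]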

User, Application and System Monitoring
• Real User Monitoring (RUM)
  • How does the client perform for all users?
• Application Performance Monitoring (APM)
  • What happens inside my application?
  • Which kind of request takes how long? Why? (see the timing sketch below)
  • What kind of follow-up requests / queries does my application create?
• System Monitoring
  • What's going on in my internal network?
  • What's going on on my web server / app server machines?
  • What's going on in my database cluster?
  • What about storage and IOPS?
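
To get a feel for the "which request takes how long" question: a minimal, hypothetical WSGI middleware sketch in Python that logs the duration of every request. Real APM tools do this (and far more) out of the box; this only illustrates the idea:

    import logging
    import time

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("apm-sketch")

    class TimingMiddleware:
        """Log method, path and duration of every request passing through."""

        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            start = time.perf_counter()
            try:
                return self.app(environ, start_response)
            finally:
                # Note: for streaming responses this measures only the time
                # until the response iterable is handed back, not full delivery.
                duration_ms = (time.perf_counter() - start) * 1000
                log.info("%s %s took %.1f ms",
                         environ.get("REQUEST_METHOD"),
                         environ.get("PATH_INFO"),
                         duration_ms)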

Mind your organisation
1. Performance testing and analysis is team work!
2. Please involve: product people, developers, QA, operations
3. "Working software over comprehensive documentation" – but make your results transparent to every stage of your organisation!

"[...] methods to measure and improve aspects of application performance of user agent features and APIs."
– https://www.w3.org/2010/webperf/
#WebPerf

#WebPerf 101
• Fewer requests! (cache stuff)
• Fewer requests! (minify, concat)
• Fewer requests! (CDN for assets)
• Image compression / optimization ...
• ... a lot more ...
• How long does a TCP handshake take? :) (see the sketch below)
• How many TCP connections does a (certain) browser open in parallel? :)
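
The handshake question is easy to answer empirically. A small Python sketch that times TCP connection setup to a host (the hostname is just an example; run it several times, since DNS caching and network jitter make single samples misleading):

    import socket
    import time

    def tcp_handshake_ms(host: str, port: int = 443, timeout: float = 5.0) -> float:
        """Time how long opening a TCP connection (3-way handshake) takes."""
        start = time.perf_counter()
        sock = socket.create_connection((host, port), timeout=timeout)
        elapsed_ms = (time.perf_counter() - start) * 1000
        sock.close()
        return elapsed_ms

    # Example: print(f"{tcp_handshake_ms('example.com'):.1f} ms")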

Things you have to take care of
• XHR & client logic (JavaScript): a lot of client logic? Single-page application (SPA)?
• Correct HTTP (error) response codes
• Assets = bandwidth => CDN
• Content extractions, e.g. a CSRF token needed to submit form data (see the sketch below)
• Double opt-in
• Test data in general
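
Content extraction usually means: fetch the page, pull the token out of the markup, send it back with the form. A sketch using the third-party requests library; the URL, field names and markup are made up for illustration:

    import re
    import requests

    session = requests.Session()

    # 1. Fetch the form page; the session keeps cookies for us.
    page = session.get("https://shop.example.com/checkout")

    # 2. Extract the CSRF token from a hidden input field (hypothetical markup).
    match = re.search(r'name="csrf_token"\s+value="([^"]+)"', page.text)
    assert match, "CSRF token not found -- did the page layout change?"

    # 3. Submit the form including the extracted token.
    response = session.post(
        "https://shop.example.com/checkout",
        data={"csrf_token": match.group(1), "quantity": "1"},
    )
    # Check the real status code, not just "did we get a response".
    response.raise_for_status()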

HTTP/1.1 vs. HTTP/2
• Some things change.
• Some don't.
• What to focus on first?
"HTTP/2 Performance Anti-patterns" by Ilya Grigorik:
https://docs.google.com/presentation/d/1_SMrVmiMxW2X1QZ1EcCnLKSosiD0PppP70Q3bw-l5Lg/present

Recap: Website
1. #WebPerf FTW
2. Seriously watch your bandwidth
3. Plenty of other things to take care of
4. Iterate slowly and communicate with the other people in your organisation.
5. Never load test external partners (tracking, advertisers, etc.)

API characteristics
• Usually XML or JSON responses
• "Sequential" API flow, RESTful API flow, HATEOAS / hypermedia
• Authenticated (authentication required, or auth tokens available via test data)
• No (browser) client, no client behaviour – *sigh* :D
• Requests from different clients (browser, mobile app, IoT, etc.) should be well-formed and ask for something really, really specific
• And... there is client logic (filters, paging, hypermedia) – see the paging sketch below
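
Client logic such as paging has to be reproduced by your test script. A minimal sketch that follows a hypermedia-style "next" link until the collection is exhausted; the field names and link convention are assumptions, so adapt them to your API:

    import requests

    def fetch_all_items(start_url: str, token: str) -> list:
        """Follow 'next' links page by page, like a hypermedia-aware client."""
        headers = {"Authorization": f"Bearer {token}"}
        items, url = [], start_url
        while url:
            response = requests.get(url, headers=headers, timeout=10)
            response.raise_for_status()
            body = response.json()
            items.extend(body.get("items", []))
            # Hypothetical HATEOAS convention: the payload carries its own next link.
            url = body.get("links", {}).get("next")
        return items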

Things you have to take care of
• Authentication (Basic Auth, Single Sign-On, OAuth, etc.)
• HTTP headers (caching, ETags, gzip)
• Header and content extractions (an 'id' to follow, an 'auth-token' to use)
• Test data in general
• Correct HTTP (error) response codes
• HTTP (error) codes in the response body (response is HTTP 200, but the body says 401 or "authentication required")
• RESTful semantics (HTTP PUT vs. HTTP POST)
• Auto-following hypermedia links means you need a correct and complete API specification :)
• Rate limiting: watch for HTTP 429 Too Many Requests, and watch the fallback behaviour (see the sketch after this list)
• One request per item vs. one request for a filterable, batched result
• Polling? Every minute? Caching fun. Don't send requests at a specific, fixed time, cron-style... add jitter
• Missing or bad client handling of HTTP 5xx errors
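
Two of these points, rate limiting and cron-style request bursts, are tamed with the same ingredients: backoff and jitter. A sketch using the requests library; the retry policy is an assumption to tune for your API, and Retry-After is assumed to arrive in seconds (it may also be an HTTP date):

    import random
    import time

    import requests

    def get_with_backoff(url: str, max_retries: int = 5) -> requests.Response:
        """Retry on HTTP 429, honouring Retry-After and adding random jitter."""
        for attempt in range(max_retries):
            response = requests.get(url, timeout=10)
            if response.status_code != 429:
                response.raise_for_status()
                return response
            # Prefer the server's hint; fall back to exponential backoff.
            wait = float(response.headers.get("Retry-After", 2 ** attempt))
            # Jitter avoids thundering-herd effects when many clients poll
            # "every minute, on the minute" cron-style.
            time.sleep(wait + random.uniform(0, 1))
        raise RuntimeError(f"still rate-limited after {max_retries} attempts")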

Advanced API performance testing – SOA & µServices
• Your environment is really distributed – monitoring and request tracing are must-haves
• Try to isolate each service and test it (firewall? a proxy to get traffic into the data center...)
• Try to combine the test scenarios of several services into a particular user journey and test their contracts (see the sketch below)
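
Testing a service's contract can start very small: assert that the isolated service still answers with the shape its consumers rely on, before you put load on it. A hedged sketch; the endpoint, fields and internal URL are hypothetical:

    import requests

    def check_order_service_contract(base_url: str) -> None:
        """Smoke-test the response shape one consumer relies on."""
        response = requests.get(f"{base_url}/orders/42", timeout=5)
        response.raise_for_status()
        order = response.json()
        # Fields a downstream billing service might depend on (illustrative).
        for field in ("id", "status", "total", "currency"):
            assert field in order, f"contract broken: missing field {field!r}"

    # check_order_service_contract("http://order-service.internal:8080")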

Wrap-up – How to do it?
1. Discuss the user journey
2. Create a minimal viable version of the test case
3. Test slowly
4. Iterate
5. Communicate
6. "Stale" your scenario (let it stabilise)
7. Run with more traffic for the specific test type
8. Website: use a HAR recording to collect all the requests you need for your website (see the sketch below)
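
A HAR file exported from the browser's dev tools already contains every request your test scenario needs, and HAR is plain JSON. A small sketch extracting method and URL from one (the file name is an example):

    import json

    def requests_from_har(path: str) -> list:
        """Extract (method, url) pairs from a HAR recording."""
        with open(path, encoding="utf-8") as fh:
            har = json.load(fh)
        return [(entry["request"]["method"], entry["request"]["url"])
                for entry in har["log"]["entries"]]

    # for method, url in requests_from_har("checkout.har"):
    #     print(method, url)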

Wrap-up
Performance testing websites and performance testing APIs
needs to involve people from all relevant departments and professions in your organisation.

Wrap-up
Performance testing websites and performance testing APIs
is no reason to get stuck in "Not Invented Here" syndrome – there are a lot of tools out there!
https://en.wikipedia.org/wiki/Not_invented_here