Performance on a Budget

Austin Code Camp 2012 talk

dimitry

June 09, 2012
Transcript

  1. What is Performance?
     Three parts of performance:
     • Speed: how quickly the application responds to a user
     • Scalability: how the application handles the expected number of users and beyond
     • Stability: how stable the application is under load, whether expected or unexpected
  2. • Being 400 ms slower leads to a 9% drop in traffic
     • Just one 1-second delay can cause a 7% loss in customer conversion
     • Dropping latency from 7 seconds to 2: revenue went up 12% and page views jumped 25%
     • Edmunds.com re-engineered its inside.com site to reduce load times from nine seconds to 1.4 seconds; ad revenue increased three percent and page views per session went up 17 percent
     • A slowdown of 1 second costs $1.6 billion
     • Performance is second only to security in terms of user expectations
  3. How to Measure Performance
     • Live single-user execution: how quickly the application responds throughout the full application call
     • Automated unit and integration testing
     • Multi-user performance testing:
       • Load testing
       • Stress testing
       • Peak testing
       • Duration testing
       • Failover testing
  4. Improving Performance
     "80-90% of the end-user response time is spent on the frontend. Start there." - Steve Souders
  5. Front End – Asset Size
     Compress assets:
     • GZIP (all but PNG)
     • http://zoompf.com/blog/2012/02/lose-the-wait-httpcompression
     Minify JS & CSS
     Don't overload asset contents
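A minimal Python sketch of what GZIP buys for text assets (the asset content here is made up for illustration): JS, CSS, and HTML compress extremely well because they are repetitive, while PNG is already compressed, which is why the slide excludes it.

```python
import gzip

# A repetitive text asset; real JS/CSS compresses well for the same reason.
css = b"body { margin: 0; }\n" * 500

compressed = gzip.compress(css, compresslevel=9)
print(f"original: {len(css)} bytes, gzipped: {len(compressed)} bytes")
```

On highly repetitive text like this, GZIP typically removes well over 90% of the bytes; servers usually apply it on the fly via `Content-Encoding: gzip`.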
  6. Front End – Client Rendering
     • Excess DOM
     • Understand JS events
     • DOM manipulation
     • Fonts
     • Avoid repaints and reflows
     • Cache JS
     References:
     • http://perfectionkills.com/profiling-css-for-fun-and-profit-optimization-notes/
     • http://blog.monitor.us/2012/03/27-website-performance-javascript/
  7. Front End – Progressive Enhancement
     • Chunked encoding
       • ASP.NET doesn't chunk by default
       • If you turn it on and write to the response line by line, each write gets its own chunk (a big performance hit for large HTML)
     • AJAX
     • Defer JS
     • CSS on top, JS on the bottom
     • Load JS asynchronously
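To make the chunking caveat concrete, here is a Python sketch of the HTTP/1.1 chunked transfer framing (hex size, CRLF, data, CRLF, terminated by a zero-size chunk); the helper name is made up. It shows why emitting one chunk per small write inflates the response:

```python
def to_chunked(parts):
    """Encode byte strings as an HTTP/1.1 chunked transfer body.

    Each chunk is framed as 'hex-size CRLF data CRLF'; a zero-size
    chunk terminates the body.  Many tiny writes mean many tiny
    chunks, so the framing overhead grows with the number of writes.
    """
    body = b""
    for part in parts:
        if part:  # a zero-length chunk would end the stream early
            body += f"{len(part):x}".encode() + b"\r\n" + part + b"\r\n"
    return body + b"0\r\n\r\n"

# Same payload, different write patterns: one big write vs. 1000 tiny ones.
one_write = to_chunked([b"x" * 1000])
many_writes = to_chunked([b"x"] * 1000)
print(len(one_write), len(many_writes))
```

The payload is identical, but the per-chunk framing makes the many-writes body several times larger, which is the performance hit the slide warns about.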
  8. Latency
     • Use CDNs
     • Caching: 35% reduction in bandwidth
     • Combine JS and CSS files
     • Load JS asynchronously
     • Sprites
     • Inline images
     • Prefetch and cache assets for future use
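Inlining an image means embedding it in the page as a `data:` URI so it costs no extra HTTP request. A small Python sketch (function name and payload are illustrative):

```python
import base64

def to_data_uri(image_bytes, mime="image/png"):
    """Inline an image as a data: URI to save an HTTP round trip.

    Trade-off: base64 inflates the payload by roughly a third, and the
    bytes can no longer be cached separately from the page, so this
    suits small, rarely changing images such as icons.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime};base64,{b64}"

# A tiny fake payload stands in for real PNG bytes here.
uri = to_data_uri(b"\x89PNG fake bytes")
print(uri[:30])
```

The resulting string drops straight into an `<img src="...">` attribute or a CSS `url(...)`.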
  9. Tools for Frontend Measurement
     Grading tools:
     • PageSpeed: can be a plugin or a CDN
     • YSlow!: browser add-on, runs locally
     Profiling:
     • SpeedTracer: Chrome add-on
     • Web developer tools (Chrome/IE/Firefox)
     • Firebug
     Timing:
     • Blaze.io: mobile timing
     • Webpagetest: select a location and enter a URL; the mobile section uses Blaze.io
     • PCAP Web Performance Analyzer: uses HARs/PCAPs to analyze webpages
     • Loads.in: tests how long a page takes to load from various locations
  10. Tools for Performance Analysis
      Analytics:
      • Google Analytics
      • statsd: Node.js service that collects stats
      • Boomerang.js: measures page-load performance from real users' browsers
      Network analyzers:
      • Fiddler
      • Wireshark
      • tcpdump: logs TCP traffic to a URL
      • cURL: command-line tool for transferring data from a URL
      • dig: DNS lookup
      APIs:
      • Navigation Timing (http://www.w3.org/TR/navigation-timing/): shows all parts of the network time in a page load; doesn't split it by resource (future work)
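statsd accepts plain-text metrics over UDP in the form `name:value|type`. A minimal Python client sketch (helper names are made up; `127.0.0.1:8125` is statsd's conventional default address):

```python
import socket
import time

def timing_packet(metric, ms):
    """Format a timing sample in the statsd wire protocol: 'name:value|ms'."""
    return f"{metric}:{int(ms)}|ms".encode("ascii")

def report(metric, started, sock=None, addr=("127.0.0.1", 8125)):
    """Send the elapsed time for `metric` to a statsd daemon over UDP.

    Fire-and-forget: because it is UDP, a down collector never blocks
    or slows the application being measured.
    """
    packet = timing_packet(metric, (time.monotonic() - started) * 1000)
    if sock is not None:  # pass a real socket.socket(AF_INET, SOCK_DGRAM)
        sock.sendto(packet, addr)
    return packet

start = time.monotonic()
# ... timed work would go here ...
print(report("web.homepage.render", start))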
  11. Tools for Content Delivery
      JSON delivery:
      • Jdrop: store JSON data in the cloud
      CDN:
      • CloudFlare
      • CloudFront
      • PageSpeed
  12. Tools for Testing
      Performance testing:
      • Blitz.io: run up to 250 users in 60 seconds for free; can be automated (use in continuous integration)
      • Browsermob-proxy: captures HAR data during functional tests
      Code analysis:
      • Benchmark.js: framework for measuring method response times
      • CSS-stress: profiles CSS stylesheets
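The idea behind Benchmark.js-style micro-benchmarks can be sketched in a few lines of Python with the standard `timeit` module (the helper and the two compared functions are illustrative, not from the talk):

```python
import timeit

def bench(fn, repeat=5, number=10_000):
    """Micro-benchmark `fn`: best-of-N runs to reduce scheduler noise.

    Taking the minimum, not the mean, filters out interference from
    other processes; the result is seconds per single call.
    """
    runs = timeit.repeat(fn, repeat=repeat, number=number)
    return min(runs) / number

def join_listcomp():
    return "".join([str(i) for i in range(100)])

def join_map():
    return ",".join(map(str, range(100)))

print(f"per-call: {bench(join_listcomp):.2e}s vs {bench(join_map):.2e}s")
```

Running each candidate many times and comparing per-call minimums is the same "measure method response times" pattern the slide attributes to Benchmark.js.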
  13. Tools for Mobile Optimizations
      Images:
      • src.sencha.io: resizes the image to fit the physical screen (mobile)
      • ImageAlpha: converts 24-bit PNG to 8-bit PNG (mobile)
      Fonts:
      • FontSquirrel: generates the font that's best for your device
  14. Backend - SOA
      • Rely on service-oriented architecture
      • Separate your data: transactional vs. reporting
      • Separate I/O-bound and CPU-bound processes onto different machines
      • Utilize event-sourcing patterns
      • Concurrent operations: be careful!
      • Eventual consistency
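A minimal Python sketch of the event-sourcing pattern mentioned above (all names invented for illustration): state is never updated in place; an append-only log of events is the source of truth, and current state, or a separate reporting view, is derived by replaying it. A reporting store consuming the same log asynchronously is exactly where the eventual-consistency caveat comes from.

```python
# Append-only event log: the single source of truth.
events = []

def deposit(amount):
    events.append(("deposited", amount))

def withdraw(amount):
    events.append(("withdrew", amount))

def balance(log):
    """Fold the event log into current state.

    A separate reporting database could rebuild its own view from the
    same log on its own schedule (hence eventual consistency).
    """
    total = 0
    for kind, amount in log:
        total += amount if kind == "deposited" else -amount
    return total

deposit(100)
withdraw(30)
print(balance(events))  # state is derived, never stored
```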
  15. Backend - Cache
      • Cache as much as you can, but not too much!
      • Use the right caching tool
      • Understand the different caching patterns:
        • Primed cache
        • Demand cache
      • Take advantage of ASP.NET caching
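The two caching patterns named above can be sketched in Python (class and function names are invented; `load_price` stands in for an expensive database or service call): a demand cache fills an entry the first time it is requested, while a primed cache pays all the load costs up front, before any request arrives.

```python
calls = []  # records every expensive load, so we can see the difference

def load_price(sku):
    """Stand-in for an expensive DB/service call."""
    calls.append(sku)
    return len(sku)  # dummy value

class DemandCache:
    def __init__(self, loader):
        self.loader, self.data = loader, {}

    def get(self, key):
        if key not in self.data:          # cache miss: pay the cost now
            self.data[key] = self.loader(key)
        return self.data[key]

class PrimedCache:
    def __init__(self, loader, keys):
        self.data = {k: loader(k) for k in keys}  # all costs paid up front

    def get(self, key):
        return self.data[key]             # always a hit for primed keys

demand = DemandCache(load_price)
demand.get("abc"); demand.get("abc")      # loader runs once, not twice
primed = PrimedCache(load_price, ["abc", "de"])
print(calls)
```

Priming gives predictable request latency at the cost of startup work; demand caching avoids loading data nobody asks for.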
  16. Backend
      • BIGGEST PROBLEM → CONTEXT SWITCHING
      • Measure GC pauses
      • Optimize worker threads
      • Watch for thread deadlocks
      • Be careful of large object heap fragmentation
      • Be careful of object-relational mappings
      • Don't rely on exceptions for logic
      • Utilize connection pooling (threads, DB)
      • Utilize batch requests and responses
      • Understand each operation's impact on .NET performance
      • Take advantage of memory utilization
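Connection pooling, one of the bullets above, can be sketched in Python (class and helper names are invented; `fake_connect` stands in for an expensive database connect): a fixed set of connections is created once and recycled across requests, so connection setup stops being a per-request cost.

```python
import queue

class ConnectionPool:
    """Sketch of connection pooling: reuse a fixed set of connections
    instead of opening one per request (setup is the expensive part)."""

    def __init__(self, factory, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self, timeout=1.0):
        return self._pool.get(timeout=timeout)  # blocks if pool exhausted

    def release(self, conn):
        self._pool.put(conn)

opened = []
def fake_connect():
    """Stand-in for an expensive DB connect; records each real open."""
    opened.append(object())
    return opened[-1]

pool = ConnectionPool(fake_connect, size=2)
for _ in range(100):            # 100 requests, still only 2 connections
    conn = pool.acquire()
    pool.release(conn)
print(len(opened))
```

The blocking `acquire` also acts as natural back-pressure: when all connections are busy, extra requests wait instead of overwhelming the database.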
  17. Tools for Profiling
      Web profiling:
      • Glimpse
      • Mini-profiler
      Instrumented profiling:
      • ANTS Performance Profiler ($499)
      • JetBrains dotTrace ($399): can integrate with ReSharper
      • dynaTrace (price varies): can be integrated with CI; does monitoring, profiling, diagnostics and much more; most expensive
      • VS2010 Profiler: offered in Visual Studio
  18. Tools for Profiling
      Heap profiling:
      • PerfView
      • PerfMon
      Benchmarking:
      • MeasureIt
      Comparison of APM and BTM tools (http://www.scribd.com/doc/53541961/Competitive-Analysis-Application-Performance-Management-and-Business): goes over a set of tools (none free, most enterprise) for monitoring and transaction tracing
  19. Best Practices - SDLC
      • Testing: performance testing
      • Development: profiling, prototyping, continuous integration
      • Architecture: document performance metrics, document SLAs
      • Requirements: establish performance goals
  20. Best Practices – What to Test
      • Critical business transactions
      • Frequently used transactions
      • Performance-intensive transactions
  21. Best Practices - Notes
      • DO NOT prematurely optimize
      • DO start with 3rd-party tools, but roll your own solutions if necessary for performance
      • DO NOT be afraid to modify standard solutions
      • DO go for the first bottleneck, and always retest afterwards
      • Follow the "test → fix → retest" pattern