ForwardJS '19 - The Need For Speed: Measuring Latency In Single-Page Applications

You've launched your application, and now you've been asked to optimize it. How do you know how fast it is? Is your data accurate and actionable? We'll explore both generic and framework-specific approaches to instrumenting your application, along with real-world examples of how to put these techniques in practice!

Jordan Hawker

January 24, 2019

Transcript

  1. Why Do We Care? Poor performance impacts business metrics:
     • Bounce Rates
     • Conversions
     • Feature Throughput
     • Revenue Impact
     • User Satisfaction
  2. Implementation Before Optimization
     • You don’t know what to optimize without data
     • Backend Instrumentation
       • Mature tooling already exists
       • Detailed information about services
       • Doesn’t provide the full picture
     • Can we track performance across the whole stack?
  3. Web Application Performance
     • Rate of Requests
     • Resource Consumption
     • Efficiency
     • Latency
     • Response Time
     • Throughput
     • Availability
     • Responsiveness
     WHAT DOES THAT EVEN MEAN?
  4. Interactivity: the time it takes for the critical content of a page to load such that the user believes they can engage in their primary purpose for visiting that page.
  5. Priority: User Experience
     • Each metric has its own use, but real-world impact is paramount
     • Know what your users are seeing
     • Gather data on different devices/browsers
     • Understand variance across global regions
     • Great data will reflect improvements across the stack
  6. Evaluating Third-Party Tools
     • Great overviews of app performance
     • Detailed suggestions for optimization
     • They don’t understand how SPAs render
     • Data is a rough approximation at best
     • How can we leverage framework internals?
  7. Critical Path
     • Each page has primary activities
     • Components related to that content are “critical”
     • Secondary components
       • Also rendered on the page
       • Not tied to the primary activity
       • Don’t need to finish rendering
  8. DOM is a tree… and so are your apps!
     • Pages are nested layers of components
     • Each route or component has direct children
     • Components with no children are leaf nodes
     • Each parent waits on its children to render
  9. Subscriber-Reporter Relationship
     Subscriber:
     • Relies on child components to complete render
     • Subscribes to updates from its children
     • Complex conditions for interactivity
     Reporter:
     • Reports its own render times
     • Child of a subscriber
     • May also be a subscriber itself
  10. Approach
     • Leverage hooks to approximate rendering
     • Account for async behavior (e.g. requests, images)
     • Capture per-component events & roll up to page
     • Tie parents to their children to bubble data
     • Work up from the bottom of the tree
     • When a child reports, check parent interactivity
     • Common method to check complex conditions
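The bubbling approach above can be sketched in plain JavaScript, independent of any framework. Everything here — `LatencyNode`, `reportRender`, and the other names — is hypothetical illustration of the idea, not code from the talk:

```javascript
// Framework-agnostic sketch of subscriber-reporter roll-up.
// All names here are hypothetical.
class LatencyNode {
  constructor(name, parent = null) {
    this.name = name;
    this.parent = parent;            // the subscriber this node reports to
    this.pendingChildren = new Set();
    this.renderedAt = null;
    this.interactiveAt = null;
    if (parent) parent.pendingChildren.add(this);
  }

  // Reporter side: called from a framework hook (componentDidMount,
  // didInsertElement, mounted, ...) once this node's own render is done.
  reportRender(timestamp) {
    this.renderedAt = timestamp;
    this.checkInteractive();
  }

  // Subscriber side: a node is interactive once it has rendered AND every
  // child it subscribed to is interactive; it then bubbles up the tree.
  checkInteractive() {
    const children = [...this.pendingChildren];
    const childrenDone = children.every((c) => c.interactiveAt !== null);
    if (this.renderedAt !== null && childrenDone && this.interactiveAt === null) {
      this.interactiveAt = Math.max(
        this.renderedAt,
        ...children.map((c) => c.interactiveAt)
      );
      if (this.parent) this.parent.checkInteractive();
    }
  }
}
```

Leaf nodes report first; each parent becomes interactive only after all of its children do, so the root node's `interactiveAt` approximates page interactivity.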
  11. React
     • componentDidMount
       • Invoked immediately after a component is mounted
     • Create a higher-order component (HOC)
       • Provides components with this latency behavior
     • Parent/Child Relationship
       • this.props.children gives access
       • Pass a parent reference down to children
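One way the React recipe might look as a non-runnable sketch (it assumes a React build setup; `withLatency`, `latencyParent`, and `childDidRender` are hypothetical names, not React APIs or code from the talk):

```jsx
import React from 'react';

// Sketch only — wraps a component so it reports render timing upward.
function withLatency(Wrapped, name) {
  return class extends React.Component {
    componentDidMount() {
      // Fires after this component and its children have mounted,
      // approximating "this subtree has rendered".
      performance.mark(`${name}:rendered`);
      if (this.props.latencyParent) {
        this.props.latencyParent.childDidRender(name, performance.now());
      }
    }

    childDidRender(childName, timestamp) {
      // A subscriber would collect child reports here and check its
      // own (possibly complex) interactivity conditions.
    }

    render() {
      // Pass a reference to this wrapper down so children can report up.
      return <Wrapped {...this.props} latencyParent={this} />;
    }
  };
}
```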
  12. Ember
     • Routes: didTransition
     • Components: didInsertElement
     • Leverage Runloop
       • run.scheduleOnce('afterRender', this, this.reportInteractive)
     • Parent/Child Relationship
       • this.parentView is a direct reference
     • Ember-Interactivity: jhawk.co/interactivity-demo
  13. Angular
     • ngAfterContentInit
       • Called after Angular fully initializes all content of a directive
     • Parent/Child Relationship
       • this.parentInjector.view.component
  14. Vue
     • mounted
       • Called after the instance has been mounted
     • this.$nextTick
       • Called after all children have been rendered
     • Parent/Child Relationship
       • this.$parent
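Combined, those Vue hooks might look like this sketch (assumes a Vue 2-style options component; `childDidRender` is a hypothetical helper, not a Vue API):

```javascript
// Sketch only — reports this component's render time to its parent.
export default {
  mounted() {
    // mounted fires once this instance is mounted; $nextTick defers the
    // callback until pending child DOM updates have flushed.
    this.$nextTick(() => {
      const timestamp = performance.now();
      // Bubble the report to the nearest subscriber via $parent.
      if (this.$parent && this.$parent.childDidRender) {
        this.$parent.childDidRender(timestamp);
      }
    });
  },
};
```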
  15. Performance API
     • Browser interface that provides performance info
     • performance.mark
       • Creates a timestamp in the performance buffer
     • performance.measure
       • Creates a measure between two timestamps
     • Data can be inspected, tracked, and visualized
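For example, `performance.mark` and `performance.measure` can bracket a render like so (the entry names here are arbitrary labels, not a convention from the talk):

```javascript
// Record two timestamps in the performance buffer.
performance.mark('feed:render-start');

// ... rendering work happens here ...

performance.mark('feed:interactive');

// Create a measure spanning the two marks.
performance.measure('feed:latency', 'feed:render-start', 'feed:interactive');

// Entries can be read back for reporting or visualization.
const [measure] = performance.getEntriesByName('feed:latency');
console.log(`feed became interactive in ${measure.duration}ms`);
```

This runs in browsers and in recent Node.js versions, where `performance` is a global.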
  16. What We Learned
     • Define the critical path
     • Identify bottleneck components
     • Defer non-critical components
     • Holistic user-centric metrics reflect improvements across the stack
  17. The Path Forward
     • Leverage multiple tools for instrumentation
       • Measuring each layer of the stack is useful
     • Capture real user metrics to understand impact
       • Virtual machines are insufficient simulations
     • Use framework-aware tools for granular data
     • Be cognizant of the performance impact of leveraging framework internals