
Fallacy of Fast - wwc

Ines Sombra
December 07, 2016

Given at Women Who Code Sydney. Materials & references live in https://github.com/Randommood/FallacyOfFast


Transcript

  1. De-prioritizing Testing. Cutting corners on testing carries a hidden cost. Test the full system: client, code, & provisioning code. Code reviews != tests; have both. Continuous Integration (CI) is critical to velocity, quality, & transparency.
  2. De-prioritizing Releases. Release stability is tied to system stability. Iron out your deploy process! Dependencies on other systems make this even more important. Canary testing, dark launches, feature flags, etc. are good.
  3. De-prioritizing Ops. Automation shortcuts taken while in a rush will come back to haunt you. Runbooks are a must-have. Localhost is the devil. Sloppy operational work is the mother of all evils.
  4. De-prioritizing Insight. “Future you monitoring” is bad; make it part of the MVP. Alert fatigue has a high cost; don’t let it get that far. Link alerts to runbooks. Routinely test your escalation paths.
  5. De-prioritizing Knowledge. The inner workings of data components matter; learn about them. System boundaries ought to be made explicit. Deprecate your go-to person.
  6. De-prioritizing Security. The internet is an awful place; expect DoS/DDoS. Think about your system, its connections, and their dependencies. Having the ability to turn off features/clients helps.
  7. What we learned. Service ownership implies leveling up operationally. Architectural choices made in a rush can have a long shelf life. Don’t sacrifice tests; test the FULL system.
  8. Mind system design. Simple & utilitarian design takes you a long way. Use well-understood components. NIH is a double-edged sword. Use feature flags & on/off switches (and test them!).
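The on/off switches the slide recommends can be as small as a flag lookup with a safe default. A minimal sketch, assuming an in-memory store (`FlagStore` and the `new_checkout` flag are illustrative names, not from the talk; a production system would back this with a config service so flags flip without a deploy):

```python
class FlagStore:
    """Tiny in-memory feature-flag store (illustrative sketch)."""

    def __init__(self):
        self._flags = {}

    def set(self, name, enabled):
        self._flags[name] = bool(enabled)

    def is_enabled(self, name, default=False):
        # Unknown flags fall back to a safe default: feature off.
        return self._flags.get(name, default)


flags = FlagStore()
flags.set("new_checkout", True)  # hypothetical feature being rolled out

# Call sites branch on the flag, which doubles as a kill switch:
if flags.is_enabled("new_checkout"):
    result = "new path"
else:
    result = "old path"
```

Because the default is "off", turning a flag off (or deleting it) is always the safe direction, which is exactly what you want when a dark launch misbehaves.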
  9. Alice’s testing areas. Correctness: good output from good inputs. Error: reasonable reaction to incorrect input. Performance: Time to Task (TTT) for a given # of inputs/outputs. Robustness: behavior after a given uptime. Exercise each goal across configurations: single node, multi node, clustered, cache enabled.
  10. A testing harness is a fantastic thing to have. Invest in QA automation engineers. Adding support for regressions & domain-specific testing pays off.
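The regression support the slide mentions can start very small: pin observed-good outputs so any behavior change fails loudly. A sketch under assumed names (`normalize` stands in for whatever unit you are protecting; none of these identifiers come from the talk):

```python
def normalize(s):
    """Example unit under test: collapse whitespace, lowercase."""
    return " ".join(s.split()).lower()


# Each case pins an input to its known-good output.
REGRESSION_CASES = [
    ("Hello  World", "hello world"),
    ("  MIXED\tcase ", "mixed case"),
]


def run_regressions(fn, cases):
    """Return a list of (input, expected, got) tuples for failures."""
    failures = []
    for given, expected in cases:
        got = fn(given)
        if got != expected:
            failures.append((given, expected, got))
    return failures


# An empty failure list means behavior still matches the pinned outputs.
failures = run_regressions(normalize, REGRESSION_CASES)
```

New cases get appended as bugs are found, so the harness grows into the domain-specific suite the slide describes.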
  11. Mind system limits. Rate limit your API calls, especially if they are public or expensive to run. Instrument / add metrics to track them. Rank your services & data (what can you drop?). Capacity analysis is not dead.
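One common way to implement the rate limiting the slide calls for is a token bucket, which allows short bursts while enforcing a steady rate. A minimal single-process sketch (the talk doesn't prescribe an algorithm; parameters here are illustrative):

```python
import time


class TokenBucket:
    """Allow `rate` requests/second with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


bucket = TokenBucket(rate=5, capacity=2)
# Three back-to-back calls: the burst capacity admits two,
# the third is rejected until tokens refill.
decisions = [bucket.allow() for _ in range(3)]
```

For a public API you would keep one bucket per client key and return HTTP 429 on rejection; counting rejections also gives you the metric the slide asks you to track.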
  12. Mind system growth. Watch out for initial over-architecting. “The application that takes you to 100k users is not the same one that takes you to 1M, and so on…” (@netik). Keep changes small!
  13. Mind system configs. System assumptions are dangerous; make them explicit. Standardize system configuration (data bags, config files, etc.). Hardcoding is the devil.
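Making assumptions explicit can mean refusing to start when required configuration is missing, instead of silently falling back to a hardcoded value. A sketch assuming environment-variable config (the variable names are hypothetical):

```python
import os

# Explicit list of assumptions this service makes about its environment.
REQUIRED = ("DB_HOST", "DB_PORT")


def load_config(env=os.environ):
    """Fail fast and loudly if required settings are absent,
    rather than defaulting to e.g. localhost at a call site."""
    missing = [key for key in REQUIRED if key not in env]
    if missing:
        raise RuntimeError(f"missing required config: {missing}")
    return {
        "db_host": env["DB_HOST"],
        "db_port": int(env["DB_PORT"]),
    }


cfg = load_config({"DB_HOST": "db.internal", "DB_PORT": "5432"})
```

The same shape works for data bags or config files; the point is one standardized loading path with its requirements written down, not values scattered through the code.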
  14. Mind resources. Redundancies (of resources, execution paths, checks, data, messages, etc.) build resilience. Mechanisms to guard system resources are good to have. Your system is also tied to the resources of its dependencies.
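One way to build a redundant execution path around a dependency is a bounded retry with a fallback, so exhausting the primary doesn't exhaust you. A sketch with hypothetical names (`fetch_primary`, `fetch_replica` stand in for a dependency and its redundant counterpart):

```python
def with_fallback(primary, fallback, retries=2):
    """Try `primary` up to `retries` times, then try `fallback`.
    Re-raise the primary's last error if both paths fail."""
    last_err = None
    for _ in range(retries):
        try:
            return primary()
        except Exception as err:   # real code would catch narrowly
            last_err = err
    try:
        return fallback()
    except Exception:
        raise last_err


def fetch_primary():
    # Simulated outage of the primary dependency.
    raise ConnectionError("primary down")


def fetch_replica():
    return "data-from-replica"


value = with_fallback(fetch_primary, fetch_replica)
```

The bounded retry count is the guard mechanism: it caps how much of your own capacity a failing dependency can consume before you switch paths.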
  15. Distrust is healthy. Distrust client behavior, even if the clients are internal. Decisions have an expiration date; periodically re-evaluate them, as past you was much dumber. A revisionist culture produces better systems.
  16. What we learned. Mind assumptions. Keep track of your technical debt & repay it regularly. It’s about lowering the risk of change with tools & culture.
  17. TL;DR, keep in mind: things that are easy to neglect may be harder to correct later. Think in terms of tradeoffs. TESTING MATTERS! Not all process is evil.