Mobile testing at Marktplaats

Explanation of our mobile native app development and testing process - including our automation with Calabash

Susan van de Ven

July 14, 2015

Transcript

  1. CROSS-FUNCTIONAL TEAM 2× Android Dev • 3× iOS Dev • 2× Tester • 2× Product Owner • Backend Dev • 2× UX/Design
  2. WHAT TO RELEASE Small releases are the goal, BUT each release should have a story the users want to see - something we can tell them to give them a reason to download. In practice, a release usually includes: • 1-2 stories from business • 1 tech backlog story • 1 story for users (feedback from app store reviews)
  3. FREQUENT RELEASES: GOOD OR BAD? "Use these guidelines when planning updates to your iOS app: Low frequency updates - new features. Apps with new features should be submitted on a periodic, monthly basis. A high frequency of new feature updates suggests poor development planning and can be confusing to your customers."
  4. RESULTS

     Year   Releases
     2011      3
     2012      5
     2013     22
     2014     23
  5. RELEASE STRATEGY Usually we release to Android first (ability to roll out a fix quickly). Android - gradual rollout: 1% -> 5% -> 20% -> 50% -> 100% of users. iOS - HockeyApp Beta release (after green tests), then TestFlight while waiting for App Store approval.
  6. FUNCTIONAL TESTS • Manually accept new stories on devices (a range of OS versions and screen sizes, with other apps running on the devices) • Write automated tests to cover new functionality • Unit tests • UI tests for the important flows - what would you hotfix if it's broken?
  7. OTHER TESTS • Performance testing of the API • Accessibility (checking the app in VoiceOver mode) • Google Analytics - is tracking good enough to tell whether a feature is a success or a failure? • A/B testing - two different designs live to see which performs better • Will it break any other platform or older versions of the apps?
  8. BEHAVIOR DRIVEN DEVELOPMENT Feature: keeps tests organized by feature, in a human-readable language. Scenario: the title of the test, shown on test reports. Given: any prerequisite, such as navigating to the correct screen, creating test data, or logging in the correct user. When: the action under test is performed. Then: the assertion that checks whether the expected behaviour happened.
  9. EXAMPLE SCENARIO Feature: Search Scenario: Search on title and description Given the user searches for a term that is in the description of an ad When they refine the search to include title and description Then the search results include ads where the search term is in the description
  10. EXAMPLE STEP DEFINITION

    Given(/^the user searches for a term that is in the description of an ad$/) do
      @search_term = Time.now.to_i.to_s
      description = "search by description test #{@search_term}"
      @ad = Ad.new("description test title")
      @ad.description = description
      @ad.place_ad
      Search.new.new_search(@search_term)
    end
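The step definition above leans on page-object style helpers named `Ad` and `Search`. A minimal sketch of what such classes might look like - the class and method names come from the step definition, but the bodies are illustrative stand-ins for the real Calabash interactions:

```ruby
# Illustrative page objects matching the step definition above. The real
# classes would drive the app through Calabash calls (touch, enter_text,
# wait_for_element_exists) instead of the stubbed bodies shown here.
class Ad
  attr_accessor :title, :description

  def initialize(title)
    @title = title
  end

  def place_ad
    # Real version: navigate the place-ad flow and submit it via Calabash.
    "placed: #{@title}"
  end
end

class Search
  def new_search(term)
    # Real version: open the search screen and type the term.
    "searching for: #{term}"
  end
end
```

Keeping app interaction inside such page objects means the step definitions stay readable and the same steps can back both the Android and the iOS app.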
  11. PERFECT TOOL? • open source • same tests run on

    multiple platforms • tests can be written by QA and developers • runs on emulator and device • supports webviews • BDD • JVM based (can reuse desktop test libs)
  12. CALABASH • open source • same tests run on multiple

    platforms • tests can be written by QA and developers • runs on emulator and device • supports webviews • BDD • JVM based (can reuse desktop test libs)
  13. QUERY LANGUAGE N.B. on iOS the accessibility label can be used as an identifier, e.g. "Prijs". Remember that this label is read to visually impaired users when VoiceOver is on, so it should be a functional explanation of the element in the user's own language.
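As an illustration of querying by accessibility label: `marked:` is the Calabash selector that matches the accessibility label on iOS (and the content description or id on Android). The small helper below for building such query strings is hypothetical, not part of Calabash itself:

```ruby
# Hypothetical helper for building Calabash query strings. "marked:'…'"
# matches an element's accessibility label (iOS) or content
# description / id (Android).
def marked_query(label, element = "*")
  "#{element} marked:'#{label}'"
end

# In a Calabash step definition the resulting string would be passed to
# the framework's helpers, e.g. touch(...) or wait_for_element_exists(...):
puts marked_query("Prijs")          # => * marked:'Prijs'
puts marked_query("Zoek", "button") # => button marked:'Zoek'
```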
  14. ANDROID Jenkins CI: app code is pushed; the app is built and unit/Robotium tests are run. On green, all Calabash tests run in parallel on local devices and a smoke test is run remotely on a device cloud.
  15. iOS Jenkins CI: app code is pushed; the app is built and unit tests are run on VMs; a HockeyApp Alpha is created. The Calabash smoke tests run on the device cloud for fast feedback; nightly, all Calabash tests are run on the device cloud. After all tests are green on iOS 7 & 8, iPhone/iPad, portrait/landscape, a HockeyApp Beta is created.
  16. PROBLEMS Mobile test frameworks are still in their infancy - slow and buggy, but getting better. iOS simulators have lots of bugs - every new Xcode release brings new problems. iOS local device cloud - our attempts have been unsuccessful so far.
  17. EXTERNAL DEVICE CLOUD For checking the design on different screen sizes, and for regression (easier than a local iOS device cloud).
  18. SECRET TO OUR GREEN BUILD Each Jenkins job retries the test run up to 3 times. Each scenario is only retried if it previously failed on that device. [Diagram: Run 1 vs Run 2]
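The retry rule above can be sketched in plain Ruby. This is a minimal model, not the actual Jenkins/Cucumber wiring; `run_scenario` is a hypothetical stand-in for executing one Cucumber scenario on a device:

```ruby
MAX_RUNS = 3

# Hypothetical stand-in for executing one Cucumber scenario on a device;
# in reality this would shell out to cucumber with a rerun file.
def run_scenario(scenario)
  scenario.call
end

# Model of the retry rule: each extra run re-executes only the scenarios
# that failed on the previous run, up to MAX_RUNS runs in total.
def green_build?(scenarios, max_runs: MAX_RUNS)
  failing = scenarios
  max_runs.times do
    failing = failing.reject { |scenario| run_scenario(scenario) }
    return true if failing.empty?   # everything has passed at least once
  end
  false                             # still red after the final run
end

# A flaky scenario that fails once and then passes:
attempts = 0
flaky  = -> { (attempts += 1) >= 2 }
stable = -> { true }
puts green_build?([stable, flaky])  # => true (flaky passed on run 2)
```

The key design point is that a scenario that has already passed on a device is never re-run, so retries only cost time proportional to the flaky tail, not to the whole suite.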
  19. SPEEDING UP TESTS Make use of navigation shortcuts like deep linking: Place ad -> wait for index -> start search -> type ad title into search box -> wait for results -> select ad from list -> view ad OR… Place ad -> open deep link -> view ad. Keep the same user logged in as much as possible.
  20. STILL TO IMPROVE • Testing app updates • Memory/CPU monitoring • Memory leaks • iOS local device cloud • Monitoring • A/B testing • Test speed/stability
  21. ?