Presented on 21st March 2017 for @webdeldn meetup
https://www.eventbrite.co.uk/e/webdeldn-5-visual-regression-testing-tickets-32104099225#
Some great questions were asked, here they are:
Q. How long did it take to amass 1532 tests?
A. Huddle began writing PhantomCSS tests (VRT) from the beginning of 2012, though PhantomCSS wasn't open-sourced until November of that year. Huddle have continued to use PhantomCSS ever since.
Q. Do you share VRTs with other developers, and if so, how do you share the baseline?
A. We commit the baseline images into the git repo(s) along with the tests and the code they're testing. This works well for us because all the developers at Huddle use Windows machines (not all the same version though), as do our CI build agents. We don't do environment "snapshot" testing; this is fully supportive of Huddle's continuous deployment strategy - we sometimes release two or three times a week.
Q. Do you often have flaky tests?
A. Yes, but in proportion to the number of tests we have, the number of flaky tests is quite small. The usual causes of flakiness are assumptions in tests about the order of events/callbacks, but we've also seen reflow issues caused by image and font loading. Flakiness is heavily mitigated by proper SUT (system under test) isolation.
Q. Do you test responsive layout?
A. Not much, but yes we do. PhantomJS/CasperJS provides methods for changing the viewport.
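As a minimal sketch of what that looks like: the CasperJS viewport(width, height) method resizes the rendering surface before a PhantomCSS screenshot is taken, so the same page can be snapshotted at several breakpoints. The URL, selector, and suite name below are hypothetical placeholders, not Huddle's actual setup. (This script requires the CasperJS runtime.)

```javascript
// Responsive VRT sketch with CasperJS + PhantomCSS (placeholder names).
var phantomcss = require('phantomcss');

casper.test.begin('Header renders at mobile width', function (test) {
  phantomcss.init({ /* project-specific options */ });

  casper.start('http://localhost:8080/test-page.html');

  // Resize the viewport to a mobile breakpoint before capturing.
  casper.then(function () {
    casper.viewport(320, 480);
  });

  // Snapshot just the element under test, not the whole page.
  casper.then(function () {
    phantomcss.screenshot('#header', 'header-mobile');
  });

  // Diff the new screenshot against the committed baseline image.
  casper.then(function () {
    phantomcss.compareAll();
  });

  casper.run(function () {
    test.done();
  });
});
```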
Q. How do you test mutable states like contextual date/time stamps, e.g. "Just now", "two minutes ago", "a day ago", etc.?
A. We use the following library to fake the Date object: https://github.com/sinonjs/lolex. (FYI we also use https://github.com/sinonjs/sinon for our XHR fakes.)
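To illustrate the idea, here is a hand-rolled sketch of clock faking - this is NOT the lolex API, just a minimal illustration of what such a library does (lolex itself also fakes timers and is far more robust). The timestamp and helper names are invented for the example.

```javascript
// Minimal sketch of Date faking (hand-rolled; a library like lolex
// does this properly). Freezing "now" makes relative timestamps
// such as "two minutes ago" deterministic across test runs.
const RealDate = Date;

function installFakeClock(fixedTimestamp) {
  function FakeDate(...args) {
    // "new Date()" with no args returns the frozen moment;
    // explicit args still construct a real Date.
    if (args.length === 0) {
      return new RealDate(fixedTimestamp);
    }
    return new RealDate(...args);
  }
  FakeDate.now = () => fixedTimestamp;
  FakeDate.parse = RealDate.parse;
  FakeDate.UTC = RealDate.UTC;
  FakeDate.prototype = RealDate.prototype;
  globalThis.Date = FakeDate;
  return { uninstall: () => { globalThis.Date = RealDate; } };
}

// Usage: freeze time, so a label computed from Date.now() is stable.
const clock = installFakeClock(new RealDate('2017-03-21T19:00:00Z').getTime());
const minutesAgo = (ts) => Math.round((Date.now() - ts) / 60000);
console.log(minutesAgo(Date.now() - 120000)); // 2
clock.uninstall();
```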
Q. How long does it take to run all these tests?
A. We run the VRTs along with the functional tests. They're run against four git repos, across five test suites. In CI they take less than ten minutes (depending on npm installs); locally they're a lot quicker, though locally we can't parallelise each repo's test suite.
Q. What about browsers?
A. Huddle don't do any automated browser testing; we believe there is not enough value in doing so. Anecdotally, in the five years of not doing automated browser testing we have never had a critical cross-browser regression. Our cross-functional teams include QA, which may be a factor in the early discovery and fixing of bugs.