Dimensions of testing • Which type of code a test touches • Which boundaries of the system it touches • Which real flows a test exercises • Which levels of isolation are in place (if any) • Etc.
Unit tests • Well-defined single entry point • Well-defined single exit point • Usually a small subject under test • Exercise abstractions you own • Not necessarily a 1:1 relationship to classes / functions • Never touch infrastructure ⚠
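A minimal, dependency-free sketch of those properties (all names here are hypothetical, not taken from the norris project): one entry point, one exit point, a small subject built on an abstraction we own, and no infrastructure anywhere.

```kotlin
// Hypothetical unit under test: a single entry point (normalize) and a
// single exit point (its return value). No network, no database, no
// framework involved, so the test needs no isolation machinery at all.
class SearchQueryNormalizer {

    // Entry point: raw user input. Exit point: the normalized query.
    fun normalize(raw: String): String =
        raw.trim().lowercase().replace(Regex("\\s+"), " ")
}
```

A test for this subject is trivially fast and deterministic, which is exactly what the "never touch infrastructure" rule buys you.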
Integration tests • Exercise entry points and/or exit points beyond your application boundaries • Always enforce some degree of isolation (ideally full isolation) • Categorized according to the number of integrations under test • Categorized according to the user value under test
Integration level: Rationale
• Single: Exercises only one well-defined system boundary
• Multiple: Exercises several well-defined system boundaries
• Component: Exercises one well-defined component (user perspective)
• Acceptance: Exercises several components with real user flows
E2E tests • Black-box instrumentation over a deliverable artefact • Exercise user flows over a pre-production environment • The most expensive tests to write and maintain
Integrated tests that bring value: comprehensive tests over data sources, exercising third-party transformation / serialization engines alongside your own error handlers and/or data transformers
class FactsDataSourceTests {

    @get:Rule val restInfrastructure = RestInfrastructureRule()

    private lateinit var dataSource: FactsDataSource

    @Before fun `before each test`() {
        val api = restInfrastructure.server.wireRestApi()
        dataSource = FactsDataSource(api)
    }

    @Test fun `should handle no results properly`() {
        restInfrastructure.restScenario(
            status = 200,
            response = loadFile("200_search_no_results.json")
        )

        val noFacts = emptyList()
        assertThat(simpleSearch()).isEqualTo(noFacts)
    }
}

https://github.com/dotanuki-labs/norris
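The same isolation idea can be sketched without any test infrastructure at all (all names below, like FactsApi and FactsParser, are hypothetical and not from the norris codebase; the real test above stubs the HTTP layer with a local server instead of a fake):

```kotlin
// Hypothetical, dependency-free sketch: the HTTP boundary is replaced
// by a canned payload, and the test exercises real parsing /
// transformation code against it.
interface FactsApi {
    fun search(term: String): String // raw JSON body from the service
}

class FakeFactsApi(private val cannedBody: String) : FactsApi {
    override fun search(term: String) = cannedBody
}

class FactsParser(private val api: FactsApi) {

    // Naive extraction of every "value" field from the payload,
    // standing in for a real serialization engine
    fun search(term: String): List<String> =
        Regex("\"value\"\\s*:\\s*\"([^\"]*)\"")
            .findAll(api.search(term))
            .map { it.groupValues[1] }
            .toList()
}
```

A no-results payload then maps to an empty list, which is the same scenario the slide asserts: the value of the test comes from running your own transformation and error-handling code against realistic boundary data.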
Integrated tests that bring value: component tests over entire screens, capturing the asynchronicity of data operations without coupling to implementation details (e.g., Coroutines)
@RunWith(AndroidJUnit4::class)
class FactsActivityTests {

    private lateinit var screen: FakeFactsScreen

    @get:Rule val restInfrastructure = RestInfrastructureRule()

    @Before fun `before each test`() {
        // Switching over fake screen wrapper with DI support
        val testApp = TestApplication.setupWith(
            factsModule,
            factsTestModule,
            RestInfrastructureTestModule(restInfrastructure.server)
        )

        screen = testApp.factsScreen()
    }

https://github.com/dotanuki-labs/norris
    @Test fun `at first launch, should start on empty state`() {
        whenActivityResumed {
            assertThat(screen.isLinked).isTrue()

            val expectedStates = listOf(Idle, Loading, FactsScreenState.Empty)
            assertThat(screen.trackedStates).isEqualTo(expectedStates)
        }
    }

    // More tests

    @Test fun `when remote service fails, should display the error`() {
        restInfrastructure.restScenario(status = 503)
        PersistanceHelper.registerNewSearch("code")

        whenActivityResumed {
            val error = Failed(RemoteServiceIntegrationError.RemoteSystem)
            val expectedStates = listOf(Idle, Loading, error)
            assertThat(screen.trackedStates).isEqualTo(expectedStates)
        }
    }
}

https://github.com/dotanuki-labs/norris
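The state-tracking trick above can be sketched in plain Kotlin (again, every name here is hypothetical): because the screen contract is an abstraction we own, a fake can record every emitted state, and the test asserts on the whole sequence rather than on Coroutines, threads, or any other implementation detail.

```kotlin
// Hypothetical sketch of a state-tracking fake screen
sealed class ScreenState {
    object Idle : ScreenState()
    object Loading : ScreenState()
    object Empty : ScreenState()
    data class Facts(val items: List<String>) : ScreenState()
    data class Failed(val reason: String) : ScreenState()
}

interface Screen {
    fun render(state: ScreenState)
}

// The fake records every state pushed to it, in order
class FakeScreen : Screen {
    val trackedStates = mutableListOf<ScreenState>()
    override fun render(state: ScreenState) { trackedStates += state }
}

// A presenter-like collaborator driving the screen through its states
class FactsPresenter(private val screen: Screen) {
    fun present(result: Result<List<String>>) {
        screen.render(ScreenState.Idle)
        screen.render(ScreenState.Loading)
        val final = result.fold(
            onSuccess = { facts ->
                if (facts.isEmpty()) ScreenState.Empty else ScreenState.Facts(facts)
            },
            onFailure = { error -> ScreenState.Failed(error.message ?: "unknown") }
        )
        screen.render(final)
    }
}
```

Asserting on the recorded sequence (Idle, Loading, Empty) keeps the test black-box with respect to how the states are produced, which is what makes refactoring the screen safe.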
Integration test: Proposed value
• Single integration over data sources: ensures data transformations + error handling over the target boundary
• Component tests over entire screens: enables refactoring over the target screen
• Screenshot tests over entire screens: ensures the association between state and Views without any assertions
• Acceptance tests: exercise several components with real user flows over a production-like artefact, emulating E2E at integration level
Standards help large teams, since they reduce both accidental complexity and cognitive load when switching between contexts inside the same codebase
Purpose: Recommendation (Reason)
• Regular assertions: Google Truth (max compatibility across JVM + Robolectric + Instrumentation)
• Test runner: JUnit 4 (JUnit 5 is not supported for Android/Instrumentation)
• Screenshot tests: Karumi/Shot (nice wrapper over Facebook tooling + useful add-ons)
• Espresso tests: Barista (nice wrapper over Espresso tooling + useful add-ons)
• Compose tests: standard tooling (provided since day zero)
• Contract tests: Pact/JVM (most adopted solution)
• E2E: 👀 up to you, as long as it runs as part of your pipeline
What really matters is focusing on writing good tests that allow engineers to move faster by maximizing the correlation between test code and value proposition (i.e., the probability of catching bugs)
TL;DR • Be aware of unit tests that don't bring any value • If that's your case, invest energy in integration tests, especially over data sources and screen components • There is no such thing as effective Espresso tests over non-release builds • Let mocks fade away from your codebase (and your life!) • Mind the alignment between your test strategy and your quality strategy