iOS: Test Pyramid
November 9, 2016
At LinkedIn, our focus is on 3x3 iOS releases (3 hours from committing code to a released product, 3 times a day). To release successfully on fast 3x3 cycles, it is crucial to have the right test strategy, one that not only has strong coverage at each layer but is also optimized for commit-to-publish time.
Testing iOS code can be challenging when executing 3x3 releases. We have tackled this problem by separating tests into three general categories: Unit Tests, Layout Tests, and Scenario Tests.
We run our unit tests, layout tests, and UI tests on the generated production binaries and on several different versions of iOS. A build is certified as “known good” once all of these tests pass, and it is then published as an Alpha release build.
Typically, as a matter of good craftsmanship, we want many more unit tests than layout tests, and more layout tests than scenario tests. This distribution is known as the Test Pyramid.
Test Pyramid approach
The Test Pyramid concept is that you should have many more low-level unit tests than other types of tests. The reason is not only that unit tests are much faster to run, but also that they catch problems much closer to the point where they are introduced, making debugging and troubleshooting more efficient. An ideal test coverage breakdown is: unit tests >> layout tests >> scenario tests.
The diagram below illustrates what testing looks like under the Test Pyramid approach for a traditional MVC (Model-View-Controller) architecture, as well as for mainstream patterns like MVP (Model-View-Presenter) and MVVM (Model-View-ViewModel).
Model: responsible for domain data or a DAL (Data Abstraction Layer) that manipulates the data.
Presenter: acts upon the model and the view. It retrieves data from repositories (the model), and formats it for display in the view.
View: a passive interface that displays data (the model) and routes user commands (events) to the presenter to act upon that data.
ViewController: in traditional MVC, all application logic that does not belong in the model or the view tends to end up in the ViewController.
ViewModel: the glue between model and view.
Using unit tests
Developing unit tests side-by-side with app code is essential to the success of a product and helps increase confidence in the code. It also makes the code more reliable, safer, and easier to debug.
In unit testing, you implement small, granular tests for a given unit of code, including business logic, the ViewModel classes, and methods that interact with the ViewController; this helps you find most bugs early in the process. Unit tests validate input/output logic by inflating data into the ViewModel and verifying action handlers, states, and network requests.
For example, consider the code below, which unit tests an endorsement skills notification card with an unexpected urn format. Note that the unexpected urn format is handled gracefully and cannot cause a failure.
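A test along those lines might be sketched as follows. The `EndorsementCardViewModel` type, its urn-parsing rules, and the test names are hypothetical stand-ins for illustration, not LinkedIn's actual classes:

```swift
import XCTest

// Hypothetical ViewModel that parses a member urn such as "urn:li:member:123".
// Names and parsing rules are illustrative only.
struct EndorsementCardViewModel {
    let memberId: Int?

    init(urn: String) {
        let parts = urn.split(separator: ":")
        if parts.count == 4, parts[0] == "urn", parts[1] == "li",
           parts[2] == "member", let id = Int(parts[3]) {
            memberId = id
        } else {
            memberId = nil  // unexpected urn format is handled, never a crash
        }
    }
}

final class EndorsementCardViewModelTests: XCTestCase {
    func testValidUrnProducesMemberId() {
        XCTAssertEqual(EndorsementCardViewModel(urn: "urn:li:member:42").memberId, 42)
    }

    func testUnexpectedUrnFormatIsHandled() {
        // Malformed input must degrade gracefully rather than fail.
        XCTAssertNil(EndorsementCardViewModel(urn: "member-42").memberId)
    }
}
```

The key point is that the malformed-input path is exercised directly at the unit level, so it never needs a slower scenario test.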
Using layout tests
To create a layout test, you write a unit test that specifically tests the layout of a view in multiple different configurations, with different data combinations, and with different view sizes. Layout tests will mock the ViewModel and test the view (layout/cell) with various inputs, such as very long and very short numbers, null, empty strings, non-ASCII characters, or multi-language strings. Image dragging, color, save, crop, and pinch gestures can also be tested in layout tests.
For example, the code below tests the layout of a notification card with cell data. Note that the card layout rendering is verified using API calls such as overlap, isAbove, and alignment checks on the view.
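A layout test in that spirit might look like the sketch below. To keep it self-contained, it uses a minimal `Rect` type and a hypothetical `layoutCard` function instead of real UIKit frames and LinkedIn's actual layout-testing helpers; in the real suite, the `overlaps`/`isAbove` checks would run against `UIView` frames:

```swift
import XCTest

// Minimal stand-in for a view frame so the sketch stays self-contained.
struct Rect {
    var x, y, width, height: Double
    var maxX: Double { x + width }
    var maxY: Double { y + height }
    func overlaps(_ other: Rect) -> Bool {
        x < other.maxX && other.x < maxX && y < other.maxY && other.y < maxY
    }
    func isAbove(_ other: Rect) -> Bool { maxY <= other.y }
}

// Hypothetical card layout: a title label stacked above a subtitle label.
func layoutCard(width: Double, titleHeight: Double, subtitleHeight: Double)
    -> (title: Rect, subtitle: Rect) {
    let title = Rect(x: 16, y: 8, width: width - 32, height: titleHeight)
    let subtitle = Rect(x: 16, y: title.maxY + 4, width: width - 32, height: subtitleHeight)
    return (title, subtitle)
}

final class NotificationCardLayoutTests: XCTestCase {
    func testNoOverlapAcrossSizesAndData() {
        // Exercise several view widths and data-driven label heights, as a
        // layout test would with long, short, and empty strings.
        for width in [320.0, 375.0, 768.0] {
            for heights in [(20.0, 16.0), (60.0, 0.0), (20.0, 48.0)] {
                let frames = layoutCard(width: width,
                                        titleHeight: heights.0,
                                        subtitleHeight: heights.1)
                XCTAssertFalse(frames.title.overlaps(frames.subtitle))
                XCTAssertTrue(frames.title.isAbove(frames.subtitle))
            }
        }
    }
}
```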
Using scenario (UI) tests
Scenario tests are built on top of the XCTest framework and KIF, which support UI functional testing, infinite-scroll validation, checkpoint validation, network request auditing, tracking validation, recording, and screenshots. Scenario testing can also verify the negative, null, and error states of a particular use case.
In scenario tests, a fixture server intercepts network requests and provides mock server responses for testing the use cases listed below:
Making network requests without Wi-Fi;
Returning a specific response for a URL to test if the text in a label is correct;
Faking a failure response.
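The fixture-server idea can be sketched as a simple URL-to-response registry. This is an illustrative stand-in, not LinkedIn's actual fixture server, and the URLs used are hypothetical:

```swift
// Illustrative fixture server: maps request URLs to canned responses so
// tests never hit the network. Names and behavior are assumptions.
final class FixtureServer {
    enum Fixture {
        case success(body: String)
        case failure(statusCode: Int)
    }
    private var fixtures: [String: Fixture] = [:]

    func stub(_ url: String, with fixture: Fixture) {
        fixtures[url] = fixture
    }

    // Intercepts a request and returns the canned (status, body) pair.
    func respond(to url: String) -> (status: Int, body: String) {
        switch fixtures[url] {
        case .some(.success(let body)): return (200, body)
        case .some(.failure(let code)): return (code, "")
        case .none:                     return (404, "")  // unstubbed URLs fail fast
        }
    }
}
```

A test could then stub a notifications endpoint with a canned JSON body and assert that the label rendered from that response shows the expected text, or stub a 500 to exercise the error state.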
For example, the code below is a scenario test for a profile view notification: we tap the card button, verify the navigation flow, and validate the tracking event fired on the landing page.
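A scenario test of that shape might be sketched as follows. The KIF steps are shown in comments, and the `TrackingRecorder` type and event names are hypothetical stand-ins so the sketch stays self-contained; the real suite audits tracking through the network layer:

```swift
import XCTest

// Illustrative in-memory recorder for tracking events fired during a
// scenario. Names are assumptions, not LinkedIn's actual tracking API.
final class TrackingRecorder {
    private(set) var firedEvents: [String] = []
    func fire(_ event: String) { firedEvents.append(event) }
    func didFire(_ event: String) -> Bool { firedEvents.contains(event) }
}

final class ProfileViewNotificationScenarioTests: XCTestCase {
    let tracking = TrackingRecorder()

    func testTapCardNavigatesAndFiresTracking() {
        // With KIF, the UI steps are driven through the accessibility layer,
        // e.g.:
        //   tester().tapView(withAccessibilityLabel: "Profile view notification")
        //   tester().waitForView(withAccessibilityLabel: "Landing page title")
        // Here we simulate the navigation's tracking side effects to keep the
        // sketch self-contained.
        tracking.fire("profile_view_card_tap")
        tracking.fire("landing_page_impression")

        XCTAssertTrue(tracking.didFire("profile_view_card_tap"))
        XCTAssertTrue(tracking.didFire("landing_page_impression"))
    }
}
```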
Let's walk through a particular example, in which a member tries to add or edit a publication on her profile.
Before refactor: As you can see in the comparison table below, before the refactor we had seven scenario tests to verify the add/edit success, add/edit failure, delete, tracking, publications order, and error-handling cases. This test strategy is less reliable. Note that unit test coverage was less extensive, which means fewer business logic validations.
Problem: The scenario tests were flaky and unreliable, with a high maintenance cost. Gaps were left uncovered because of low unit-level coverage (which includes granular code testing as well as layout testing). Commit-to-deploy time was unpredictable, and we had less confidence in the code published to production.
Objective: Apply Test Pyramid guidelines and refactor “Publication Edit” tests to have more coverage in the unit and layout test layers. Limit the scenario tests to tracking/navigation/screenshot validations only.
After refactor: We have distributed the test coverage per the pyramid strategy, with unit tests >> layout tests >> scenario tests. Rather than having one scenario test for a member adding a new publication and another for deleting a publication, we added multiple unit tests, each covering a different use case: adding a publication, editing a publication, deleting a publication, error handling, and layout validations. This gives us much more confidence in all the business logic related to member interactions with publications.
Thanks to Jacek Suliga, Keqiu Hu, Joey Bai, Mayur Motgi, Justin Newitter, Austin Lu, and Samish Chandra Kolli for reviewing this post and providing useful feedback.