This series takes a deeper look at how our engineers build digital financial services. Learn more about how they address large-scale technical challenges at Tala.
By: M. Silva, Lead Android Engineer
Developers tend to have strong feelings about automated tests. Many would like to avoid them altogether. Others are rigid and dogmatic about what a test should be and what it should cover. No matter the feeling, we have to stop and think about why we test in the first place. I mean, we’re programmers after all, right? Most of us care about efficiency and streamlining mundane tasks. We also have to think about the alternative to automated testing, which of course is manual testing. Manual testing takes a lot of time and effort and, because we’re human, is prone to error. While it is difficult, if not impossible, to eliminate manual testing entirely, we can certainly strive to minimize it.
It’s important to realize that in any aspect of software development there are no right or wrong answers, no prescription that will get you to the perfect solution. There are simply more effective and less effective ways of doing things given the current context and desired outcome. It therefore stands to reason that a level of open-mindedness and experimentation is required to find an appropriate solution. It is also important to note that effectiveness and success are not portable; what worked well in one context does not guarantee success in another. So I suggest striving for pragmatism, not idealism.
The “Rules” of Testing
Now, let’s talk about testing. Testing is often described with stark delineations: unit tests, integration tests, end-to-end tests, UI tests, and so on. I find these delineations counterproductive when used as the basis for how tests are written, especially in the case of unit tests. Most object-oriented developers I’ve met consider unit testing to be testing the functionality of a single object, with each unit test corresponding to a method or property on that object. While this makes sense at first, after working in this style you begin to realize that refactoring or making structural changes to the code requires refactoring the tests as well. Most of the time that isn’t what you want. Typically, you want to keep the same functionality, and therefore the same test coverage of that functionality over the system, while adding behavior or restructuring aspects of the implementation. The sketch below shows how this coupling plays out.
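To make that coupling concrete, here’s a hypothetical example (the classes are invented for this post, not taken from our codebase). The test is written one-to-one against an object’s internals; merge AccountCache into a repository during a refactor and the test must be rewritten, even though nothing observable about the system has changed.

import io.kotest.core.spec.style.FunSpec
import io.kotest.matchers.nulls.shouldNotBeNull

// Invented types for illustration only.
data class Account(val id: String, val status: String)

class AccountCache {
    // Exposed so the test can peek inside, which is itself a design smell.
    val internalMap = mutableMapOf<String, Account>()

    fun put(account: Account) {
        internalMap[account.id] = account
    }
}

class AccountCacheTest : FunSpec({
    test("put stores the account under its id") {
        val cache = AccountCache()
        cache.put(Account(id = "1", status = "active"))
        // Asserting on internal structure welds the test to the current shape
        // of the code; any structural refactor forces a test rewrite.
        cache.internalMap["1"].shouldNotBeNull()
    }
})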
The Value of Test-Driven Development
I believe that writing unit tests in the traditional way is a pathway to test-induced damage. Production code gets contorted to facilitate testing at the method level, and the tests end up asserting very specific implementation details, which results in the coupling I described previously. So what can we do to avoid this? Revisit the value proposition of automated testing and apply testing in a way that is aligned with that proposition. The value proposition of testing is, yes, to ensure correctness, but also to get fast feedback when something isn’t working as intended, or to understand how something will behave. This is where disciplines like test-driven development (“TDD”) come into play. In TDD, you write tests asserting what the expected behavior of the system will be, and then write the code to make it happen. While unintuitive for most of us, this is actually quite effective, albeit difficult to master as a discipline; it takes a long time to do it well. The reward is getting fast feedback during implementation rather than waiting until after it. A minimal sketch of the red-then-green loop follows.
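Here is that loop in miniature (the Wallet type is invented for illustration). The test comes first and fails; then just enough production code is written to make it pass.

import io.kotest.core.spec.style.FunSpec
import io.kotest.matchers.shouldBe

// Step 1, red: assert the behavior we want before the code exists.
// This test fails (or doesn't compile) until Wallet is written.
class WalletTest : FunSpec({
    test("deposits increase the balance") {
        val wallet = Wallet()
        wallet.deposit(100L)
        wallet.balance shouldBe 100L
    }
})

// Step 2, green: write just enough production code to make the test pass,
// then refactor freely with the test as a safety net.
class Wallet {
    var balance: Long = 0
        private set

    fun deposit(amount: Long) {
        balance += amount
    }
}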
Pseudo-TDD to Collect Fast Feedback
The way I’ve recently started writing tests is in a pseudo-TDD style. My approach, however, is to consider the unit in a unit test to be a group of classes and methods that work together to make something happen, such as a module. I focus on the API of the module, and I test in terms of the module’s inputs and outputs. I do not try to test the internal details of the module, but rather the observable results of the work it does. This allows me to rearrange the structure of my module without affecting the tests. It also allows me to focus on what the code is doing instead of how.
// This test invokes the public API of the module and checks the output
test("returns success on valid response") {
    fetchAccountTest(
        okhttpResponse = { setJsonResponse { createActiveAccountJson("active") } },
        expect = IsSuccess { moduleOutput ->
            moduleOutput.shouldBeInstanceOf<ActiveAccount>()
        }
    )
}
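For context, fetchAccountTest is a small harness used by the test above. A simplified, hypothetical version, assuming OkHttp’s MockWebServer, might look roughly like this; the point is that only the HTTP boundary is faked, and the test drives the module through its real entry point.

import okhttp3.OkHttpClient
import okhttp3.Request
import okhttp3.mockwebserver.MockResponse
import okhttp3.mockwebserver.MockWebServer

// Hypothetical stand-ins for the module's observable output.
sealed class AccountResult
data class ActiveAccount(val status: String) : AccountResult()
data class FetchFailed(val code: Int) : AccountResult()

// Hypothetical stand-in for the module under test; fetchAccount() is its public API.
// (Real code would parse the response body; that is elided here.)
class AccountModule(private val baseUrl: String) {
    private val client = OkHttpClient()

    fun fetchAccount(): AccountResult {
        val request = Request.Builder().url(baseUrl + "account").build()
        val response = client.newCall(request).execute()
        return if (response.isSuccessful) ActiveAccount("active") else FetchFailed(response.code)
    }
}

// Enqueue a canned JSON body on the fake server.
fun MockWebServer.setJsonResponse(body: () -> String) {
    enqueue(MockResponse().setHeader("Content-Type", "application/json").setBody(body()))
}

// The harness: fake only the HTTP boundary, run the module end to end,
// and hand its observable output to the caller's assertion.
fun fetchAccountTest(okhttpResponse: MockWebServer.() -> Unit, expect: (AccountResult) -> Unit) {
    MockWebServer().use { server ->
        server.okhttpResponse()
        expect(AccountModule(server.url("/").toString()).fetchAccount())
    }
}

With this shape, the test reads as pure input (the canned JSON) and output (the AccountResult), with no knowledge of the module’s internals.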
In addition, to build my own understanding, I sometimes write tests that exercise the behavior of an implementation detail (e.g., serialization), and after confirming that behavior, I remove those tests.
// Check that I'm using it correctly; then I can delete this test.
test("make sure polymorphic serialization in kotlinx.serialization works as expected") {
    try {
        Serializer.decodeFromString<ActiveAccount>("""{"status": "active"}""")
    } catch (e: Exception) {
        fail("$e, not properly serialized.")
    }
}
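For reference, polymorphic deserialization in kotlinx.serialization selects the concrete type via a class discriminator. Here is a minimal sketch, assuming a sealed hierarchy and a Json instance configured to discriminate on the "status" field; these models are illustrative, not our production types.

import kotlinx.serialization.SerialName
import kotlinx.serialization.Serializable
import kotlinx.serialization.decodeFromString
import kotlinx.serialization.json.Json

// Illustrative models; assumes the subtype is chosen by the "status" field.
@Serializable
sealed class Account

@Serializable
@SerialName("active")
data class ActiveAccount(val balance: Long = 0) : Account()

@Serializable
@SerialName("closed")
data class ClosedAccount(val reason: String = "") : Account()

// Named to match the snippet above; presumably a preconfigured Json instance.
val Serializer = Json { classDiscriminator = "status" }

fun main() {
    // Decoding against the sealed base class lets the discriminator pick the subtype.
    val account = Serializer.decodeFromString<Account>("""{"status": "active"}""")
    println(account) // ActiveAccount(balance=0)
}

Once a quick run confirms the behavior, the throwaway test above has served its purpose and can be deleted.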
On occasion, I also write tests that interact with external APIs over HTTP or with databases, to confirm that an integration actually behaves the way I expect. This way of unit testing is entirely valid, especially when it gives the developer fast feedback that they are building the intended thing.
interface StarWarsRetrofit {
    @GET("people/1")
    suspend fun getLuke(): ResponseBody
}

// Test a network call, database, or anything that gets you closer to building what you want
test("does api return what I expect?") {
    val retrofit: StarWarsRetrofit = Retrofit.Builder()
        .baseUrl("https://swapi.dev/api/")
        .client(OkHttpClient())
        .build()
        .create()

    val luke = retrofit.getLuke()
    println(luke.string())
}
// output: {"name":"Luke Skywalker", ...}
We’re optimizing for higher fidelity in the tests we write, and therefore for their usefulness: they’ll actually fail if our implementation is not producing the intended result.
In summary, there’s no one right way to go about testing, and there isn’t much value in insisting on one. It is best to pursue the value proposition of automated testing and to change how you write tests so that they don’t get in the way of fast feedback.