

Integrating integration test automation into application code

by Christian Nissen

5 min. read

In this short article you will get some insights into the advantages of implementing automated integration tests within the system under test's (SUT) code base, and how to mitigate the disadvantages of doing so.

The greatest and most obvious advantage is that tests can be executed within the SUT's development environment and build pipeline. This way you avoid third-party integrations and disruptions in the tool chain. Coders are quite used to running unit tests as part of their daily business. Why not include integration tests as well?
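As an illustration, here is a minimal sketch of what such an integrated test could look like in a Python service using pytest and requests. The port, endpoint and payload are hypothetical stand-ins for whatever your SUT actually exposes, and the sketch assumes the build pipeline starts the service locally before the test stage runs.

# test_order_service_integration.py -- minimal sketch, assumptions noted above
import pytest
import requests

BASE_URL = "http://localhost:8080"  # assumption: the pipeline starts the SUT here

@pytest.mark.integration  # lets developers run it as easily as a unit test
def test_create_and_fetch_order():
    # create an order through the public API ...
    created = requests.post(f"{BASE_URL}/orders", json={"item": "book", "qty": 1})
    assert created.status_code == 201
    order_id = created.json()["id"]

    # ... and verify the service actually persisted it
    fetched = requests.get(f"{BASE_URL}/orders/{order_id}")
    assert fetched.status_code == 200
    assert fetched.json()["item"] == "book"

In the pipeline, such tests can simply run as an additional step after the unit tests, for example with pytest -m integration.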



Services with integrated test automation reporting to central test management tool

Fig. 1: Results from each service's integrated tests are merged into a central test management tool.


Writing automated tests within an application's code base makes the tests more visible to the developers involved and breaks down the barrier that often separates development from testing. Moreover, it lets developers easily support test automation engineers with technical challenges, since both work in the same technical environment. Let's be honest; developers are the better developers...

A further advantage is how simple it becomes to keep integration tests compatible with changes made in the application. With Git, for example, the changes to the application and the corresponding changes to the tests can be kept in sync by making them on one and the same branch.

But are there really only positive sides to this?

An undeniable fact is that when a (micro)service is replaced, all tests integrated into it need to be replaced, or rather rewritten, along with it.

Implementing independent test automation, on the other hand, gives you a head start when software is rewritten. Usually the code is replaced while the business requirements stay in place. Thus the expected behavior remains unchanged, which is often defined as 'it should work as before'. In this situation good requirements management would be perfect, but it is rarely in place. A solid and fully automated test suite is therefore an ideal basis for turning your tech stack inside out.

How do you keep track of your test coverage if tests are distributed across the entire architecture?

A central test case management tool, fed by all test automation suites, will inform you about the current test coverage and let you sleep through the night. But keep in mind that you will have to feed each individual test report into this central tool, which will require quite a number of integrations, or rather transformations.
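One such transformation can be as small as a script that reads the test runner's report and pushes it to the central tool. The following Python sketch assumes a standard JUnit-style XML report and a hypothetical HTTP import endpoint; the report path, URL and payload format are placeholders for whatever tool you actually use.

# push_results.py -- rough sketch of one report transformation
import xml.etree.ElementTree as ET
import requests

REPORT = "build/reports/junit.xml"                      # assumption: report location
IMPORT_URL = "https://testmgmt.example.com/api/import"  # hypothetical endpoint

def collect_results(path):
    # Flatten a JUnit XML report into simple pass/fail records.
    results = []
    for case in ET.parse(path).getroot().iter("testcase"):
        failed = case.find("failure") is not None or case.find("error") is not None
        results.append({
            "suite": case.get("classname"),
            "name": case.get("name"),
            "status": "failed" if failed else "passed",
            "duration_s": float(case.get("time", 0)),
        })
    return results

if __name__ == "__main__":
    payload = {"service": "order-service", "results": collect_results(REPORT)}
    requests.post(IMPORT_URL, json=payload, timeout=30).raise_for_status()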



Application build pipeline triggering external test automation

Fig. 2: The application build pipeline triggers the tests, which return the results to the triggering pipeline.


A further challenge when aiming to integrate tests into the application code is supporting each programming language and its respective test frameworks. Usually the number of test engineers is notably smaller than the horde of developers, so it is up to a few souls to support a vast tech stack. This can of course be met by limiting the tech stack within a development department, which sounds worse than it is. Most en vogue technologies tend to lose support quite early and are replaced by the next. Recruiting is great while you can say "implement it in whatever tech you want", but it can become a hassle when the next candidate you hire is required to support the outlandish programming language introduced by a predecessor that is unfortunately no longer hip.

Last, but not least, do we really want all that "unproductive" test code shipped to production?

This is a tradeoff that has to be accepted when integrating test code, and one that is blindly accepted for unit tests. And don't even think of dirty tricks to strip the test code out before shipping. You should never run a different version of an application than the one you have tested.

So, in the end, what is better?

As often in life, the answer is neither the one nor the other, but a healthy mixture of both. Why miss out on advantages or live with disadvantages if you can get the best of both worlds?

Make sure to implement sufficient integrated tests to ensure quality from within the application's code base, allowing developers to easily run tests and get early feedback while coding. Simultaneously, set up an adequate number of tests in a separate project to support an application's replacement without losing its test coverage. To avoid test duplication, you can categorize your tests and distribute them between the two suites, as sketched below.
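One lightweight way to do this categorization, sticking with the pytest example, is to mark each test with the suite it belongs to. The marker names below ("integrated", "external") are made up and would be registered in your own pytest configuration.

# test_categorization_sketch.py -- illustrative markers, names are assumptions
import pytest

@pytest.mark.integrated   # lives in the service repo, runs in its build pipeline
def test_rest_contract_of_single_service():
    ...

@pytest.mark.external     # lives in the separate test project, survives a rewrite
def test_end_to_end_order_flow_across_services():
    ...

# Each pipeline then picks its slice of the suite, avoiding duplication:
#   pytest -m integrated   # inside the application's build
#   pytest -m external     # in the independent test automation project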

By implementing a central test management, you can keep an eye on the test coverage of each individual component as well as the overall status, combining the integrated and external tests.

Try to limit the tech stack used in development, allowing involved testers and coders to cover all applications with an adequate set of integrated automated tests.

Conclusion

By following the approach above, you can make use of the advantages of both integrated and external test automation without duplicating tests. You can provide adequate test coverage without having to support an endless number of programming languages. Last but not least, you can accompany major refactoring in your service line-up while keeping an overview of your overall test coverage and status.



Combined integrated and external test automation

Fig. 3: Results of the integrated and external tests are fed into a central test management tool.