Almost everywhere I have worked, management (including QA managers) and developers misunderstood the role of test automation and how it works. In this post I will try to explain what I mean by that, but before I do, let's look at how test automation is usually seen by non-automators.
The most common view of E2E test automation is that it is some kind of magic key that catches all the bugs. That is not entirely wrong, but it is not entirely true either. Throughout my career I have heard many times:
"If only we had automation, we would have caught that critical bug that went to production." The problem with this expectation is a misunderstanding of where the majority of bugs come from and what test automation actually covers:
- The majority of bugs come from unknown areas (new features, integrations, and dependencies that have been updated)
- Automation usually covers known areas (something that already worked properly or has been fixed) and ensures that regressions don't slip in during the SDLC
When a company hires test automation engineers, it usually has lengthy regression suites that are executed manually. The goal of hiring an SDET is to offload the team from executing manual regression tests and to increase feature delivery speed. This is not wrong, but it leads to a rushed desire to get everything covered. The test automation engineer gets a list of manual test cases that need to be automated. If the engineer has little to no understanding of the application's business logic, he will most likely just follow the steps in test cases that have already been tested and worked as expected.

The problem with this approach is that the engineer does not think outside the box and does not explore beyond the scripted cases; as a manager, that is something you really want to avoid. When an SDET not only covers the scripted tests but also knows and understands the business logic, he will notice things that are not right and discover the unknown. Most of the bugs a test automation engineer finds are found while writing the tests, not while running them. Yes, you will occasionally still catch some bugs during test execution, but not nearly as many as while coding your tests.
So here is what managers should expect when they hire an SDET:
- 80% of the bugs will be found while the engineer is writing the tests (and if that's not the case, you hired the wrong person)
- 20% of the bugs will be caught during test execution (smoke and regression runs)
So what can we do to deliver a better-quality application?
If the company has a monitoring or alerting tool (Datadog, ELK, Splunk) and the automation can run in an isolated environment (a separate environment, or at night when nobody is using the app), then you should definitely pair up automation and the observability tool. At the company I work for, I added an integration between the test framework and the monitoring tool that allows us not only to run the E2E tests but also to observe the services during test execution (and trigger the alerts) to get a better understanding of where a potential problem might come from.
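As a rough illustration of that pairing (the payload shape, tag names, and helper functions here are hypothetical sketches, not the actual integration described above), the idea is simply to bracket the E2E run with events sent to the monitoring tool, so that service metrics and alerts fired inside that window can be attributed to the test traffic:

```python
import json
import time


def build_run_event(suite: str, status: str, started_at: float) -> dict:
    """Build a monitoring event describing an E2E run.

    The title/text/tags shape mimics a generic events API; adapt it to
    whatever your observability tool (Datadog, ELK, Splunk) expects.
    """
    return {
        "title": f"E2E suite '{suite}' {status}",
        "text": f"Elapsed: {time.time() - started_at:.1f}s",
        "tags": [f"suite:{suite}", f"status:{status}", "source:e2e"],
    }


def run_suite_with_monitoring(suite: str, run_tests, send_event) -> bool:
    """Send start/finish events around the test run so anything the
    observability tool records in this window can be correlated with it."""
    started = time.time()
    send_event(build_run_event(suite, "started", started))
    passed = run_tests()  # run_tests returns True if the suite passed
    send_event(build_run_event(suite, "passed" if passed else "failed", started))
    return passed


if __name__ == "__main__":
    # Stand-in for a real HTTP call to the monitoring tool: collect events locally.
    events = []
    run_suite_with_monitoring("checkout-regression", lambda: True, events.append)
    print(json.dumps(events, indent=2))
```

In a real setup, `send_event` would be a thin wrapper around your monitoring tool's events API, and the tags let you scope dashboards and alert queries to the exact window when the E2E suite was exercising the services.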
Another good approach is shifting left: don't expect the SDET to cover everything; get developers to help out with testing. This way they will not only add more tests and cover the features they work on, but they will also help improve the application's testability, so more testing can be added in a more efficient way.
Test earlier and more often. This one is a little tricky when your CI/CD is cloud-based. Yes, you can set up the E2E suite to run on every pull request and run the tests again whenever new changes are pushed. It sounds like a great idea, and that's what most managers want; the problem is that running the whole regression suite on every push to a pull request takes a while and gets expensive. So what can you do to reduce time and cost? The best approach I have come up with so far is to block the merge until the E2E tests are executed. Set up your pull request job so it runs on demand and is required before the merge: when the engineer is done with the work and the feature has been verified in an isolated environment, run the E2E suite; once it passes, you are good to merge your changes. On a PR, only run the regression tests in the areas related to the changes, and run the full regression suite on a schedule (during a scheduled release or nightly).
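As a sketch of that gating setup in GitHub Actions terms (the workflow and script names are placeholders, and the same idea works in any CI that supports manual triggers, scheduled runs, and required status checks), you would keep two separate workflow files:

```yaml
# .github/workflows/e2e-on-demand.yml (placeholder name)
# Triggered manually when the feature is verified and ready for merge.
# Mark this job as a required status check in branch protection so the
# merge is blocked until the targeted E2E tests pass.
name: e2e-on-demand
on:
  workflow_dispatch:
jobs:
  e2e-targeted:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Placeholder script: run only the regression tests related to the changed areas
      - run: npm ci && npm run e2e:targeted

# .github/workflows/e2e-nightly.yml (placeholder name)
# Full regression runs on a schedule instead of on every push, keeping PR cost low.
name: e2e-nightly
on:
  schedule:
    - cron: "0 2 * * *"   # nightly at 02:00 UTC
jobs:
  e2e-full:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci && npm run e2e:full
```

The key design choice is that the on-demand job is required but not automatic: engineers trigger it once per PR when the work is actually ready, instead of burning CI minutes on every intermediate push.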
I hope this blog post was useful and helped bring a better understanding of how test automation works and how to manage expectations around it. Feel free to reach out if you have any questions.