
How we saved over $10,000 in 3 months on app development with automated testing

One of our QA engineers shares one of the testing challenges he faced at EGO: when an architecture changes from a monolith to microservices, it brings testing hell along with it.

July 8, 2019

Are you intrigued by the headline and want the same results with your product? Then pay close attention to this success story of using automated tests on our client's project.

If you're not familiar with automated tests, here's the idea in a nutshell: automated tests let you check your application without having to do it yourself.

Usually such tests are developed by software engineers or QA engineers based on project requirements. They take resources to develop, but they have one clear benefit: they save time.

Here's a real example of a situation I faced. The project started as a RESTful web service without a graphical interface, and manual testing served it well for a few months. But as we added more and more business rules and entities, we decided to change the project architecture: our monolith was split into seven microservices.

With that, we entered testing hell. Imagine you need to test how one of your entities is updated (for example, User), but in order to do that, you first need to create a bunch of related entities (Company, Domain, District, etc.), each with its own business rules and dependencies. This turns into hours and hours of testing.

At this point we decided to try automated tests. The main criterion was that QA engineers could write the tests and the product owner could verify them.

So we chose SpecFlow, the .NET version of the popular automation tool Cucumber. Its tests are human-readable, and it is easy to develop with and to integrate into a continuous integration/deployment process.

You may be wondering what that looks like. Even though it is Gherkin code, anyone can review it.

The good thing here is that each step of a test can be reused in other tests, so more tests means less code for a QA engineer or developer to write. Each execution command in a test corresponds to a step (a method), and once a step is written, you can reuse it in any other scenario. Here's an example:


Code example: https://gist.github.com/VladMogwai/d7e6355856cf0ddd392d67c262473e74
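
The gist above contains the project's actual scenarios. In case the embed doesn't render, here is a minimal illustrative sketch of what such a scenario can look like; the entity names, steps, and values are simplified stand-ins, not the project's real code:

```gherkin
Feature: User creation
  Covers the business rules for creating a new User in the User microservice.

  Scenario: Create a new user in an existing enterprise
    # Reused step: originally written for the Enterprise microservice tests
    Given an enterprise named "Acme Corp" exists
    # Steps specific to this scenario
    When I send a request to create a user "john.doe" in the enterprise "Acme Corp"
    Then the response status is 201
    And the user "john.doe" belongs to the enterprise "Acme Corp"
```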

In the gist, as you can see, some of the code is marked in gray. As an example, I took a test that covers one business rule (scenario): creating a new User. Because each User must belong to an Enterprise, the first step is to create a new Enterprise through its own microservice. Here I assume that the Enterprise creation scenarios are already covered, which means their steps can be reused in the User microservice tests. That is what the gray code reflects: it was created for other scenarios and is reused here. So to implement the example User creation test, you don't have to spend time writing the Enterprise creation steps first; they are already written, so you just use them and build your tests on top of what you have.
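
Under the hood, each Gherkin step is bound to a C# method. Here's a rough sketch of how such reusable SpecFlow bindings might look; the API client classes (EnterpriseApiClient, UserApiClient) are hypothetical placeholders for whatever wraps each service's REST endpoints:

```csharp
using TechTalk.SpecFlow;

[Binding]
public class EnterpriseSteps
{
    private readonly ScenarioContext _context;

    public EnterpriseSteps(ScenarioContext context) => _context = context;

    // Written once for the Enterprise microservice tests,
    // then reused by any scenario that needs an existing Enterprise.
    [Given(@"an enterprise named ""(.*)"" exists")]
    public void GivenAnEnterpriseExists(string name)
    {
        // EnterpriseApiClient is a hypothetical helper that calls
        // the Enterprise microservice's REST API.
        _context["enterprise"] = EnterpriseApiClient.Create(name);
    }
}

[Binding]
public class UserSteps
{
    private readonly ScenarioContext _context;

    public UserSteps(ScenarioContext context) => _context = context;

    // A step specific to the User microservice tests.
    [When(@"I send a request to create a user ""(.*)"" in the enterprise ""(.*)""")]
    public void WhenISendARequestToCreateAUser(string login, string enterprise)
    {
        // UserApiClient is likewise a hypothetical API wrapper.
        _context["response"] = UserApiClient.CreateUser(login, enterprise);
    }
}
```

Because SpecFlow resolves bindings across the whole test project, a scenario in the User feature file can reference the Enterprise step text and it will run the method above, regardless of which feature it was originally written for.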

To make test run results more human-readable, we used the GHPReporter tool. Here's an example of a test run result:

[screenshot: test run report]

So what did we get?

We spent a few weeks finding the best way to meet our needs and implement these kinds of automated tests. The project keeps growing and gaining more business rules and dependencies; the tests now run on every update of the development branches, and developers sometimes run them locally to make sure new changes don't break anything.
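
Running the suite in a pipeline is straightforward, because SpecFlow scenarios compile to ordinary unit tests (NUnit, xUnit, or MSTest, depending on the project). A hypothetical CI step, assuming an NUnit-based project whose smoke scenarios are tagged @smoke:

```bash
# SpecFlow scenarios tagged @smoke become the "smoke" test category,
# so the standard .NET test runner can filter and execute them.
dotnet test AcceptanceTests/AcceptanceTests.csproj --filter "TestCategory=smoke"
```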

The benefit we got out of these efforts was significantly reduced time for smoke and acceptance testing, as well as improved testing quality. Because the number of business rules keeps growing, it's very hard to keep all of them in mind for the long term, and even searching through a ton of documentation, each document with a new version and new changes, takes a lot of time and effort. Automated tests, by contrast, are short, structured by concrete features, and always hold the most recently approved version of the business rules as direct flow commands with expected results for each.

How much did we save? That's hard to calculate precisely, because after we started using automated tests the project kept growing and we never tried to test it manually again. But comparing "before" and "after": the automated tests covered smoke tests for five microservices, took around 80 hours to implement, and a full run takes 4 minutes, while the same manual test run took 6-8 hours.

Keep in mind that the complexity of features kept growing, as did the number of services involved in communication and the number of business rules, which significantly increases the effort required for manual work.

The overall time we saved thanks to automated tests was about 500 hours, and on top of that, product quality improved. We didn't calculate the exact budget savings, but multiply 500 hours by your team's average rate and be impressed by the figures: even at a modest $20 per hour, that's already $10,000. And that doesn't include the benefits of a quicker product release.

So if you are working on a complex microservice product and testing takes a lot of effort, consider automating it.

Good luck!

