
Integration testing common mistakes

Views: 976 Created: 2019-06-14 Read time: 6 minutes
1)     Preface

Once you have created a solution and covered it with unit tests (or the other way round, if using TDD), it is usually time to verify whether the different components you have created also work in harmony. This is when integration tests come into the picture. A developer unfamiliar with the subject might feel a bit overwhelmed at first by the plethora of strategies and tools. This tutorial is not meant to cover these in detail, but rather to stress what to avoid while working with ITs. So buckle up and build your test automation confidence by learning from these common IT mistakes.

 

2)     Integration testing mistake 1: database cleanup

 


It would seem natural and obvious that, after an integration test that alters the database, we should perform some sort of clean-up. We have inserted some records, removed a bunch, and now we would like to put everything back as it was at the beginning. But what was at the beginning? Are we 100% sure that we can undo the changes made in our test method so that other tests can verify their behaviour reliably? Maybe we forgot that a cascading configuration was turned on? Why is John cleaning up the database differently than I am?

 

As with application security, we should not reinvent the wheel every time we need to clean up after tests that manipulate data in the database. We should have a centralised solution that is used by all of our tests. Thanks to that, whenever a change is made to the database schema, there will be only one place that has to adapt accordingly. There will be no need to scout all over our IT suite to find the clean-up code, then examine it and alter it if required. Remember that these kinds of changes may happen weekly, if not daily, on larger projects. Can you imagine the chaos and drop in morale that would cause? Your developers may even start to consider dropping the suite altogether. And these situations do happen.

 

A common practice is to put the clean-up code at the end of the test. What this prevents is a full diagnosis of failures, as the possibly wrong data will no longer be in the persistent store. Were the data left as-is after a failure, the developer could examine the problem in much more detail. So run the clean-up before each persistence-layer test starts, not after it ends.
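To make this concrete, here is a minimal sketch of such a centralised clean-up helper (the class and table names are hypothetical). It owns the delete order in a single place, so a schema change means editing one list:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical central clean-up helper: the single place that knows
// in which order tables must be emptied (children before parents,
// so foreign-key constraints are not violated).
public class DatabaseCleaner {

    // Tables listed child-first; update this one list when the schema changes.
    private static final List<String> TABLES_CHILD_FIRST =
            List.of("hero_power", "hero", "guild");

    // Builds the DELETE statements every test runs before it starts.
    public static List<String> cleanupStatements() {
        List<String> statements = new ArrayList<>();
        for (String table : TABLES_CHILD_FIRST) {
            statements.add("DELETE FROM " + table);
        }
        return statements;
    }
}
```

Each IT would execute these statements against the test database in a set-up method (a JUnit 4 @Before) before it runs, rather than in a tear-down, so failing data stays around for diagnosis.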

 

3)     Integration testing mistake 2: no actual commit

ORM with JPA is the staple when it comes to dealing with persistence in Java. It allows us to describe the relational data model with objects and a relatively small amount of configuration. It introduces the notion of a persistence context. In most cases, a persistence context is created just before a transaction is started and destroyed just after the transaction is committed. All CRUD operations are invoked against that context, which holds all the changes in a sort of buffer until the transaction is committed and the changes are flushed to the physical database. The detailed workings of the persistence context are too broad and out of scope for this article. When it comes to testing the ORM layer, though, there is one thing we should avoid in order to make the most of our suite.

 

Let's take a look at an example Spring persistence layer test:

 

Test Code


import org.junit.Test;
import org.junit.runner.RunWith;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.orm.jpa.DataJpaTest;
import org.springframework.data.repository.CrudRepository;
import org.springframework.stereotype.Repository;
import org.springframework.test.context.junit4.SpringRunner;

import static org.assertj.core.api.Assertions.assertThat;

@RunWith(SpringRunner.class)
@DataJpaTest
public class HeroRepositoryIT {

	@Autowired
	private HeroRepository heroRepository;

	@Test
	public void shouldPersistHero() throws Exception {
		heroRepository.save(new Hero("Invicible"));
		Hero hero = heroRepository.findByUsername("Invicible");
		assertThat(hero.getName()).isEqualTo("Invicible");
	}
}

@Repository
public interface HeroRepository extends CrudRepository<Hero, Integer> {

	Hero findByUsername(String username);
}
It is a basic test where we save a Hero entity and try to retrieve it to confirm that it has been persisted. There is a slight problem here, though. When we call save, we are not actually triggering an INSERT on the physical database yet. Instead, we only store that entity in the persistence context. It is visible to query methods like findByUsername, which gives us the false impression that everything went fine and our saving method works as intended. What we are missing here is the actual INSERT into the database, where a few problems might pop up that would not be unearthed while working only with the persistence context.

Problems like:

- Constraint violations.
- The actual data model differing from the one mapped through JPA.
- Cascading that should take place being skipped, etc.
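As a toy illustration of the buffering described above (this is not JPA itself, just the idea), a persistence context can be sketched as a write buffer in front of the store, which is exactly why the un-flushed entity is visible to queries even though the database has never seen it:

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration of why a query can "see" an entity that was never
// written to the database: writes are buffered in the persistence
// context and only flushed to the store on commit.
public class ToyPersistenceContext {

    private final Map<String, String> database = new HashMap<>(); // the "real" store
    private final Map<String, String> context = new HashMap<>();  // write buffer

    public void persist(String id, String entity) {
        context.put(id, entity);           // buffered only; no INSERT yet
    }

    public String find(String id) {
        // Queries consult the context first, which is what makes an
        // un-flushed entity visible inside the same transaction.
        return context.getOrDefault(id, database.get(id));
    }

    public boolean existsInDatabase(String id) {
        return database.containsKey(id);   // true only after a real flush
    }

    public void commit() {
        database.putAll(context);          // flush: this is where constraint
        context.clear();                   // violations etc. would surface
    }
}
```

A test that only calls persist and find never exercises the commit step, and that is where the problems listed above would appear.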

 

By default, all @DataJpaTest tests are wrapped in a transaction that is rolled back at the end of the test. What we should do in this case is set the default propagation level to NOT_SUPPORTED, so that there is no wrapping transaction and no rollback at the end. Thanks to that, heroRepository.save will be called in its own transaction, which is flushed to the database, followed by heroRepository.findByUsername, which will query the actual database instead of only hitting the persistence context:

 


@RunWith(SpringRunner.class)
@DataJpaTest
@Transactional(propagation = Propagation.NOT_SUPPORTED)
public class HeroRepositoryIT {

Following the previous point of this article, we should not worry about the lack of a rollback at the end, as each of our tests cleans the database just before it starts.

 

4)     Integration testing mistake 3: no layering

Following the previous example, our ITs do not necessarily need to go all the way from the network layer, by receiving a request, down into the persistence layer, by talking to the database. A large portion of our suite, of course, should have this kind of flow. We must remember, though, that such tests take a considerable amount of time, and we are most likely not capable of exercising all the possible logical paths along the way. Often, not even all of the most important ones.

 

If we look at a typical Java web application from above, it consists of the resource layer, the service layer and the repository layer. This is, of course, a considerable simplification, but it is the most common pattern that developers face in a modern microservices architecture. It allows us to exercise the border layers, the resource and repository layers, as much as possible. All of that is done in isolation while still being an IT, as:

- From the resource perspective, the request comes from the network.
- From the repository perspective, we are interacting with a physical database.

 

Spring gives us all the tools we need to test these layers in isolation. As we saw earlier, @DataJpaTest lets us exercise only the persistence layer. When it comes to the resource layer, we are given an equally powerful tool in @WebFluxTest.

 

Test Code


import org.junit.Test;
import org.junit.runner.RunWith;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.autoconfigure.web.reactive.WebFluxTest;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.test.web.reactive.server.WebTestClient;

@RunWith(SpringRunner.class)
@WebFluxTest(HeroResource.class)
public class HeroResourceIT {

	@Autowired
	private WebTestClient webTestClient;

	@Test
	public void shouldResultInBadRequest_whenRequestingHeroWithNegativeId()
		throws Exception {

		webTestClient.get()
			.uri("/heroes/-1")
			.exchange()
			.expectStatus()
			.isBadRequest();
	}
}

Here we isolate the HeroResource; the services it depends on are not part of the sliced context and would typically be provided as mocks (for example with @MockBean). This gives us a chance to quickly test the parsing and validation of incoming messages, as well as the interaction with the service interfaces, without actually invoking the real services. Let us not neglect these border-layer tests. They can be set up to run before the broader and more complex ITs, which allows for faster failure reporting and saves time on bug fixing.

 

5)     Integration testing mistake 4: flickering lights

A lot of the time in our IT suite, we use timeouts to make sure that individual, often basic, operations do not exceed a given execution time. Most of these tests run on a real web server and connect to the database and/or other remote resources. What might, and at some point most likely will, happen is that failures start to occur for no particular reason because a timeout was exceeded. The worst thing we could do at this point is to just leave them be or, worse, ignore them and approach them with a label:

"This one is simply like that from time to time; it is its nature."

 

We should not label any test like that. As our test suite grows, the number of such situations will grow with it. The mindset that these failures simply occur every now and then leads to a rapid decline of trust in the suite itself and in general developer morale. Yes, morale diminishes when we don't see green most of the time in our CI pipeline. Apart from that, we are most likely missing potentially critical bugs that are hard to spot but can be deadly in production:

- Wrong configuration of a third-party library.
- Transactions that are too large.
- An outdated database driver.

The list might go on and on.

 

It takes only one such bug for a disaster to happen. That is why we should strive to investigate those random timeout failures deeply. They might uncover something really nasty creeping around our application.
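One way to make such failures investigable rather than dismissible is to record and report the measured time instead of failing with a bare timeout. A minimal plain-Java sketch (the helper class is hypothetical):

```java
import java.time.Duration;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical helper: instead of a bare timeout that fails with no context,
// run the operation, measure it, and report what actually happened so a
// "random" timeout failure leaves something to investigate.
public class TimedAssertions {

    public static Duration assertCompletesWithin(Duration limit, Callable<?> operation) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        long start = System.nanoTime();
        try {
            executor.submit(operation).get(limit.toMillis(), TimeUnit.MILLISECONDS);
            // Return the elapsed time so tests can log it and spot trends
            // long before the limit is actually hit.
            return Duration.ofNanos(System.nanoTime() - start);
        } catch (TimeoutException e) {
            throw new AssertionError("Operation exceeded " + limit
                    + "; investigate instead of re-running", e);
        } catch (Exception e) {
            throw new AssertionError("Operation failed before the timeout", e);
        } finally {
            executor.shutdownNow();
        }
    }
}
```

Logging the returned duration on every run gives you a trend line: an operation creeping from 50 ms towards its 500 ms limit is a warning sign well before the first "flaky" failure appears.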

 

6)     Conclusion

 


 

We have gone through some of the more significant pitfalls of integration testing. Now go out there, verify, and make sure that your components reach their full potential and are capable of working in harmony!
