How to get involved earlier in the software development life cycle: be involved!

Once the process helps us to focus on fewer things and enables us to collaborate and test as a team, I as a “tester” will have more time. And I should spend that time where the product is actually baked: I should get more involved with the developers.

Many QAs are a bit hesitant to move somewhere close to the actual application code. But they shouldn’t be: our main contribution is not that we can code. There are other people who are much more specialised in that; we call them developers. They are much better coders and, as such, much better suited to write automated tests. What, then, is left for a QA?

We bring very strong analytical skills. One of them is the ability to analyse all tests on all levels from a strategic point of view. What is the share of unit, integration and end-to-end tests? Where do we cover which business logic, and where do we have gaps in our test coverage?

Many QAs who specialise in front-end test automation write lots and lots of end-to-end / user-journey tests. These are typically the most flaky, hard-to-maintain and cost-intensive tests you can imagine. Hence, we usually advocate adding as few of them as possible, and as late as possible.

Picture: 100 end-to-end tests to rule them all

Instead, QAs should aim to understand the big picture of the system architecture: what services do we have? How are they connected? How are they split? Does each service have a distinct capability? If so, what is it and how does it relate to your business?

Once you have figured that out, you can assess how to test each of these services or domains independently. If this works, have a look at the communication between the domains and cover that as well. Once all of this is covered, you may want to add a slight hint of an end-to-end test on top.
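To make that first layer concrete, here is a minimal sketch of a domain tested in isolation, written in TypeScript with Jest-style assertions (my choice of stack, not from the original post). All names, such as PricingService and TaxRateClient, are made up for illustration; the point is that the neighbouring domain is stubbed out rather than called over the network.

```typescript
// pricing.domain.test.ts
// Sketch: exercise one domain's capability on its own, stubbing the
// neighbouring domain instead of talking to it over the wire.

interface TaxRateClient {
  rateFor(countryCode: string): Promise<number>;
}

class PricingService {
  constructor(private readonly taxRates: TaxRateClient) {}

  async grossPrice(netPrice: number, countryCode: string): Promise<number> {
    const rate = await this.taxRates.rateFor(countryCode);
    return Math.round(netPrice * (1 + rate) * 100) / 100;
  }
}

describe("pricing domain in isolation", () => {
  it("applies the tax rate of the customer's country", async () => {
    // The neighbouring domain is stubbed; its real behaviour is covered
    // by its own tests and by contract tests on the connection.
    const stubbedRates: TaxRateClient = { rateFor: async () => 0.19 };
    const pricing = new PricingService(stubbedRates);

    await expect(pricing.grossPrice(10, "DE")).resolves.toBe(11.9);
  });
});
```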

Obviously, the second approach is much more difficult, but that is exactly what you are there for and what you contribute to the team. You are the one who keeps the big picture and advises your team members on where to add which test and in what way. What is the key assertion in a given test case? What is the right level for it? Paired with a strong coder, your analytical testing abilities ensure that things work while they are being implemented. That not only improves quality early on, it also significantly decreases the time you need to spend testing (and reporting defects) afterwards.

Still, some defects will be released. No matter how much money you invest, there is no way to guarantee bug-free software (even if you are NASA and spend more than 320 million dollars on a project). So the second thing you can figure out with your dev team is how to identify and catch those defects. With the lean process (see above) that you have established, you can be sure to ship a potential fix very fast. The way to detect them is a helpful monitoring setup. This involves visualising the number of errors as well as server/database requests (and possibly the deviation from 24 hours before). If you want to take it to a truly professional level, think about anomaly detection, so that your system can notify you on its own once something is off.
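As a rough illustration of the “deviation from 24 hours before” idea, here is a hedged sketch in TypeScript. How the error counts are fetched (Prometheus, Datadog, a log index, …) is left open and passed in as a function; the threshold and window sizes are arbitrary choices for the example, not recommendations from the post.

```typescript
// error-deviation.ts
// Compare the error count of the last hour with the same hour yesterday
// and flag the situation when it grows beyond a chosen factor.

export type CountErrors = (from: Date, to: Date) => Promise<number>;

export async function errorRateLooksOff(
  countErrors: CountErrors,
  thresholdFactor = 2,
  now: Date = new Date(),
): Promise<boolean> {
  const hour = 60 * 60 * 1000;
  const day = 24 * hour;

  const current = await countErrors(new Date(now.getTime() - hour), now);
  const yesterday = await countErrors(
    new Date(now.getTime() - day - hour),
    new Date(now.getTime() - day),
  );

  // Avoid division by zero on quiet days: any errors after a silent
  // day-before window are treated as suspicious.
  if (yesterday === 0) return current > 0;
  return current / yesterday > thresholdFactor;
}
```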

The last open question is how to react to breaking changes that you may have accidentally released to production. We run a twofold strategy here: we minimise our time-to-market for bug fixes, and we mitigate the risk of larger issues with feature toggles. Let me go into some detail:

Usually, in a classic release management process, you have a plan for how to do your releases, and if there are major problems afterwards you execute a predefined rollback. If that is not possible, for one reason or another, there is usually a hot-fix-release-branch-deploy process defined by someone, somewhere. Here is the problem: if you need a hot fix, your team is probably already on fire. In that very moment you need to follow a process that most people are unfamiliar with, and which usually bypasses a lot of the safety measures you previously established in your release cycle. That is a bad position to be in, given the production problems you are facing at that very moment.

Our goal is to make the fastest possible release process our standard process. Thus we drive our teams to deploy every single commit to production. That also means that every commit has to be shippable, with enough tests to make sure our product still works. This is baking quality in from the start!

Still, things will break. But with a quick way to react and deploy a fix, we no longer even need rollback strategies. Being able to deploy a hot fix very quickly implies, however, that you can also quickly analyse the root cause, and that is not always true. If you know which commit was faulty, you can of course deploy a revert. But sometimes a new feature, in all its complexity across stories and commits, is simply not working right. Thus, we work a lot with feature toggles, making sure that all new functionality is toggled off in production. We also make sure that we can flip those toggles independently of deployments. In this way, we decouple the technical risk of a deploy from the business risk of a feature toggle. This reduces our need for reverts by about 90%, and most deployments run automatically and without problems. Every few days a new feature is toggled on. People are then usually more aware of the situation and monitor the app’s behaviour more closely (at least they should). Problems can then either be fixed quickly with a tiny commit or, if you encounter major blockers, you toggle the feature off again.
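A minimal sketch of what such a toggle can look like in code, assuming TypeScript and a hypothetical HTTP toggle service (toggles.example.com); the flag name and both checkout flows are invented for illustration. The key point is that the toggle is read at runtime, so flipping it does not require a deployment.

```typescript
// feature-toggles.ts
// Decoupling deploys from releases: the code for the new feature ships
// with every commit but only runs once the toggle is flipped at runtime.

async function isEnabled(feature: string): Promise<boolean> {
  const res = await fetch(`https://toggles.example.com/flags/${feature}`);
  if (!res.ok) return false; // fail closed: unknown or unreachable toggles stay off
  const { enabled } = await res.json();
  return enabled === true;
}

export async function checkout(cartId: string): Promise<string> {
  if (await isEnabled("new-checkout-flow")) {
    return newCheckoutFlow(cartId); // toggled on a few days after the deploy
  }
  return legacyCheckoutFlow(cartId); // safe fallback, one toggle flip away
}

// Placeholder implementations of both flows.
async function newCheckoutFlow(cartId: string): Promise<string> {
  return `new:${cartId}`;
}
async function legacyCheckoutFlow(cartId: string): Promise<string> {
  return `legacy:${cartId}`;
}
```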

In conclusion, we have far fewer and far less troublesome releases, while we can activate new features in a very controlled way. Thus, we not only deliver value fast, we also achieve higher quality at the same time. However, a very experienced and disciplined team is needed to work at such a high level of quality, commit by commit.

How To: Consumer Driven Contract Tests

In my last post I wrote about how to test microservices. When I got to the most difficult part, the integration, I said that going into the necessary details would blow up the size of the post. I am sorry, and I want to make up for it here.

When we design and create the interaction (and thus the integration) of different services with each other, we rely very much on the principles of “Consumer Driven Contracts”, in short CDCs. These contracts can be tested. While the basic idea seems simple (“let others test the things of our service that they think they need”), it implies a change in how we think about who tests what. So I want to spend some time explaining the concept step by step, with many pictures:

Imagine you are part of a team (the blue team) and work with another team (the purple team). The purple team’s service will ask the blue team’s service for the email address of a specific user (by ID). They have agreed, for the sake of the argument, to communicate in JSON via a RESTful interface. In other words: the consumer (purple team) has a contract with the provider (blue team).
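To make the agreement tangible, here is a hedged sketch of the contract in TypeScript; the path, field name and example values are my own illustration, not taken from the post.

```typescript
// contract.ts
// What the two teams agreed on, as the consumer sees it: send a user ID,
// get JSON back that contains that user's email address.
//
// Example exchange:
//   request:  GET /users/42/email
//   response: 200 OK, content-type: application/json
//             { "email": "jane.doe@example.com" }

export interface EmailLookupResponse {
  email: string; // the only field the consumer actually relies on
}
```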

Now our friends in the purple team want to make sure that the blue team will always give them an email as an answer. The first important thing is that the purple team needs to trust the blue team to figure out the right email address for a given user. If the purple team starts to validate the actual content of the blue team’s results, you have a lack of trust. That is a different, much more severe problem, and writing CDCs will not help you solve it.

Hence, the purple team only tests that they get an answer at all and that it contains an email address. They do not test which one.

So, assuming the blue team does it all right, the purple team would just write a test that sends some ID and expects back some email in valid JSON.
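Such a test could look roughly like the following sketch (TypeScript with Jest syntax and the built-in fetch of Node 18+; the base URL and path are assumptions). Note that it asserts on the shape of the answer only, never on a concrete email address.

```typescript
// email-contract.consumer.test.ts
// The purple team's contract test: some syntactically plausible email must
// come back as JSON. Asserting a concrete address would mean distrusting
// the provider, which is a different problem.

describe("contract: user email lookup", () => {
  const PROVIDER_URL = process.env.PROVIDER_URL ?? "http://localhost:8080";

  it("answers an email lookup with JSON containing an email address", async () => {
    const response = await fetch(`${PROVIDER_URL}/users/any-id/email`);

    expect(response.status).toBe(200);
    expect(response.headers.get("content-type")).toContain("application/json");

    const body = await response.json();
    // Shape, not content: any plausible email satisfies the contract.
    expect(typeof body.email).toBe("string");
    expect(body.email).toMatch(/^[^@\s]+@[^@\s]+\.[^@\s]+$/);
  });
});
```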

Now, whenever the purple team wants to know if the contract is still valid, they run the test. But think about when the contract could actually break: in this picture, the contract will only break if the blue team introduces (unintended, uncommunicated) changes to their service. So isn’t it a much better idea to run this specific test whenever the blue team’s service changes?

And this is what we do: the purple team, the consumer, drives the contract to make sure it holds up to the agreed functionality of the blue team’s service. By putting the purple team’s test into the blue team’s CI system, the blue team can make sure they do not accidentally break the contract.
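In the blue team’s pipeline this can be as simple as a step that runs the consumer-owned test files against the freshly started service. The sketch below assumes the purple team’s tests live under a contracts/purple-team directory in the blue team’s repository and are executed with Jest; both are illustrative choices, not prescriptions.

```typescript
// verify-contracts.ts
// Hypothetical CI step for the provider (blue team): after building and
// starting the service (omitted here), run the contract tests the consumer
// (purple team) handed over. If any fail, the commit never ships.
import { spawnSync } from "node:child_process";

const result = spawnSync("npx", ["jest", "contracts/purple-team"], {
  stdio: "inherit",
});

if (result.status !== 0) {
  console.error("Consumer contract broken: do not deploy this commit.");
  process.exit(result.status ?? 1);
}
```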

A few days later, when everything works smoothly and both teams feel comfortable, the blue team no longer needs the purple team in order to do a release. The blue team knows that the test they received from the purple team will catch any breach of the contract. The purple team can concentrate on something else.

This is it! This is the basic idea of CDCs: the purple team writes and maintains the test, but the blue team executes it. If the request went the other way around (the blue team sending an email and expecting an ID back), the test would be written by the blue team and executed by the purple team.

But this is only the start. Of course, the blue team has more than one service. And there is not just the purple team, but many teams making requests to the blue team’s different services. Now the blue team members have to be quite disciplined and ask the other teams for all the CDCs. This sounds like overhead, and quite exhausting. But here comes the big benefit: the blue team is, in all its work (and more importantly, its deployments), independent of the other teams and of the availability of the other teams’ services in test environments.

Anyone who has tried to get multiple services from various teams running in different (test) environments will smile now, knowing all the pain this usually inflicts. In this picture, the blue team has mitigated that risk.

While the blue team is doing quite well, there is more to it. Being independent of other teams and their services is nice, but you still have dependencies between your own services. You should resolve those as well: cover the connections between the team’s own services with CDC tests, too. This will give you:

  • fast feedback (surprise!)
  • independent deploys
  • reduced complexity

Each of those three points on its own leads to quicker development of features and a reduction in cycle time. Since you get the combined benefit in this situation, there is nothing I would argue against, so let’s do it!

If all these tests are in place, each service of the blue team is completely independent. Each can be deployed at any time while still making sure the interactions with the other services keep working.

And did you realize what this means? We do not need a big, heavy, error-prone and expensive set of Selenium end-to-end tests. We have already mitigated so many risks and can be confident that the interactions are working.

You cannot test everything this way, but you can certainly test a lot. In any case, improving a lower layer of the testing pyramid usually means you can reduce something at the top. That means: fewer end-to-end tests => less time spent on maintenance => more time to grow out of the role of a classic test manager.

Let’s embrace this!