Better, Faster, Stronger – Delivering High Quality Products

Did you know you can enable your team to build better software faster while having a stronger team culture? Too good to be true?
In recent years, agile has pushed testing earlier into the development cycle. As a result, more and more testers test new functionality as soon as a commit is pushed. Yet such teams still fail to deliver high quality software. Why? What is missing?
Working with diverse teams across multiple projects, Finn realised that testing doesn’t actually improve software quality. It’s just a bar assuring a certain level of quality that already exists. In order to actually improve, we must get involved in much more than just testing and think about the product as a whole.
In this session, Finn will share specific examples of how engaging with the business, engineering, process optimisation as well as the entire cross-functional team can lead to significant improvements in the product’s quality. At the end of the talk, you will know how to start with a holistic approach to improving product quality throughout the entire software delivery lifecycle.

— this is a talk I am currently giving at various conferences. Some people asked whether I could share the slides, but the files are just a bit too large. So instead I decided to include a recording of the talk (with the slides) here, as some of the slides only work well together with the spoken presentation.

How establishing a trustful error culture in your team gives you the final boost in quality

As pointed out before: with continuous deployment, the time from commit to live is also the minimum time you need to get a bug fixed – at best! How can you push this limit?

It usually requires that everyone is aware of the code, the business, the infrastructure, the tests and the pipeline. Only if each person has all this knowledge can you react quickly, without first gathering the “right” people to ship a fix. In other words: you need a high performing and truly cross-functional team. Let me emphasise this once more: from our perspective there is a clear advantage for the quality of the product here, because it is about reacting to (breaking) changes (even) faster.

What is the leverage you as a quality advocate have? Talking about errors! Your team will make mistakes, and that is good. We would be out of jobs if they did not. We are here to find them and to help the team prevent them going forward. We are the experts on errors – in a way. So make sure that your team does not waste a mistake, but learns from it.

Screen Shot 2019-02-07 at 10.35.35.png

Idea for image originally from: http://ww2.kqed.org/mindshift

There are a few kinds of mistakes your team will make:

  1. Sloppy mistakes. This is the most common kind. There are probably a dozen of them in this very blog post. The human brain can only concentrate for about 10-15 minutes until it needs a mini break. If we skip that break, we tend to make small mistakes.
    A typical mitigation strategy for this is pairing: when one person is not fully concentrated for a moment, the other one most often is.
  2. Aha-moment mistakes. This is when a mistake, and finding out about it, leads to a (small) new learning. It may happen when you have just understood something by reading, and even more so while explaining something to someone else.
    We also mitigate this by pairing – ideally pairing people with different skill levels. If a more senior person explains a lot to a more junior person, both have a higher chance of making this kind of error – and of learning something from it right away.
  3. Stretch mistakes. In this scenario, we are quite aware that we are stepping out of our comfort zone. But when you need to or want to try something new, you have to at least try, even if you already know that it is more error prone than business as usual.
    Our way around such requirements are so-called “spikes”: small, time-boxed stories that let us trial-and-error in a safe environment. The result of a spike can be a small prototype that is thrown away, a new service that gets some basic things working, or a branch that is ready for an all-team code review.
  4. High-stakes mistakes. These are just as risky as stretch mistakes, but usually there is much less to gain. They are not worth the effort – prevent your team from making these.

There are probably many more ways to categorise errors. There are also many more mitigation strategies, e.g. in our case the safety nets of our test-driven development. But telling the different types apart is typically quite easy for analysts who mostly work in the field of quality.

By encouraging your team to make mistakes (in a safe environment) you automatically get the basics of a great error culture right. And as soon as the impact of an error is reduced, people will be much less afraid of making errors. And if you are less afraid at work, you are usually more creative. If a broken build is then nothing bad or painful (while still an urgent matter!) you will have a more relaxed, trusting and creative atmosphere. And guess what – in the end a lot fewer errors happen in such an environment. Another boost for Quality!

The last bit of fine-tuning is to make sure your team has fun. Fun? Really? Yes.
Just like the point before: in an environment where people like to come to work, where they are happy to communicate and interact, and where it is fun to get work done, people will also be more focused and more passionate about what they do. As a result, they will make fewer errors with smaller impact. The ultimate boost for Quality!

Joint forces of the analysts: improving the quality of software even before it's built

Sometimes people say that QAs are the headlights of a project. The developers are the train engine and the BAs are the map of the surrounding area – they know where the train shall ride through the night. The train will somehow get moving either way. But with headlights that inform you early on about potential obstacles, you can maybe find another route on your map.

Screen Shot 2019-02-15 at 20.05.02.png

Source: wallpaperpulse.com

Sometimes people say that QAs, BAs and devs work on the same product – just at different points in time: the BAs have a really good idea of what is coming next. They are constantly thinking about the product one month out – the further future. Devs are working on the product of the next days, at most the next week – the immediate future. And you as a QA usually know best how the product behaves today – you know the present product.

When it behaves in unintended ways, you talk to the same stakeholders as a BA who identifies intended behaviour changes to be implemented. It is incredibly useful to bring these perspectives together!

Usually, QAs know their way around today's product in detail and can point out weaknesses that can be fixed with new requirements, as well as shortcuts when introducing new features. That is all incredibly valuable for the BA. The outcome for the QA? If you consistently collaborate on analysis and story writing (to some extent – it's not necessary to type the story together) you can think about weird edge cases and scenarios long before the story is implemented and discuss the desired behaviour with the BA right away. Baking quality in can be so time-saving!

Again, you cannot come up with every possibility in your head; you will still need to conduct some creative, exploratory testing. But by working with BAs at the beginning you reduce this effort by 60, 70 or sometimes 80%. Your workload after development decreases, because quality already increases before the software is developed. Well done!

How to get involved earlier in the software development life cycle: be involved!

Once the process helps us to focus on fewer things and enables us to collaborate and test as a team, I as a “tester” will have more time. And I should spend that time where the product is actually baked: I should get more involved with the developers.

Many QAs are a bit hesitant to move anywhere close to the actual application code. But they shouldn't be: our main contribution is not that we can code. There are other people who are much more specialised in that. We call them developers. They are much better coders and, as such, much better suited to write automated tests. What, then, is left for a QA?

We bring very strong analytical skills. One of them is the ability to analyse all tests on all levels from a strategic point of view: What is the share of unit, integration and end-to-end tests? Where do we cover which business logic, and where do we have gaps in our test coverage?

Many QAs who specialize in front-end test automation write lots and lots of end-to-end / user-journey tests. These are typically the most flaky, hard to maintain and cost intensive tests you can imagine. Hence, we usually advocate adding as few of them as possible, as late as possible.

Screen Shot 2019-02-15 at 20.09.11.png
Picture: 100 End-2-End tests to rule them all

Instead, QAs should aim to understand the big picture of the system architecture: What services do we have? How are they connected? How are they split? Does each service have a distinct capability? If so, what is it and how does it relate to your business?

Once you have figured that out, you can assess how to test each of these services or domains independently. If that works, have a look at the communication between the domains and cover it as well. Once all of this is covered, you may want to add a slight hint of an end-to-end test on top.
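To make this layering concrete, here is a minimal sketch in Python (the domains, function names and response fields are hypothetical, purely to illustrate the idea): a unit test that pins down business logic inside one domain, and a contract-style test that only checks the agreed shape of what one domain sends to another. A thin end-to-end journey would then sit on top of tests like these.

```python
# Hypothetical example with a "pricing" domain and an "orders" domain (runnable with pytest).

# Layer 1: unit test for business logic inside the pricing domain.
def calculate_total(net_price: float, vat_rate: float) -> float:
    return round(net_price * (1 + vat_rate), 2)

def test_calculate_total_applies_vat():
    assert calculate_total(100.0, 0.19) == 119.0

# Layer 2: contract-style test for the communication between the domains.
# The orders domain consumes pricing responses; we check the agreed shape,
# not the pricing domain's internals.
EXPECTED_PRICING_RESPONSE_FIELDS = {"order_id", "total", "currency"}

def fake_pricing_response() -> dict:
    # In a real setup this would be a recorded or provider-verified example.
    return {"order_id": "A-123", "total": 119.0, "currency": "EUR"}

def test_pricing_response_matches_contract():
    assert EXPECTED_PRICING_RESPONSE_FIELDS <= fake_pricing_response().keys()
```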

Screen Shot 2019-02-15 at 20.11.06.png

Obviously, the second approach is much more difficult – but that is exactly what you are there for and what you contribute to the team. You are the one who keeps the big picture in mind and consults your team members on where to add which test, and in what way. What is the key assertion in a given test case? What is the right level for it? With a strong coder and your analytical abilities in testing, you can ensure that things are working while they are implemented. That does not only improve quality early on, it also significantly decreases the time you need to spend testing (and reporting defects) afterwards.

Still, some defects will be released. No matter how much money you invest, there is no way to guarantee bug-free software (even if you are NASA and spend more than $320 million on a project). The second thing you can figure out with your dev team is how to identify and catch these defects. With the lean process (see above) that you established, you can be sure to ship a potential fix very fast. The way to detect them is a helpful monitoring setup. This involves visualising the number of errors as well as server/database requests (and possibly the deviation from 24 hours before). If you go to a really professional level, you want to think about anomaly detection, so that your system can notify you on its own once something is off.
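As a rough illustration of that "deviation from 24 hours before" check, here is a minimal sketch in Python. The metrics source is a hypothetical callable (in practice you would query your monitoring backend, e.g. Prometheus, CloudWatch or a log store), and the thresholds are made up:

```python
from datetime import datetime, timedelta
from typing import Callable

def errors_look_anomalous(
    fetch_error_count: Callable[[datetime, datetime], int],
    window_minutes: int = 15,
    factor: float = 2.0,
    minimum: int = 20,
) -> bool:
    """Compare the current error count with the same window 24 hours earlier."""
    now = datetime.utcnow()
    window = timedelta(minutes=window_minutes)
    current = fetch_error_count(now - window, now)
    yesterday = fetch_error_count(now - timedelta(hours=24) - window, now - timedelta(hours=24))
    # Ignore tiny absolute numbers, then flag a large deviation from yesterday.
    return current >= minimum and current > factor * max(yesterday, 1)

# Example with a fake metrics source; a scheduler or your monitoring tool would
# call this periodically and page the team when it returns True.
if __name__ == "__main__":
    def fake_counts(start: datetime, end: datetime) -> int:
        recent = end > datetime.utcnow() - timedelta(hours=1)
        return 120 if recent else 30

    print(errors_look_anomalous(fake_counts))  # True: 120 errors now vs. 30 yesterday
```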

Screen Shot 2019-02-15 at 20.12.49.png

The last open question is how to react to breaking changes that you may have accidentally released to production. We are running a twofold strategy here: we try to minimize our time-to-market for bug fixes, and we mitigate the risk of other, larger issues with feature toggles. Let me go into some details:

Usually, in a classic release management process, you have a plan for how to do your releases, and if there are major problems afterwards you execute a predefined rollback. If this is – for one reason or another – not possible, there is usually a hot-fix-release-branch-deploy process defined by someone somewhere. Here is the problem: if you need a hot fix, your team is probably already on fire. In this very moment you need to follow a process that most people are unfamiliar with and which usually bypasses a lot of the safety measures you previously established in your release cycle. That is quite a bad setup, given the production problems you are facing in this very moment.

Our goal is to make the fastest possible release process our standard process. Thus, we drive our teams to deploy every single commit to production. That also means that every commit has to be shippable – with enough tests to make sure our product is still working. This is baking quality in already!

Screen Shot 2019-02-15 at 20.15.03.png
wikimedia.org

Still, things will break. But with a quick way to react and deploy a fix, we do not even need rollback strategies any more. Being able to deploy a hot fix very quickly, however, implies that you can also quickly analyse the root cause – and that is not always true. If you know which commit was faulty, you can of course deploy a revert. But sometimes a new feature, in its complexity across stories and commits, is just not working right. Thus, we work a lot with feature toggles, making sure that all new functionality is toggled off in production. We also make sure that we can toggle those features independently of deployments. This way, we decouple the technical risk of a deploy from the business risk of a feature toggle. It reduces our need for reverts by about 90%, and most deployments run automatically and without problems. Every few days a new feature is toggled on. People are then usually more aware of the situation and monitor the app's behaviour more closely (at least they should). Problems can then be identified and either quickly fixed with a tiny commit or, if you encounter major blockers, you toggle the feature off again.
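As a small sketch of what such a toggle can look like in code (the toggle name, the JSON file used as a toggle store and the checkout functions are all hypothetical; real setups often use a dedicated toggle service or remote config that can be changed without deploying):

```python
import json
import os

# Hypothetical toggle store: a JSON file that can be edited at runtime,
# independently of any deployment. A remote config service works the same way.
TOGGLE_FILE = os.environ.get("FEATURE_TOGGLE_FILE", "toggles.json")

def is_enabled(toggle_name: str) -> bool:
    try:
        with open(TOGGLE_FILE) as handle:
            toggles = json.load(handle)
    except FileNotFoundError:
        toggles = {}
    # Default to "off": new functionality ships dark until explicitly enabled.
    return bool(toggles.get(toggle_name, False))

def legacy_checkout(cart: list) -> str:
    return f"processed {len(cart)} items via the existing flow"

def new_checkout(cart: list) -> str:
    return f"processed {len(cart)} items via the new flow"

def checkout(cart: list) -> str:
    # The deploy carries both code paths; the toggle decides which one runs.
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```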

In conclusion, we have far fewer and far less troublesome releases, while we can activate new features in a very controlled way. Thus, we do not only deliver value fast, we also achieve higher quality at the same time. However, a very experienced and disciplined team is needed to work at such a high level of quality commit by commit.

How changes to your process increase the quality of your software.

When we have a close look at the process steps of a typical agile software delivery team, we will realize where value is created and where time is wasted.

Screen Shot 2019-02-07 at 10.01.57.png

In the first one, “in Analysis”, we plan to create value. Once we have a plan for how to add value for a user to our product, we put this story into “Ready for Dev”. And here we are doing literally nothing or, in other words: wasting time. In the best case, the story is still good to play – in the best case! Often stories are already a bit outdated before they are picked up, and if they have been lying around for too long they may even be completely degraded. In the next column, “in dev”, we are actually adding the value to the product, and in the next “Ready for…” process step we are – you guessed it – wasting time again. Until someone can check for the planned value and finally deliver it. If we find a bug while testing, the deliverable software is back in a state of analysis all over again – the bug is prioritized against other features and maybe fixed, maybe not.

As QAs we have little influence over the “Ready for Dev” column, but the “Ready for QA” is ours. Removing this column can have some positive impact on the velocity and quality if you pair it with another tool: work in progress (WIP) limits.

The idea is the following: if there is no “Ready for QA” column, there is no place for a developer to just drop a ticket/story before picking up the next piece of work – unless the story is put directly into “in QA” (without anyone actually working on it). If the team then agrees on WIP limits, one could argue that a single QA person can work on one or at most two stories at the same time. Thus, if there are already two stories “in QA”, another one cannot be added. Thus, the story cannot move, and thus the devs cannot start a new one (the “in dev” column should also have a WIP limit). The best way out is to help the QA get the testing done, and so everyone gets more involved in testing – a big win for quality.
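To make the mechanism tangible, here is a tiny sketch in Python of a board that refuses to move a story into a full column (column names and limits are illustrative, not any real tool's API):

```python
class Board:
    """Minimal board model that enforces work-in-progress limits per column."""

    def __init__(self, wip_limits: dict):
        self.wip_limits = wip_limits
        self.columns = {name: [] for name in wip_limits}

    def move(self, story: str, column: str) -> bool:
        if len(self.columns[column]) >= self.wip_limits[column]:
            # Column is full: the story cannot move, so the team has to help
            # finish work in progress instead of starting something new.
            return False
        for stories in self.columns.values():
            if story in stories:
                stories.remove(story)
        self.columns[column].append(story)
        return True

board = Board({"in dev": 3, "in QA": 2})
board.move("story-1", "in QA")
board.move("story-2", "in QA")
assert board.move("story-3", "in QA") is False  # QA is full: pitch in on testing
```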

This will decrease the time to market for new features drastically. As a plus, bug-fixes can be delivered faster, too. Even in the standard process.

Screen Shot 2019-02-07 at 10.07.02.png

We applied these measures on different projects in different contexts. In one case, we were able to decrease the cycle time (= the time from “Analysis” to “Done”) for stories from 13 days to 4 days – without anyone having to work “harder”. The restrictive WIP limits have a very good effect: if you cannot have that many stories “in dev”, you do not have to perform constant context switches. You focus on getting one thing done before the next. Surprise: that actually helps to get things done! And being more focused on one specific task leads to fewer mistakes, and thus fewer defects.

In another example where we applied the same technique, we were able to increase our velocity by a bit more than 30% (!) without any impact on the quality of the software.

These are some techniques to truly bake chocolate into the muffin: reduce waste in order to increase your focus on the really important things. Spend your available time on the urgent matters and “suddenly” you end up with a higher quality product.

Dimensions of Quality

One year ago I introduced the “muffin concept” in a small blog post, “How we do Quality at ThoughtWorks”. Ever since then I have been to various meetups and conferences to discuss the idea and all the concepts behind it. After another year and dozens of discussions, there are more thoughts around how to bake quality in. They cover quite different aspects, so I decided to split this post into a mini-series of five posts.

Enjoy the read and please give me feedback: tell me what you think!


People say that quality is like the chocolate on a muffin. Is it? Let's say the product we build was indeed a muffin. The business analysts brought the recipe, and the developers baked it. Afterwards, the testers put chocolate on top.

If I imagine that muffin, it is still a bit dull. The muffin is only perceived to be of really high quality if there are some chocolate chunks on top: just like testing software.

10797887872_IMG_1353.jpg

The only problem is that – just like testing in the software delivery process – the chocolate is only “applied” after the major part of the product has been baked. It looks good and smells good. But does a muffin with only a few chocolate chunks on top really taste better?

10797691648_IMG_1357.jpg

No, because there is no chocolate inside the muffin – just like testing does not improve the quality of the software:

When you test software, you basically analyse a (hopefully) isolated system in a controlled environment. And no matter what you do, that system does not change. You may find behaviours in the system that are unexpected (which are the bugs / defects we are trying to find). But they were in the system already (before you started your test case) and they will be in there afterwards. No system under test ever changes its state (exception: quantum mechanics). Thus, the system does not evolve or improve (in quality) while you test it. Another cycle of development is necessary to actually improve quality.

But that is quite sad. I am a quality analyst. An enthusiast. Caring about the quality of my product is my job description. Usually I am the team member most passionate about it. How can I be the only one who is not able to actually improve the quality?

With this mini-series of blog posts we want to investigate how we can be involved to improve quality in software early on – how to bake chocolate into the muffin!

10797458624_IMG_1362.jpg

Usually, a typical day in the life of a tester may look like this: you pick a new build, deploy it to a test server, and run a smoke test and your extensive test suite. Possibly it is (partly) automated. When no blockers are found, you monitor the production environment, ensure everything is healthy, and announce and ship the build to production. Maybe you have a test suite running in production to verify your delivery there:

Screen Shot 2019-02-15 at 20.21.34.png

However, that is only the last bit of a longer process. Normal, agile software delivery teams have a process that looks similar to the following one:

Screen Shot 2019-02-07 at 10.01.57.png

Each column is often reflected in tools like Mingle, Trello or Jira: “in analysis” is the step where product managers or business analysts work out the requirements for the project. Once they are done, the stories move into the next column. That move could happen in a planning meeting where a sprint backlog is filled. We call this backlog the “Ready for Dev” column. At some point a dev picks up a story, works on it, finishes it and puts it into “Ready for QA”, until a QA picks it up, works on it and ships it. Then a story is finally done.

If a defect is found during the QA work, the ticket in the best case goes back to the devs – or all the way back to “in analysis”. With these long feedback loops it can take a while until all the kinks are out of a new piece of functionality.

Here we want to tighten the feedback loop and get involved earlier. This is exactly the point where we can improve quality early on and where we can measure it. We identified four different fields where we are usually involved and where we have an actual impact on the quality of our product. You can read about each one of them in an individual (small) post:

  1. How changes to your process increase the quality of your product. (2 min)
  2. How to get involved earlier in the software development life cycle: be involved! (3 min)
  3. Joint forces of the analysts: improving the quality of software even before it's built. (1 min)
  4. How establishing a trustful error culture in your team gives you the final boost in quality. (2 min)

Those four points are our recipe to bake quality in: you add some chocolate early on through process improvements. Then we add some technical strawberries, along with the right amount of cream in the business space. We finish it off with some colourful sugar toppings in the team culture and voilà… we really bake quality in!

10798083424_IMG_1367.jpg

With this holistic approach, we also step beyond being pure “Quality Analysts”. We still analyze the quality of software. But we also specialize in so many more things that lead to a better product. Thus, we truly are Product Quality Specialists.

“ThoughtWorks Presents” – Meetup in Hamburg

On Tuesday, March 28th, I was invited to Hamburg. ThoughtWorks has a long-running series of meetups in our Hamburg office, and I had the honor of being the next person to present there.

For this occasion we talked about how to build a high quality product, and what the differences and implications are compared to talking “only” about high quality software. We discussed what ways we can think of to improve and measure quality, and what we should be looking at beyond business requirements and bugs in the software.

It was a very nice evening with lots of participants. We had many interesting questions that led to even more interesting discussions. I am already looking forward to the next time – not only to see old colleagues in Hamburg, but also to continue with all the talks and thoughts.

Link to slides (PDF, opens in new tab): building a high quality product blog