Better, Faster, Stronger – Delivering High Quality Products

Did you know you can enable your team to build better software faster while having a stronger team culture? Too good to be true?
In recent years, agile has pushed testing earlier into the development cycle. As a result, more and more testers test new functionality as soon as a commit is pushed. Yet such teams still fail to deliver high-quality software. Why? What is missing?
Working with diverse teams across multiple projects, Finn realised that testing doesn’t actually improve software quality. It merely asserts a level of quality that already exists. To actually improve it, we must get involved in much more than testing and think about the product as a whole.
In this session, Finn will share specific examples of how engaging with the business, engineering and process optimisation, as well as with the entire cross-functional team, can lead to significant improvements in the product’s quality. At the end of the talk, you will know how to start applying a holistic approach to improving product quality throughout the entire software delivery lifecycle.

— this is a talk I am currently giving at various conferences. Some people asked whether I could share the slides, but the files are just a bit too large. So instead I decided to include a recording of the talk (with the slides) here, as some of them only work well alongside the presentation.

How establishing a trustful error culture in your team gives you the final boost in quality

As pointed out before: with continuous deployments, the time from commit to live is also the minimum time you need to get a bug fix out – at best! If your pipeline takes 20 minutes from commit to production, even the most trivial fix will take at least those 20 minutes to reach your users. How can you push this limit?

It usually requires that everyone is aware of the code, the business, the infrastructure, the tests and the pipeline. Only if each person has all this knowledge can you react quickly, without first gathering the “right” people to ship a fix. In other words: you need a high-performing and truly cross-functional team. Let me emphasise this once more: from our perspective there is a clear advantage for the quality of the product here – it is about reacting to (breaking) changes (even) faster.

What leverage do you, as a quality advocate, have? Talking about errors! Your team will make mistakes, and that is good – we would be out of jobs if they did not. We are here to find them and to help the team prevent them going forward. We are the experts on errors, in a way. So make sure your team does not waste a mistake, but learns from it.

Screen Shot 2019-02-07 at 10.35.35.png

Idea for image originally from: http://ww2.kqed.org/mindshift

There are a few kinds of mistakes your team will be making:

  1. Sloppy mistakes. These are the most common – there are probably a dozen of them in this very blog post. The human brain can only concentrate for about 10-15 minutes before it needs a mini break. If we skip that break, we tend to make small mistakes.
    A typical mitigation strategy is pairing any two people: when one is not fully concentrated for a moment, the other one most often is.
  2. Aha-moment mistakes. These happen when you gain a (small) new insight by making a mistake and discovering it. It may happen when you understand something while reading, and even more so while explaining something to someone else.
    We also mitigate these by pairing – ideally developers with different skill levels. If a more senior person explains a lot to a more junior person, both have a higher chance of making this kind of mistake – and of learning something from it right away.
  3. Stretch mistakes. In this scenario, we are quite aware that we are stepping out of our comfort zone. But when you need to or want to try something new, you have to at least try, even if you already know it is more error-prone than business as usual.
    Our way around such requirements is the so-called “spike”: a small, time-boxed story that lets us trial-and-error in a safe environment. The result of a spike can be a small prototype that is thrown away, a new service with just the basics working, or a branch that is ready for an all-team code review.
  4. High-stakes mistakes. These are just as risky as stretch mistakes, but usually there is much less to gain. They are not worth the effort – prevent your team from making these.

There are probably many more ways to categorise errors, and there are many more mitigation strategies – in our case, for example, the safety nets of our test-driven development. But telling the different types apart is typically quite easy for analysts who mostly work in the field of quality.

By encouraging your team to make errors (in a safe environment) you automatically get the basics of a great error culture right. As soon as the impact of an error is reduced, people will be much less afraid of making errors. And if you are less afraid at work, you are usually more creative. If a broken build is then nothing bad or painful (while still an urgent matter!), you will have a more relaxed, trusting and creative atmosphere. And guess what – in the end, far fewer errors happen in such an environment. Another boost for Quality!

The last bit of fine-tuning is to make sure your team has fun. Fun? Really? Yes.
Just like the point before: in an environment where people like to come to work, where they are happy to communicate and interact, and where it’s fun to get work done, people will be more focused and more passionate about what they do. As a result, they will make fewer errors with smaller impact. The ultimate boost for Quality!

Dimensions of Quality

One year ago I introduced the “muffin concept” in a small blog post, “How we do Quality at ThoughtWorks”. Ever since then I have been to various Meetups and conferences to discuss the idea and the concepts behind it. After another year and dozens of discussions, there are more thoughts around how to bake quality in. They cover quite different aspects, so I decided to split this post into a mini-series of five posts.

Enjoy the read and please give me feedback: tell me what you think!


People say that quality is like the chocolate on a muffin. Is it? Let’s say the product we build were indeed a muffin: the business analyst brought the recipe, the developers baked it, and afterwards the testers put chocolate on top.

If I imagine that muffin, it is still a bit dull. The muffin is only perceived to be of really high quality if there are some chocolate chunks on top – just as software is only perceived to be of high quality once it has been tested.

10797887872_IMG_1353.jpg

The only problem is that – just like testing in the software delivery process – the chocolate is only “applied” after the major part of the product has been baked. It looks good and smells good. But does a muffin with only a few chocolate chunks on top really taste better?

10797691648_IMG_1357.jpg

No, because there is no chocolate inside the muffin – just like testing does not improve the quality of the software:

When you test software, you basically analyse a (hopefully) isolated system in a controlled environment. And no matter what you do, that system does not change. You may find unexpected behaviours in the system (which are the bugs/defects we are trying to find), but they were in the system already, before you started your test case, and they will be in there afterwards. No system under test ever changes its state merely by being tested (exception: quantum mechanics). Thus, the system does not evolve or improve (in quality) while you test it. Another cycle of development is necessary to actually improve quality.

But that is quite sad. I am a quality analyst. An enthusiast. Caring about the quality of my product is my job description. Usually I am the team member most passionate about it. How can I be the only one who is not able to actually improve the quality?

With this mini-series of blog posts we want to investigate how we can get involved early on to improve the quality of software – how to bake the chocolate into the muffin!

10797458624_IMG_1362.jpg

A typical day in the life of a tester may look like this: you pick a new build, deploy it to a test server, and run the smoke test and your extensive test suite – possibly (partly) automated. When no blockers are found, you check that the production environment is healthy, then announce and ship the build to production. Maybe you also have a test suite running in production to verify your delivery there:

Screen Shot 2019-02-15 at 20.21.34.png
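If you were to script that routine, it might look roughly like the sketch below. This is a minimal illustration only – all stage names and the build label are hypothetical placeholders, not a real CI tool or API:

```python
# Minimal sketch of the tester's daily routine as a scripted pipeline.
# All stage names and the build label are hypothetical placeholders.

def deploy_to_test_server(build):
    print(f"Deploying {build} to the test environment ...")

def run_smoke_tests(build):
    print(f"Running smoke tests against {build} ...")
    return True  # assume green for the sake of the sketch

def run_full_test_suite(build):
    print(f"Running the extensive (partly automated) suite against {build} ...")
    return []  # list of blocking defects; empty means "no blockers"

def production_is_healthy():
    print("Checking production monitoring ...")
    return True

def ship_to_production(build):
    print(f"Announcing and shipping {build} to production ...")

def daily_routine(build):
    deploy_to_test_server(build)
    if not run_smoke_tests(build):
        print("Smoke tests failed - stop and feed back to the team.")
        return
    blockers = run_full_test_suite(build)
    if blockers:
        print(f"Blockers found: {blockers} - this build does not go out.")
        return
    if production_is_healthy():
        ship_to_production(build)

daily_routine("build-4711")
```

In practice most of these stages live in a CI/CD pipeline rather than a script, but the shape of the flow is the same.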

However, that is only the last bit of a longer process. Agile software delivery teams typically have a process that looks similar to the following one:

Screen Shot 2019-02-07 at 10.01.57.png

Each column is often reflected in tools like Mingle, Trello or Jira: “In Analysis” is the step where Product Managers or Business Analysts work out the requirements. Once they are done, stories move into the next column – often via a planning meeting where a sprint backlog is filled; we call this backlog the “Ready for Dev” column. At some point the devs pick up a story, work on it, finish it and put it into “Ready for QA”, where a QA picks it up, works on it and ships it. Only then is a story finally done.

If a defect is found during the QA work, the ticket needs to go back to the devs in the best case – or all the way back to analysis. With these long feedback loops it can take a while until all the kinks are out of a new piece of functionality.

This is where we want to tighten the feedback loop and get involved earlier – exactly the point where we can improve quality early on and where we can measure it. We identified four fields where we are usually involved and where we have an actual impact on the quality of our product. You can read about each of them in an individual (small) post:

  1. How changes to your process increase the quality of your product. (2 min)
  2. How to get involved earlier in the software development life cycle: be involved! (3 min)
  3. Joint forces of the analysts: improving the quality of software even before it’s built. (1 min)
  4. How establishing a trustful error culture in your team gives you the final boost in quality. (2 min)

These four points are our recipe for baking quality in: you add some chocolate early on through process improvements, then some technical strawberries along with the right amount of cream in the business space. We finish it off with some colourful sugar toppings in the team culture and voilà… we really bake quality in!

10798083424_IMG_1367.jpg

With this holistic approach, we also step beyond being pure “Quality Analysts”. We still analyse the quality of software, but we also specialise in many more things that lead to a better product. Thus, we truly are Product Quality Specialists.

“ThoughtWorks Presents” – Meetup in Hamburg

On Tuesday, March 28th, I was invited to Hamburg. ThoughtWorks has a long-running series of Meetups in our Hamburg office, and I had the honour of being the next person to present there.

On this occasion we talked about how to build a high-quality product, and what the differences and implications are compared to talking “only” about high-quality software. We discussed ways to improve and measure quality, and what we should look at beyond business requirements and bugs in the software.

It was a very nice evening with lots of participants. Many interesting questions led to even more interesting discussions. I am already looking forward to the next time – not only to see old colleagues in Hamburg, but also to continue all the talks and thoughts.

Link to slides (PDF, opens in new tab): building a high quality product blog

Pure Performance

Episode 21: How ThoughtWorks helped Otto.de transform into a real DevOps Culture

Finn Lorbeer (@finnlorbeer) is a quality enthusiast working for Thoughtworks Germany. I met Finn earlier this year at the German Testing Days, where he presented the transformation story at Otto.de. He helped transform one of their 14 “line of business” teams by changing the way QA was seen by the organization. Instead of having a WALL between Dev and Ops, the teams started to work as real DevOps teams. Further architectural and organizational changes ultimately allowed them to increase deployment frequency from 2-3 per week to up to 200 per week for the best-performing teams.


Episode 22: Latest trends in Software Feature Development: A/B Tests, Canary Releases, Feedback Loops

In Part II with Finn Lorbeer (@finnlorbeer) from Thoughtworks, we discuss some of the new approaches to implementing new software features. How can we build the right thing the right way for our end users?
Feature development should start with UX wireframes to get feedback from end users before writing a single line of code. Feature teams then need to define and implement feedback loops to understand how features operate and are used in production. We also discuss the power of A/B testing and canary releases, as they allow teams to “experiment” with new ideas and, thanks to tight feedback loops, quickly learn how end users accept them.