Lessons Learned in Software Testing

Cem Kaner, James Bach, Bret Pettichord (2001)
Review date: March, 2012
Summary

This book has eleven chapters, each containing between seven and roughly fifty lessons. Since some chapters contain a lot of lessons, it's quite difficult to create an accurate summary, but here's an attempt:

Chapter one, "The Role of the Tester", places the tester in a larger context. What a tester does, why, whom he/she interacts with, and what he/she should not do.

Chapter two, "Thinking like a Tester", is about deeper mechanisms underlying the tester's work. It starts with the lesson "Testing is applied epistemology", and takes it from there. It's also about thinking and exploring. An unusually deep chapter for this kind of book.

Chapter 3, "Testing Techniques", contains just that. It's not composed of too many lessons, but each lesson is several pages and includes lots of terminology and techniques that are not found everywhere.

Chapter 4, "Bug Advocacy", is about a core topic: bugs. How to report them, which ones to fight over, how to make sure that they get the attention they need, and how to get them fixed.

Chapter 5, "Automating Testing", treats many aspects of test automation: what to aim for, when to automate, when not to, and how to involve the programmers.

Chapter 6, "Documenting Testing", talks about IEEE Standard 829, templates, and communication. This chapter contains relatively few lessons.

Chapter 7, "Interacting with Programmers", is really about communication and common sense.

Chapter 8, "Managing the Testing Project", is a catch-all chapter covering things like project culture, readiness for change, adaptation of practices, and how to decide that enough testing has been performed. In fact, it contains a lot more.

Chapter 9, "Managing the Testing Group", is devoted to the people dimension: how to treat colleagues and how to hire. But it's also about some fundamental traits like integrity and diversity.

Chapter 10, "Your Career in Software Testing", is somewhat specific to the American job market, but it does give everyone some career hints, like "build a portfolio" and "improve your public speaking skills".

Chapter 11, "Planning the Testing Strategy", wraps everything up by giving a few hints about how to approach the overall testing strategy.

Opinion

I really, really liked this book! At first I was a bit skeptical about the format (293 lessons in roughly 250 pages), but I soon bought into the concept. In spirit, this book is quite similar to Facts and Fallacies of Software Engineering and The Art of Agile Development. All these books are packed with tiny, self-contained packages of knowledge. I was also surprised by how up to date this book seems, despite the fact that it was written ten years ago.

I always make notes when reading a book, so that I have something to base my review on. For this book, there were many notes. I might as well just list my favorites in lesson order.

The first cool things were the lessons that urge the reader to study epistemology, psychology, and systems thinking (models in particular). The authors acknowledge that there are many good testers out there who never studied these subjects, but argue that the reader should aim to be better than good. What I liked about these statements is that they point the reader towards another body of knowledge. Many testing books talk about techniques, tools, and processes, but don't direct the reader towards broader knowledge.

There's a great lesson, apparently based on some older literature, which says that requirements are studied through conference (talking with others), inference (extrapolation), and reference (discovery). This seems sort of obvious, but then again, there's a lot of literature out there that assumes that requirements simply exist. Anyway, I have a thing for books that find good phrasings for elusive concepts. Another good lesson is the one on bias. Of course testers get biased, and here we get eight examples of different types of bias.

All of the lessons above come from chapters one and two. Then there's chapter three, in which the 50th lesson resides. It's one of the best summaries of testing techniques I've ever read! The rest of the chapter is also very good. It summarizes the all-pairs technique (a sketch of the idea follows below), covers some quality attributes, and lists some problem drivers. It's a lot of testing theory in few pages.
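
Just to illustrate what the all-pairs idea is about, here is a toy sketch of my own in Python (it is not the book's algorithm, and the parameters are made up): instead of running every combination of parameter values, you pick a much smaller set of test cases such that every pair of values from any two parameters still appears at least once. A naive greedy version could look like this:

  # A minimal sketch of the all-pairs (pairwise) idea. The parameters below are
  # hypothetical examples, not taken from the book.
  from itertools import combinations, product

  parameters = {
      "browser": ["Firefox", "Chrome", "Safari"],
      "os": ["Windows", "Linux", "macOS"],
      "locale": ["en", "sv"],
  }
  names = list(parameters)

  # Every pair of values (one from each of two different parameters) must be covered.
  required = set()
  for n1, n2 in combinations(names, 2):
      for v1, v2 in product(parameters[n1], parameters[n2]):
          required.add(((n1, v1), (n2, v2)))

  # Greedy selection: repeatedly pick the full combination that covers the most
  # still-uncovered pairs. Not optimal, but far smaller than the full cartesian product.
  tests = []
  while required:
      best, best_covered = None, set()
      for combo in product(*(parameters[n] for n in names)):
          assignment = dict(zip(names, combo))
          covered = {pair for pair in required
                     if all(assignment[n] == v for n, v in pair)}
          if len(covered) > len(best_covered):
              best, best_covered = assignment, covered
      tests.append(best)
      required -= best_covered

  for t in tests:
      print(t)

With three browsers, three operating systems, and two locales the full cartesian product is 18 combinations, while a pairwise selection like the one above gets away with roughly nine or ten tests.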

Speaking of bias, of all the lessons worth reading in chapter four, I decided to pay the most attention to lesson 72: "Minor bugs are worth reporting and fixing". The argument goes like this (slightly rephrased): if you stop caring about minor bugs, then eventually you will stop caring about bigger bugs, and that will lead to the system's death.

The whole chapter on test automation is good. The writers take a very realistic stance (automated tests don't find that many bugs, capture/replay doesn't work, automation tools have bugs) and also give practical advice about how to succeed and how to make the best use of automation. Like chapter three, this chapter contains a lot of information. All good stuff. A lesson that stood out was the one on testability: "Testability is visibility and control". The authors claim that the following features increase the ability to observe and control: access to the source code, logging, diagnostics, event triggers, and permitting multiple instances (the list is longer); a small sketch of what this means in practice follows below. Anyway, that's not how I define testability, but it's an interesting way of slicing the problem.
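
To make the "visibility and control" phrase concrete, here is a small sketch of my own (the RetryingFetcher class and its collaborators are hypothetical, not taken from the book). Control comes from being able to inject the I/O and the delay; visibility comes from logging and from recording the attempts that were made:

  import logging
  from typing import Callable

  logger = logging.getLogger("fetcher")

  class RetryingFetcher:
      def __init__(self,
                   fetch: Callable[[str], str],     # control: inject the I/O
                   sleep: Callable[[float], None],  # control: inject the delay
                   max_attempts: int = 3):
          self.fetch = fetch
          self.sleep = sleep
          self.max_attempts = max_attempts
          self.attempts = []                        # visibility: record what happened

      def get(self, url: str) -> str:
          for attempt in range(1, self.max_attempts + 1):
              try:
                  logger.debug("attempt %d for %s", attempt, url)  # visibility: logging
                  result = self.fetch(url)
                  self.attempts.append((attempt, "ok"))
                  return result
              except IOError as exc:
                  self.attempts.append((attempt, str(exc)))
                  self.sleep(2 ** attempt)          # a test can pass a no-op sleep
          raise IOError(f"gave up on {url} after {self.max_attempts} attempts")

  # A test exploits both dimensions: no real network, no real sleeping,
  # and the sequence of attempts can be asserted on directly.
  def test_retries_then_succeeds():
      responses = iter([IOError("timeout"), "payload"])
      def fake_fetch(url):
          r = next(responses)
          if isinstance(r, Exception):
              raise r
          return r
      fetcher = RetryingFetcher(fetch=fake_fetch, sleep=lambda s: None, max_attempts=3)
      assert fetcher.get("http://example.org") == "payload"
      assert [status for _, status in fetcher.attempts] == ["timeout", "ok"]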

In the next chapter, there are many good pointers as well. The waterfall and the V-model are discussed from a tester's perspective, and the obvious conclusion (that testers should be involved from the start of the project) is drawn. We also learn what activities testers should engage in during the early phases of a development project. In fact, we can spot some tendencies towards "agile testing" here. The funny thing is that the age of the book results in phrases like "seriously consider using Extreme Programming methods to develop automated tests".

Another cool thing in the same chapter is the use of a balanced scorecard to capture the state of the testing process from various angles. Balanced scorecards were heavily hyped around 2000, so it's no surprise that they made it into software development too, but it's still a cool idea.

Towards the end of the book, there are some lessons about licensing and testing strategies that are worth reading too.

This hasn't been much of a review so far. I've just picked out some sections that I think deserve more attention. But again, this is a very good book. While reading, I stopped and reflected on how each of these lessons related to my own experience. That's the best kind of reading: arranging new information alongside what you already know.

If I have to criticize the book, I can say that some of the lessons aren't really lessons. They are theory sections or flat assertions, but that doesn't really change anything.

Who should read this book

Testers who want advice from the context-driven school are the primary audience of this book, but project managers, scrum masters, and even developers won't be wasting their time by reading it.



