
TEST DRIVEN DEVELOPMENT

History

We’ve all heard of waterfall development, you know: first we make a plan, then we do the architecture; when that is done, we start coding; after that we test and fix bugs; and lastly, we deliver. And then the clients are not happy with what they got 🙂

The problem with this is that the users have to wait until the entire project is finished before they can see whether it is what they wanted. And the same goes for us developers. We have to wait until we’re finished, and then until the testers are finished, before we are told if there are any bugs. And even longer before the users get a say in the matter.

That is way too long a turnaround time.

And to add insult to injury, while the testers do the testing we can’t sit idle, so we’re assigned new projects to develop. When the bug reports arrive, we have to interrupt that work to fix them. And then we have to try to remember what the heck we were doing when we were interrupted.

That is inefficient.

It’s obvious that we want, no we need, feedback sooner.

So what can we do? Well, we can break the project down into smaller pieces. But if we still do each piece the waterfall way, then while we might have gained some time, the time from code complete to testing done is still too long. And we still get interrupted.

Then Kent Beck sat down to think. How can we make the feedback come sooner? What can we do to get it as quickly as possible?

And the answer he found was Test Driven Development.

==>  If we define what should be done and write tests to verify that before we write the code, we will get the feedback as soon as the code is written. No delay at all. Can’t be quicker than that!

What? Writing tests before we have anything to test?

What do we mean by writing tests before the code? Can we actually test something that doesn’t exist?

==>  Yes, we can, because we test behaviour, not code.

Testing behaviour actually has nothing to do with the code at all. What we validate is that we get the correct result from the function. How it does it is of no concern to us. For all we care, it could be a person sitting behind the interface responding manually to our requests.

Given that, the work of writing a test starts before you write a single line of code, neither test nor production. Start by defining the behaviour. You have to know what you want the code to do. You have to have a model in your head of what you are trying to accomplish.

This is actually one of the benefits of the technique. You design the code before you write it, thus separating design from coding. You concentrate on one instead of juggling both at the same time. That means you find flaws in your thinking early, things you otherwise wouldn’t have discovered until after you had written the code. Then you would have had to rewrite it. Now you don’t.

How do I actually do TDD then?

Start by writing a call to the function that you want to test. It doesn’t exist. The test fails. Yes, that’s right, not compiling is also a failure. Now you have written the least amount of code that makes the test fail. Now switch over to the production code, and write just enough to make the test pass (Keep It Simple, Stupid, or KISS). Create an empty function. Now the test compiles, and you’re done with the first iteration.
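To make this concrete, here is a minimal sketch of that first iteration. I’m using Go purely as an example; the package name calc and the function name Add are made up for illustration, nothing about TDD requires them.

    // add_test.go
    package calc

    import "testing"

    // The very first test: just a call to a function that doesn't exist yet.
    // The test "fails" by not compiling.
    func TestAdd(t *testing.T) {
        Add(2, 3)
    }

    // add.go
    package calc

    // The empty stub: just enough production code to make the test compile.
    func Add(a, b int) int {
        return 0
    }

With the stub in place the test compiles and passes, and the first iteration is done.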

Then pick out one part of the behaviour. In a function that sums two values, it could be that adding two values produces the correct sum. Just use positive values. Leave adding zeros, negative values, results that overflow and all the rest to other tests.

Write a test that verifies that this happens. Again, write only as little test code as possible, just enough for the test to fail. And remember, we don’t test code, we test behaviour; we don’t have to know how the function does it. Right now there is no code, so we can’t know. But we can still tell whether the function returns the correct answer or not. So that is what we test. Run the tests, and of course the new test fails. Yes, actually do run the tests, even if you know they will fail.
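In the running example, the test for that first behaviour might look like this (again, just a sketch, not the only way to write it):

    // The first real behaviour: two positive values give the correct sum.
    func TestAddTwoPositiveValues(t *testing.T) {
        got := Add(2, 3)
        if got != 5 {
            t.Errorf("Add(2, 3) = %d, want 5", got)
        }
    }

Run it and it fails, because the stub still returns 0.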

Have you ever heard someone say: “Start with writing a failing test”? This is exactly what they mean.

Now, if the test doesn’t fail before the code is written, your test has a serious bug. I mean, how can a function that doesn’t do anything yet return the correct result? Which is cool: you get an automatic test of the test. You now have confidence that when the test goes green, it is because the code does the right thing, and only because it does the right thing.

Now it is time for implementation again: now you can write the code for that behaviour. But only for that one thing. Do not add a single character that has anything to do with the other behaviours (if you can avoid it). And write only the absolute minimum code that makes the test pass. KISS again.
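For the example above, the absolute minimum that makes the positive-values test pass could simply be:

    // Just enough code to make the current test green, and nothing more.
    func Add(a, b int) int {
        return a + b
    }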

And as soon as you are done, run the test. It failed before. Now it doesn’t. Presto, you have just written code in a test-driven way. Or it still fails, and you now know that you have a bug. Either way, you got the feedback you wanted, as soon as you could get it.

Now iterate this. Find the next behaviour, write the test, write the code, run the tests. Yes, all of them; you might have broken something when you added the last bit of code.
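Continuing the made-up example, the next behaviour might be adding a negative value. A plain "go test" runs the whole suite, old tests included. Note that with the trivial implementation above this new test may well pass straight away; if it does, you have just confirmed that the behaviour was already covered, and you move on to the next one.

    // The next behaviour: adding a negative value still gives the correct sum.
    func TestAddNegativeValue(t *testing.T) {
        got := Add(5, -3)
        if got != 2 {
            t.Errorf("Add(5, -3) = %d, want 2", got)
        }
    }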

And remember, don’t go for the gold! Don’t aim straight for the final behaviour of the function. If that means you write silly code, then do it. Let the complexity grow as your test suite grows. Take unnecessarily small steps. That will give you better test coverage, but more importantly, it will reveal things about the code that would otherwise have stayed hidden in the complexity.

There is another advantage to doing TDD. Writing the tests as you write the code makes them part of the creative process, rather than a chore you have to do afterwards. You will probably do a much better job this way. Ask me how I know…

Refactor

There is one more step in the loop: refactoring. It’s like the “Scout Rule”, you know, always leave the campground, or in this case, the code, a little cleaner than you found it.

This is an important step. Do not skip it! Always, yes always, refactor after each test. Make the code just a little more readable, better, simpler. By refactoring after each test, each refactoring will be really small and fast. It will not be perceived as a difficult thing; it’s just something you do, instead of one big refactoring of everything at the end. Again, it is not a chore if you include it in the creative process.

Refactoring this way will reveal patterns. You will see the code kind of self-organise into groups, functions and new classes, right before your eyes.

And you know that you are safe when you do that. You have the tests to prove it. The tests don’t care what the code looks like; they only care about the results it produces.
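As a made-up illustration of the kind of tiny clean-up meant here: suppose the code computing a price looked like "return float64(q) * p * 1.25". Extracting the magic number and giving the parameters real names changes nothing the tests can see:

    // Same signature and same behaviour as before, so the tests stay green;
    // only the intent is clearer.
    const vatRate = 1.25

    func Price(quantity int, unitPrice float64) float64 {
        return float64(quantity) * unitPrice * vatRate
    }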

Now your code is also the most readable code you can produce. That also makes it the easiest code to maintain. It will always be that; the process ensures it. And that will actually make you more efficient. The better the code, the quicker you find bugs, understand what to change and so on. So putting in a little extra time now will save you a lot of time later.

We often talk about Red – Green – Refactor in TDD. It is the loop you go through in each iteration, for each test you write.
Red = Write a failing test
Green = Write code to make the test pass
Refactor = Make the code cleaner

What about writing tests after the code is done?

Sometimes you can’t write the tests first, because sometimes you don’t even know what the code should do. Then you have to experiment and write prototypes, the way we all used to code before the test-driven approach.

There’s no way you can write the tests beforehand in this case. And that’s OK; write them afterwards instead. But remember that this is a tool for figuring out difficult stuff, not how you usually want to work. You do lose the benefits of the test-driven approach. You have to do all the refactoring later, and the deadline will probably stop you from doing it. You will probably miss a few tests, because you have to find all the behaviours after the code is done. And you have to write all the tests and do all the refactoring in one go, which makes it a burden instead of a part of the creative process.

Finally

All this has been summed up in three “rules” by wise people. I hope all my ramblings have made you think they make sense.

  1. You are not allowed to write any production code unless it is to make a failing unit test pass.
  2. You are not allowed to write any more of a unit test than is sufficient to fail; and compilation failures are failures.
  3. You are not allowed to write any more production code than is sufficient to pass the one failing unit test.

Each test you write makes the test suite more constrained, more specific. But every line you add to the production code makes it more general.

It is a somewhat different way to think about development, and it will take some getting used to. But if you don’t give up, if you stick with it, it will become easier and give you better code. It takes time to learn TDD. Oh, and do not start on production code at work. Practice it first. Otherwise you’ll fail, and never do it again.

— Cheers!!
