Test Driven Development


Test Driven Development (TDD) is not just a testing technique; it is more a style of programming than of testing. It is a software development process dating back to the early 2000s, characterized by short development cycles. This cadence fits well with the goals of Agile development.


Here is the TDD cycle:

1. A requirement is turned into a test case.
2. The test is added to the test suite.
3. All tests are run; the newly added test should fail, since no code for it has been written yet.
4. The developer writes the minimal amount of code needed to pass the new test.
5. All tests are run again.
6. The code base is refactored.
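For example, the first pass through the cycle might look like the following minimal sketch in Python, using the standard unittest module; the function calculate_total and its behavior are purely illustrative. Running the tests before the function exists produces the expected failure; adding the one-line implementation turns them green.

    # Steps 1-3: write the tests first and run them -- they fail (red),
    # because calculate_total does not exist yet.
    import unittest

    class TestCalculateTotal(unittest.TestCase):
        def test_total_of_empty_cart_is_zero(self):
            self.assertEqual(calculate_total([]), 0)

        def test_total_sums_item_prices(self):
            self.assertEqual(calculate_total([2.50, 3.00]), 5.50)

    # Step 4: write the minimal amount of code needed to pass the new tests (green).
    def calculate_total(prices):
        return sum(prices)

    # Steps 5-6: run all tests again, then refactor while keeping them green.
    if __name__ == "__main__":
        unittest.main()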

The nature of TDD leads to simpler designs, because you only ever add the bare minimum of code needed to pass a narrow test. Units and modules developed with TDD tend to be small. Test code is treated as a first-class citizen in TDD: it is just as important as production code.

TDD tests should be independent: no test should depend on the outcome of a prior test. TDD results in more tests being written, but the benefit is that less time is spent debugging code. Defect rates should decrease, and code coverage will typically be high (at least 80%).
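A common way to keep tests independent is to build a fresh fixture for each test rather than sharing state between them. The sketch below assumes a hypothetical ShoppingCart class; each test gets its own cart via setUp, so no test relies on what a previous test did.

    import unittest

    class ShoppingCart:
        # Hypothetical production class, used only to illustrate test independence.
        def __init__(self):
            self.items = []

        def add(self, item):
            self.items.append(item)

    class TestShoppingCart(unittest.TestCase):
        def setUp(self):
            # A fresh cart is created before every test, so no test depends
            # on state left behind by another.
            self.cart = ShoppingCart()

        def test_new_cart_is_empty(self):
            self.assertEqual(self.cart.items, [])

        def test_add_puts_item_in_cart(self):
            self.cart.add("apple")
            self.assertEqual(self.cart.items, ["apple"])

    if __name__ == "__main__":
        unittest.main()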

Since TDD tests must be independent, they need to make use of “test doubles”, which stand in for the behavior of the rest of the system. There are four types of test doubles, differing in how deeply they model the parts of the system they abstract away (a sketch follows the list):

1. A dummy – only default values are returned.
2. A stub – simple logic is employed.
3. A mock – produces values for a specific test.
4. A simulator – implements complex logic.
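As a rough illustration of the middle two categories (the checkout function and the payment gateway collaborator are hypothetical), a hand-written stub and a mock built with Python's unittest.mock can stand in for a real payment service:

    import unittest
    from unittest.mock import Mock

    def checkout(gateway, amount):
        # Hypothetical unit under test: delegates the charge to a collaborator.
        return gateway.charge(amount)

    class StubGateway:
        # A stub: just enough canned logic to satisfy the test.
        def charge(self, amount):
            return amount > 0

    class TestCheckout(unittest.TestCase):
        def test_checkout_with_stub(self):
            self.assertTrue(checkout(StubGateway(), 10))

        def test_checkout_with_mock(self):
            # A mock: produces a value for this specific test and also lets
            # us verify how the collaborator was called.
            gateway = Mock()
            gateway.charge.return_value = True
            self.assertTrue(checkout(gateway, 10))
            gateway.charge.assert_called_once_with(10)

    if __name__ == "__main__":
        unittest.main()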


TDD forces you to think through requirements and design before any code is written, and the added test coverage gives you confidence when refactoring. The goal of TDD is to write clean code that works. While TDD may not produce a full regression test suite, it does force you to build up significant test coverage.