- Write a test before you even write the class or method in the class under test
- Run the tests, test fails
- Do enough, and only enough, to make the test pass
- Run the tests, test passes
- Write more tests and continue to improve the design of the code (a rough sketch of this cycle follows below)
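To make the cycle concrete, here is a minimal sketch of the first four steps in Java with JUnit 5. The PriceCalculator class and its discount rule are hypothetical, made up purely for illustration: the test is written first, fails (it won't even compile until the class exists), and then just enough production code is added to turn it green.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Step 1: write the test before the class under test exists.
class PriceCalculatorTest {

    @Test
    void appliesTenPercentDiscountAboveOneHundred() {
        PriceCalculator calc = new PriceCalculator();
        assertEquals(108.0, calc.discountedPrice(120.0), 0.001);
    }
}

// Steps 2 to 4: the test fails (the class does not even compile at first),
// so write just enough code to make it pass, nothing more.
class PriceCalculator {
    double discountedPrice(double price) {
        return price > 100.0 ? price * 0.9 : price;
    }
}
```

Step 5 then repeats the loop: another test (say, for a price of exactly 100, or a negative price) drives the next small change to the design.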
Not one developer I have ever worked with, in any team in 10 years, has truly followed TDD as prescribed. I tried it for a few months and, while it never slowed me down too much (a common complaint from people who have never attempted it), I felt it gave me absolutely no benefit. I also felt I was doing something wrong whenever I wrote code without a test first: I gave out to myself, deleted the code and started again. I write pretty good code, I think, and I know what bad code looks like.
Don't get me wrong. I write tests, and almost every developer in every team I have worked in wrote unit tests. We typically wrote them alongside the code, usually towards the end, after building a feature and verifying it manually in a browser. The unit tests would check expected output and some boundary conditions unlikely to show up in a quick check in the browser. All good? Code review, check in, verify the CI build runs and all tests pass. The key is that "all tests pass" includes both unit tests and integration tests.
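As an illustration of what I mean by those after-the-fact tests (the UsernameValidator class and its rules are hypothetical, not from any real project), they check the expected output plus the boundary cases a quick manual check in the browser would probably miss:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

class UsernameValidatorTest {

    private final UsernameValidator validator = new UsernameValidator();

    @Test
    void acceptsTypicalUsername() {
        // the happy path already verified manually in the browser
        assertTrue(validator.isValid("alice_01"));
    }

    @Test
    void rejectsBoundaryCases() {
        // edge cases unlikely to be tried in a quick manual check
        assertFalse(validator.isValid(null));
        assertFalse(validator.isValid(""));
        assertFalse(validator.isValid("a".repeat(33))); // just over the 32-char limit
    }
}

// Hypothetical class under test: non-empty, max 32 chars, alphanumerics and underscores.
class UsernameValidator {
    boolean isValid(String name) {
        return name != null && !name.isEmpty() && name.length() <= 32
                && name.matches("[A-Za-z0-9_]+");
    }
}
```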
I never experienced the promised enlightenment of better design through TDD. Honestly, I did not. I like unit tests, and well-written tests that run all the time help me refactor later and have caught bugs being introduced very early. But writing tests first, before the code, and only in baby steps seems (and has always seemed to me) to be overkill. It could be a useful training exercise for mentoring junior developers, but for experienced developers and teams there is likely to be more value in writing the code, adding some unit tests for sure, building out a continuous integration environment, having integration tests that use servers, VMs, databases, browsers and whatever else, and running those integration tests any time code is committed, checking user-focused behaviour.
One thing I have realised is that having a Test Engineer building automated integration tests can be frustrating while development is churning and APIs might be changing. So we came up with a rule recently in a team I worked in: if a developer writes code that breaks the automation tests (developers can run these locally before committing; it takes about 1 minute for 200+ tests), the dev fixes the integration test and sends a pull request to the Test Engineer. This frees the Test Engineer to work on new integration tests and to maintain the test and integration environment. We also keep tasks small, generally no longer than 2 days, so a developer will code away for at most 2 days, then add some tests, check the integration tests, commit and move on. Not perfect, but it works great: we ship regularly and we have a very low bug count. People over process, the team figuring out what works best for the team. That is the true spirit of agile, right?