I recently had a short Twitter debate with @unclebobmartin of Clean Code fame about 100% code coverage. He is of course much smarter than I am, vastly more experienced, and enjoys a well-earned reputation in the software community that I could only dream of. However, I felt I had a point. I disagree with the 100% dogma as far as unit tests go. In fact, I disagree with it strongly. I argue it is more important to unit test some of your code, ship it quickly and iterate - certainly in web development. That does not mean your application is not thoroughly tested by QA engineers through automated and manual testing on several environments before hitting production - we are talking about unit testing here.
In my experience it can be quite difficult to achieve 100% code coverage in the real world (remembering what a unit test actually is, and where it stops being a unit test and starts being a system test - database, file system, network). Think of all the plumbing and infrastructure code involved, all the getters and setters, all the external interaction with databases and so on. Developers tasked with 100% code coverage will probably find a way to hit the mark - but will those tests be useful? Does 100% actually guarantee anything? Should the tests not focus instead on key behavior and key user scenarios at the function level? Unit tests should cover some of the code; a good company will also have strong test engineers writing automated system tests and doing thorough manual testing, and a great company will have a constant feedback loop with product owners, demonstrating the software as it progresses.
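To make that concrete, here is a hypothetical example (illustrative names, not from any real codebase) of the kind of test a 100% mandate tends to produce - it covers the lines, passes forever, and verifies nothing anyone could plausibly break:

    # A hypothetical value object with trivial accessors.
    class Order:
        def __init__(self, order_id, total):
            self._order_id = order_id
            self._total = total

        @property
        def order_id(self):
            return self._order_id

        @property
        def total(self):
            return self._total

    # Written purely to push line coverage up: it exercises the getters
    # and can essentially never fail for an interesting reason.
    def test_order_getters():
        order = Order(order_id=42, total=99.95)
        assert order.order_id == 42
        assert order.total == 99.95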
Of all the code out there running some of the biggest sites and most popular applications in the world, does anybody really think it has 100% unit test coverage? Of course there are situations where 100% code coverage may make perfect sense, such as extremely critical systems in finance or medicine. It also depends on the culture and size of the company - startups will focus less on code coverage and more on just shipping code and features, whereas enterprises will focus more on quality.
But I question the engineering fixation on 100% and its actual value and ROI. It is NOT something I would blindly follow or suggest for every situation.
P.S. I like Kent Beck's answer on Quora.
I also like Kent's answer. It's just that my experience tells me that the number of tests that makes me go fastest gives me 100% coverage. The reason for that is simple. If my tests are incomplete, I am uncertain about the code. If I am uncertain, then I cannot safely refactor. If I cannot refactor, the code will rot. The rot slows me down.
It's also interesting to point out that 100% coverage doesn't give you confidence in and of itself. If you spec out your code's behavior and develop iteratively, you end up with 100% of what you expect your system to do, along with 100% of lines covered. 100% line coverage is a symptom of good coverage, but it doesn't mean your specs do what they should.
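A small illustration of that symptom (hypothetical code): the test below executes every line of apply_discount, so a coverage tool reports 100%, yet the assertion is too weak to catch the bug.

    # Buggy: subtracts the percentage as an absolute amount.
    # Correct would be: price * (1 - percent / 100)
    def apply_discount(price, percent):
        return price - percent

    # Executes every line of apply_discount - 100% line coverage -
    # but only asserts the result got cheaper, so the bug slips through.
    def test_apply_discount():
        assert apply_discount(200, 10) < 200  # passes: 190 < 200, yet 180 is correct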
I think I see where you are both coming from: 100% coverage as a side effect of good TDD. That's really interesting. My angle was that if you develop with TDD and meet your requirements but realised your coverage was "only" 70%, then don't worry, just ship it. Don't go looking for that 30% coverage just to say you hit 100%.
When you use TDD and end up with only 70% coverage, I would seriously investigate why it is that low. Use coverage tools to see what you missed and why you missed it.
That said, I never write a test with the sole purpose of increasing coverage. The tests I add after the fact are for parts that I missed for whatever reason.
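For anyone wondering what that looks like in practice, with Python's coverage.py (one common tool; most stacks have an equivalent):

    coverage run -m pytest    # run the test suite under coverage measurement
    coverage report -m        # per-file percentages plus the missing line numbers
    coverage html             # browsable report highlighting uncovered lines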
Test coverage of 100% should always be your goal. You will probably never reach it - I never did in any real system. There are always some parts you cannot get under test, mostly integration code, and this is not a problem. I always aim to cover 100% of the testable code, which in most cases comes down to 95-99% code coverage.
> In my experience it can be quite difficult to achieve 100% code coverage in the real world
And here is the whole thing about 100% code coverage.
1) If the code is testable, it is not hard to cover it 100%. So if it is difficult for you to reach 100% coverage, that is a design problem.
2) Experience shows that the hard-to-cover 10% (i.e. poorly designed because not testable) is also highly error-prone! So it is well worth refactoring to cover that remaining 10% nicely; it is often an opportunity to find non-trivial bugs at an early stage :)
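A minimal sketch of the kind of refactoring meant here (hypothetical config-loading code): logic tangled with file I/O is hard to unit test, but pulling the parsing out into a pure function makes it trivially coverable.

    # Hard to unit test: parsing is tangled with file system access.
    def load_config_v1(path):
        with open(path) as f:
            raw = f.read()
        return dict(line.split("=", 1) for line in raw.splitlines() if "=" in line)

    # Refactored: the logic is pure and easy to cover...
    def parse_config(raw):
        return dict(line.split("=", 1) for line in raw.splitlines() if "=" in line)

    # ...and the I/O shrinks to a thin wrapper left for integration tests.
    def load_config(path):
        with open(path) as f:
            return parse_config(f.read())

    def test_parse_config():
        assert parse_config("host=localhost\nport=8080") == {"host": "localhost", "port": "8080"}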
100% of what? TDD gives you 100% of your behaviour covered, otherwise you are writing code before you have a test, which isn't TDD.
I wouldn't expect that to be the same as 100% of lines of code, as some lines cannot be unit tested (but should have integration tests, for example).
100% still doesn't mean no bugs, although the bugs are more likely evidence of ambiguity in the requirements than behaviour the developer didn't expect. But once you write a test to expose the problem, it shouldn't ever come back.
Steve, this is exactly what I was trying to say - 100% coverage of behaviour, not necessarily 100% line coverage. I believe 100% unit test line coverage is in reality almost impossible (adhering strictly to no file system, database or network interaction in unit tests).
Great post, Peter. I agree about "the engineering fixation on 100% and its actual value and ROI": production code may always contain bugs no matter how many unit tests are written, especially when mocks are used in place of real-world concrete objects.