Let’s look at the definition of TDD: “requirements are turned into very specific test cases, then the code is improved so that the tests pass.”
This means that you write a test for the smallest possible use case, then write code to make it pass, and repeat. When you do this, much of the code and tests are thrown away over time.
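As a minimal sketch of one such cycle (the `slugify` function and its test are hypothetical names I’m using for illustration, not something from the question):

```python
# Step 1 (red): write the smallest failing test first.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write just enough code to make the test pass.
def slugify(text):
    return text.lower().replace(" ", "-")

# Step 3: run the test, then repeat with the next smallest use case,
# refactoring (and sometimes discarding) code and tests as you go.
test_slugify_lowercases_and_hyphenates()
```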
Sure, it is obvious that you throw away a bunch of your work before it is “done”, but the same thing happens over longer periods as the codebase evolves.
Now, to what I believe is the spirit of your question:
“When should I feel I’m done testing when developing in TDD?”
Life is not about absolutes, it is about trade-offs and choices made by you along the way.
For example, I recently started in a new role (about a year ago), so I was obviously very new to the codebase I was working in. I began by making sure my code coverage was 100% because I didn’t know exactly what was critical.
I didn’t care if this took me 10x longer than other folks on my team; I needed that level of coverage to be confident in my changes at the time, so that’s what I did.
Once I grew more familiar with the organization, the team, and the code, I changed those rules on the fly based on the situation.
I think this can apply to anyone grappling with questions like this one, which really have no one-size-fits-all answer.
If you happen to be lucky enough to work in an organization that takes testing seriously and requires TDD and pairing (or mobbing) to get any and all work done, by all means, make sure you hit 100%. Most of us don’t live in that world, however.
Don’t treat the code coverage percentage as a measuring stick for whether you’re done testing. You can have 100% line coverage and still have bad tests that let production bugs slip through - it happened to me recently. There are other tools that help ensure good testing, but that’s for another question.
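To make that concrete, here is a contrived sketch (the function and its bug are invented for illustration) where a test executes every line, so coverage reports 100%, yet the assertion is too weak to catch an obvious defect:

```python
def apply_discount(price, pct):
    # Bug: callers pass pct as a percentage (10 means 10%),
    # but this code treats it as a fraction (should be pct / 100).
    return price - price * pct

def test_apply_discount():
    # This test runs every line of apply_discount (100% line coverage),
    # but the assertion is so loose it passes despite the bug:
    # apply_discount(100.0, 10) returns -900.0, which is indeed < 100.0.
    result = apply_discount(100.0, 10)
    assert result < 100.0

test_apply_discount()
```

A stricter assertion like `assert result == 90.0` would have failed immediately, which is the point: coverage tells you which lines ran, not whether the assertions about them were any good.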
Thanks for reading!