Should I go with TDD when the deadline is tight?

Yeah sir, just wanted to know how bad it could be if I don’t follow the TDD approach for bigger projects.
Thanks :smile:

Thanks for the gist, sir, I got your points. :slight_smile:

1 Like

This.

I have a bit of a reputation in some circles (probably well-deserved :wink: ) for being a testing zealot from years back. These days I take a more moderate stance.

I believe in testing, but I’ve seen it both help and harm a project depending on how you wield the knife. Every test you leave in your suite is a promise to honor the interface (or to ignore complaints from your automation system, which is a whole 'nother danger), so use them judiciously. Sometimes today I write tests as a harness for development, then remove 50-80% of them when I’m done because they don’t add nearly enough value for the fixed point they represent.

Well-designed context boundaries are a sweet spot for your test suite - these are the interfaces the rest of your system should be using, and they are supposed to be relatively fixed. That allows you to refactor ruthlessly within the boundary while trusting that your client code elsewhere will still work. If you only write a few tests, start here. They’ll save you time and headaches down the road for a minimal cost.
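
To make that concrete, here is a rough sketch in Python (the pricing boundary, names and numbers are all invented for illustration). The tests assert only the public contract, so everything behind it can be refactored ruthlessly:

```python
import unittest

# Hypothetical boundary: the rest of the system only ever calls quote().
# The price table and the discount rule are implementation details that a
# refactoring is free to move, rename or replace.
PRICES = {"widget": 1_000, "gadget": 1_500}  # cents


def quote(items):
    """Public API: total price in cents for [(name, quantity), ...]."""
    total = sum(PRICES[name] * qty for name, qty in items)
    return total if total < 10_000 else int(total * 0.9)  # bulk discount


class QuoteBoundaryTest(unittest.TestCase):
    def test_quote_totals_line_items(self):
        # Only the boundary's promise is asserted, never its internals.
        self.assertEqual(quote([("widget", 2), ("gadget", 1)]), 3_500)

    def test_bulk_orders_get_a_discount(self):
        self.assertEqual(quote([("widget", 12)]), 10_800)


if __name__ == "__main__":
    unittest.main()
```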

3 Likes

The effective approach is the point. The TDD link in the original question points to the test-first interpretation. I do not read much into a vague text like “writing test and design code pieces could be time taking”. But if I let my fantasy take me in some predictable directions, I see understandable rage towards the Uncle Bob movement: the (ineffective and paternalistic) TDD fundamentalists. Colleagues, job interviewers and managers with obsessions, be it red-green-refactor, the vim or vi editor, the Apple sign or a battery mark, for my part.

2 Likes

plus

1 Like

Robert C. Martin’s talks were mostly targeted towards an audience that needed convincing that it makes sense to write code for testing. He probably should have made it clearer that there is much more to effective testing than what he proposed - which is just the starting point, not the destination.

The issue with the TDD antagonism is that it is often used to let the pendulum swing too far the other way, serving as an excuse to bypass necessary testing effort, usually as a means to achieve a deceptively short “time-to-initial-success”.

Coplien’s Why Most Unit Testing is Waste doesn’t propose to stop testing. And the reference to “Waste” is a bit clickbait-y - as he acknowledges that a lot of test code simply has a “best before date” - i.e. it is up to the developer to recognize when tests are no longer helpful and to take responsibility for discarding them.

Ian Cooper puts it more succinctly:
Test the behaviour of the “public API” - not the implementation details; and tests that were used to discover a suitable implementation need to be deleted.

be it red-green-refactor

The issue is training which oversimplifies it to “red-green”, skipping the refactor step entirely.
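
For anyone who has only ever seen the “red green” half, here is a minimal sketch of the full cycle in Python (the slugify function is a made-up example):

```python
import unittest


# Red: this test is written first and fails until slugify() behaves.
class SlugifyTest(unittest.TestCase):
    def test_lowercases_and_joins_words(self):
        self.assertEqual(slugify("Hello World"), "hello-world")


# Green: the quickest thing that passed was a hard-coded fake:
#
#     def slugify(title):
#         return "hello-world"
#
# Refactor: with the test kept green, the fake is replaced by a real
# implementation - the test guards the behaviour while the code improves.
def slugify(title):
    return "-".join(title.lower().split())


if __name__ == "__main__":
    unittest.main()
```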

Chad Fowler’s “tests are a design smell” is based on the observation that good tests can only exist if you have

  • optimal boundaries with APIs that do not expose implementation details
  • tests that verify the operational protocol against that API/boundary and not the implementation details inside the boundary.

The elephant in the room is that effective tests require optimal boundaries - boundaries which may take some time to discover. Given that it is often inconvenient to manage and minimize dependencies, appropriate boundaries are often not made a priority, leaving tests coupled to implementation details, which can significantly add to the maintenance burden.

6 Likes

Yep, I guess there are swings and roundabouts. Will testing lead to a less brittle app? Will too many tests hinder development? These are the sort of questions we need to ask ourselves.

Personally, if the app is going to be large, I would test - but probably fairly lightly, and maybe more for core/important features.

1 Like

Depends on team size and codebase complexity. Whether you actually write tests first or after, I don’t think it matters much; but if the codebase is complex and the team is large, having good tests really helps.

Robert C. Martin’s talks were mostly targeted towards an audience that needed convincing

Whenever I get the impression a speaker has a need to convince me of his ideas, I feel an urge to leave the room. I do not accept being treated that way.

Coplien’s Why Most Unit Testing is Waste doesn’t propose to stop testing. 

Coplien doesn’t propose to stop testing, neither does aadii10, neither do I. You’re providing a straw man argument.
Moreover, you are talking about testing; the OP specifically asked about TDD in the narrow sense (test-first, red-green-refactor). TDD and unit tests are not the same.

Chad Fowler’s “tests are a design smell” is based on the observation that good tests 
can only exist if [..]

I do not agree with that interpretation, you do not do him justice. This is what he actually says, for the record: https://youtu.be/qH_y45he4-o?t=2603 and in http://picostitch.com/blog/2014/01/trash-your-servers-and-burn-your-code-immutable-infrastructure-and-disposable-components/ he says clearly: “Tests are also a design smell. If you find yourself [spending] more time in your tests, and I don’t mean in the design of your system.”
One of his statements in the youtube link: don’t let your test suite become an anchor. That anchor is the reason for

skipping the refactor step entirely

Here he says “do not write unit tests, they are a design smell”: https://youtu.be/-UKEPd2ipEk?t=1586 . He also says he finds TDD a productive way of writing software - something that I do not buy and deem contradictory to his other utterances. Again: tests are “a piss-poor way of trying to specify something.” (Do we need agile software development?)

1 Like

I would like to add to this discussion that I have a feeling a lot of the quoted/cited statements are interpreted as rigid ‘gospel law’ rather than as guidelines whose exact rigidity and details depend on where they are used.

Don’t make choices based on what someone else said about the situation they were in, but based on a comparison you make between the situation they were in and the situation you are in.

Personally, I do not write tests during a hackathon with a strict deadline. I do not write tests when creating a Proof-of-Concept. Tests restrict/slow down redesigning of the system, which in these cases is vital to be able to do quickly.
I do write tests once a system is approaching a presumed ‘stable’ state, mostly to prevent regressions in the future. A certain dose of regression tests is also really good for keeping your code from breaking functionality when working together with multiple people.

I do write feature/integration tests when required to follow a strict set of specifications/requirements (cf. Gherkin, it is super cool); mostly to be able to agree with the other party about the exact details of the specification.
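
For the curious, a minimal sketch of what that can look like with Gherkin driving Python through pytest-bdd (the feature text, step wording and Cart class are all invented for illustration):

```python
# features/checkout.feature (plain Gherkin, readable by the other party):
#
#   Feature: Checkout
#     Scenario: Totalling a cart
#       Given a cart with 2 widgets at 10 euros each
#       When I check out
#       Then the total should be 20 euros

from pytest_bdd import scenarios, given, when, then

scenarios("features/checkout.feature")  # bind the feature file to these steps


class Cart:
    def __init__(self, qty, price):
        self.qty, self.price, self.total = qty, price, None

    def checkout(self):
        self.total = self.qty * self.price


@given("a cart with 2 widgets at 10 euros each", target_fixture="cart")
def a_cart():
    return Cart(qty=2, price=10)


@when("I check out")
def check_out(cart):
    cart.checkout()


@then("the total should be 20 euros")
def total_is_correct(cart):
    assert cart.total == 20
```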

I write unit tests mostly as doctests, to explain to other devs (and, importantly, myself) how stuff in a library or business-logic code works and fits together. This is also the main place where I sometimes work test-first, when I already know how I want something to look but have yet to write the implementation.
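
In Python terms it looks something like this (parse_duration is a made-up helper) - the example in the docstring is documentation and regression test at once, and writing it first is exactly the test-first moment I mean:

```python
def parse_duration(text):
    """Convert a "MM:SS" string into a number of seconds.

    >>> parse_duration("03:20")
    200
    >>> parse_duration("00:07")
    7
    """
    minutes, seconds = text.split(":")
    return int(minutes) * 60 + int(seconds)


if __name__ == "__main__":
    import doctest

    doctest.testmod()  # runs every example embedded in the docstrings
```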

I write rigorous tests and possibly type-proofs if (and only if) writing systems that are vitally important (like when a crash would be life-threatening or cost thousands of dollars) or when the system will be impossible to change in the future (like when writing Smart Contracts).

There is no silver bullet in choosing a testing methodology. It really depends on what you are doing. To answer the original question: I would probably not do Test-first development when there are strict deadlines (but maybe I would, based on other requirements of the project).

Oh, and as a side note: avoiding strict deadlines will probably improve the quality of your applications. Of course, educating managers is hard :stuck_out_tongue:.

8 Likes

I always write unit tests. It is not always TDD; sometimes it is code before tests, if I already know how to implement a module. After implementing a module you should always check your work, be it by manual terminal-based testing or by unit tests.
A couple of days ago my colleague worked on one of our project’s issues - of course without TDD or any tests. And you know what? He spent double the time: testing in the console, and then in unit tests after implementation.

1 Like

If you write tests fairly quickly, I think in the long run it will actually save you time, because when you implement a new feature or change a few lines of code, the tests immediately show all the errors that arose from those changes (if the test coverage is good). Testing manually, you have to check all actions yourself, and you can miss some test cases.

1 Like

I wrote many projects, never a single test.

2 Likes

A risk to be aware of while practising TDD is giving up on cohesion while decoupling to get more testable units:
http://david.heinemeierhansson.com/2014/test-induced-design-damage.html (link was already provided in BDD / TDD criticized)

2 Likes

Some devs, especially the ones who do TDD, may think that tests are also the software requirements/specs. There are RSpec and Cucumber, whose communities are some of the most prominent TDD supporters. I still remember that Michael Hartl originally endorsed TDD with RSpec in his astoundingly popular Ruby on Rails book. That book was read by so many Ruby on Rails developers, and thus began the era of TDD in Ruby on Rails, until DHH himself denounced it, and Michael Hartl later erased TDD from his book.

TDD actually tries to break the traditional mindset of the old Waterfall methodology, where requirements are defined as a whole in the beginning. Then you do the coding phase. And then you test that code in accordance with what was required (in the requirements) in the first place.

TDD was born inside the Agile mindset. There are four points in the Agile Manifesto:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

Individuals and interactions are embraced by pair programming and customer collaboration. Working software is embraced by TDD. Customer collaboration is embraced by shorter but more frequent periods of continuous delivery; TDD tries to fill in this point as well, especially with something like Cucumber. Responding to change is also embraced by TDD with its red-green-refactor.

Agile also states: working software is the primary measure of progress. In order to achieve this, TDD supporters promote TDD as a helper tool, because it can assist devs in ensuring a green state on every delivery.

Some extreme TDD practitioners write the specs/Cucumber during conversations with customers/business people. Others practice TDD with pair programming: one dev writes the spec, the other writes the code.

There is also a tool in Rails that automatically runs the tests after the code has been changed, to make pair programming easier (but Uncle Bob himself disagrees with this).

So there you go, the origin of TDD. Sounds reasonable.

Do I practice it? Nope. Many times, for me, some principles from the old Waterfall just work: plan, write requirements, design, and think ahead before I carefully write the code. Then test according to those plans and requirements.

Plans changed? Go back to the planning phase before writing more code.

6 Likes

That is your prerogative. However, that doesn’t change the fact that many organizations, especially smaller ones, proclaimed “we don’t have time for testing” before Extreme Programming coined the term Test-Driven Development. Testing was often delayed until user acceptance testing, at which point defects were layered upon defects. Fixing defects later usually costs more time and money, and that is waste.

People rarely have the discipline to go back and build tests around code they already believe to be working (and to verify that the test will in fact detect a defect) - because they could be writing more production code instead.

So when I read

time to deliver my project is less as writing test and design code pieces could be time taking.

What I see:

  • We barely have enough time to develop the product even if everything goes according to plan and every decision made is the right one.
  • We need to start generating “product code” immediately, so there is little time to “think about it” (i.e. design).
  • We have no time to write code for tests, as we barely have enough time to code the (right) product.

which is a recipe for disaster. So yes, I actually believe that the opening post is seeking to legitimize minimizing and potentially completely eliminating any automated testing effort - it’s not just about TDD.

For me TDD is about product code always having automated, executable tests that document its behaviour, creating the opportunity to refactor without hesitation. I personally don’t care if the tests are written first - but they have to be red first. And in my opinion heavy use of sophisticated mocking libraries is a smell (see the sketch after this list). Your tests are telling you:

  • Your code structure is suboptimal
  • Your boundary is in the wrong place
  • Implementation details are leaking through the boundary
  • The test is in the wrong place
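
To illustrate that smell with a contrived Python example (greet and its store are invented): the first test pins down how the code does its work, the second only what it promises:

```python
import unittest
from unittest import mock


def greet(store, user_id):
    """Behaviour under test: look a user up and greet them."""
    return f"Hello, {store.fetch(user_id)['name']}!"


class OverMockedTest(unittest.TestCase):
    def test_pins_implementation_details(self):
        store = mock.Mock()
        store.fetch.return_value = {"name": "Ada"}
        greet(store, 42)
        # Brittle: fails if greet() ever caches, batches or renames fetch,
        # even though the observable behaviour is unchanged.
        store.fetch.assert_called_once_with(42)


class FakeStore:
    """A plain fake standing in for the real storage boundary."""

    def fetch(self, user_id):
        return {"name": "Ada"}


class BehaviouralTest(unittest.TestCase):
    def test_asserts_only_the_promise(self):
        # Survives internal refactoring as long as the greeting is right.
        self.assertEqual(greet(FakeStore(), 42), "Hello, Ada!")


if __name__ == "__main__":
    unittest.main()
```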

In reference to Chad Fowler: “Tests are a design smell” is meant to be a provocative statement towards a community which glorifies 100% test coverage.

Don’t let your tests be an anchor, and maybe it’s more important to monitor the runtime behaviour of your code than it is to test it.

i.e. it’s about balance:

  • static type checking doesn’t replace testing
  • tests can’t replace runtime monitoring
  • some scenarios are too costly to test - provided the manifestation of a defect in production is largely inconsequential, will be quickly noticed and will be quickly addressed and rectified.

Also he talks about ‘Code “this big”’, i.e. code that is small enough to be replaced wholesale. After a rewrite, how do you know it is “mostly safe” to deploy the rewritten code to production? The approach strongly suggests the existence of a test harness emulating the real operating environment that can run tests (scenarios) against the rewritten code. I.e. the tests only verify the correct behaviour of the “component” and are isolated via the constraints imposed by the operating environment.

I can understand the cynicism that you project towards certain consultants and consultancies that market TDD/BDD as some kind of easy-street cure-all (once again focusing more on process than intent), but one shouldn’t “throw the baby out with the bathwater” and, more importantly, shouldn’t let it become an excuse to cut back on essential testing activities (not implying that you were suggesting that - but in this topic I think there is a very real danger of that interpretation).

Testing isn’t easy and good testing doesn’t “come naturally”.

I think it is important to remember that “novices” want hard and fast rules because they’re easy. “It depends”, while usually appropriate, isn’t exactly helpful from a novice’s perspective. Dogma via “Development by Slogan”, be it DRY, TDD/BDD, etc., can be a real problem.

Tests restrict/slow down redesigning of the system, which in these cases is vital to be able to do quickly.

This one is a bit of a slippery slope - we’ve all done it. But again context determines the risks we are taking in doing so. In some circumstances the risks are low enough but in others designing your tests (scenarios) is designing your software.

(cf. Gherkin, it is super cool)

For me the Cucumber thing seems to be moving back into the tool-addiction of Rational ClearCase/Big Design Up Front - so if I were looking for waste I would start right there.

I would probably not do Test-first development when there are strict deadlines

Statements like this are dangerous because they will be construed by outsiders and novices to mean that TDD in particular, and testing in general, is inefficient and expendable/optional. It is intuitive that you will save time now if you don’t create the test right now. It is counterintuitive how much more expensive many defects get the longer they remain in the codebase.

That is likely a reflection of the size of the projects. The flip side is that while it may feel like overkill to use testing tools on smaller projects, it’s the perfect time to become accustomed to them before you embark on a larger project. Tests are also a feedback mechanism - if it’s hard to write a test, then there may be a problem with your design. The worst case is having to introduce testing after the fact, especially if there was no other incentive to decouple in all the right places.
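
A tiny, contrived Python example of that feedback mechanism - the first version is hard to test because the clock is buried inside it, and listening to the test pushes the dependency out to a boundary:

```python
import datetime


# Hard to test: "now" is hidden inside, so a test would have to patch the
# standard library just to pin down the result.
def is_happy_hour_v1():
    return 17 <= datetime.datetime.now().hour < 19


# Listening to the test: pass the clock in, and a boundary appears.
def is_happy_hour(now):
    return 17 <= now.hour < 19


def test_happy_hour_starts_at_five():
    assert is_happy_hour(datetime.datetime(2024, 1, 1, 17, 0))
    assert not is_happy_hour(datetime.datetime(2024, 1, 1, 16, 59))


if __name__ == "__main__":
    test_happy_hour_starts_at_five()
    print("ok")
```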

As usual, opinions vary.
Is TDD Dead?
Test-Induced Design Damage. Fallacy or Reality?

Managing the Development of Large Software Systems (1970), i.e. “the Waterfall Paper”. Figure 2 shows the “ideal waterfall” - figures 3 and 4 acknowledge the realities, i.e. that iterations will happen - which is acknowledged 16 years later in A Spiral Model of Software Development and Enhancement (1986).

Ultimately, to “reduce waste” you need to tighten the feedback loops: 1) allow the customer to discover as early as possible what they actually need rather than what they think they want; 2) be notified that you are breaking important things when you are making changes.

Bonus:
GOTO 2017 • Engineering You • Martin Thompson
Quotes from the Nato Software Engineering Conference in 1968
Proceedings of the Nato Software Engineering conference in 1968

A software system can best be designed if the testing is interlaced with the design instead of being used after the design.
Alan J. Perlis (1968)

9 Likes

All fine and dandy. We do not agree on much. Italic, bold or underlined text can be seen as a need to convince. Not everyone likes that, and the success rate is scientifically unproven.

allow the customer to discover as early as possible what they actually need rather than what they think they want;

No matter what methodology the dev applies, this seems like the dev failing to capture customer/business requirements.

We have a ton of tools to do it, from ERDs and use case diagrams to mocks/wireframes/prototypes; they can speak as requirements and design, and can be used as references for testing. Contract negotiation is sometimes needed, instead of always changing the requirements at the end of the work and making the tickets carry over to the next sprint iteration. That is what frustrates developers and further widens the gap between the developers and the business.

3 Likes

I have positive experiences with TDD. If you already are familiar with the tools, TDD might well save you time. What I’m seeing is more time spent coding, but drastically less time spent bugfixing after feedback from users.

If your software is deployed in one place and easily updated, this benefit carries less weight. If your software is installed on your users’ machines, TDD is probably a timesaver.

In the long term TDD is probably also a timesaver, once you start refactoring and redesigning, and it serves as additional documentation for new developers.

So imho there are not many projects where I would choose not to use TDD.

2 Likes

Personally I wouldn’t use TDD even in a project without a deadline. I see very little value in driving the design of each and every individual function with a unit test. Testing a boundary - like a function that sends a message via Kafka - is exactly the right level for me, and it’s the one that’s the culmination of all of the processing in the system. When that one is correct, we know that the people who depend on that message are getting what they should. The same obviously goes for important API boundaries in libraries as well as REST APIs, etc.
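
A sketch of what a test at that kind of boundary might look like in Python (the publish function, topic name and fake producer are invented for illustration):

```python
import json
import unittest


def publish_order_confirmed(producer, order):
    """The boundary: all upstream processing culminates in this message."""
    payload = {"order_id": order["id"], "total_cents": order["total"]}
    producer.send("order-confirmed", json.dumps(payload).encode("utf-8"))


class FakeProducer:
    """Stands in for the real Kafka client; records what was sent."""

    def __init__(self):
        self.sent = []

    def send(self, topic, value):
        self.sent.append((topic, value))


class OrderConfirmedMessageTest(unittest.TestCase):
    def test_consumers_get_the_message_they_depend_on(self):
        producer = FakeProducer()
        publish_order_confirmed(producer, {"id": 7, "total": 1_250})
        topic, value = producer.sent[0]
        # What downstream consumers depend on: topic and payload shape.
        self.assertEqual(topic, "order-confirmed")
        self.assertEqual(json.loads(value), {"order_id": 7, "total_cents": 1250})


if __name__ == "__main__":
    unittest.main()
```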

4 Likes