Should I go with TDD when the deadline is tight?

Hello all,
We all know the benefits of TDD, but my question is: should I go with the approach when the time to deliver my project is short? Writing tests and designing code pieces could be time-consuming.
Of course I am looking for a better codebase by the end of the day, but it is still a question when I have time-related concerns.

Please correct me if my thinking about TDD is off.
Thanks :slight_smile:


If it is crazy urgent then I’d ship a just-working prototype. It’d have some basic tests that’d save me time and spare me some manual labour. I’d pour more time and thought into making the project good enough. Once it ships, I’d keep adding more tests and move towards TDD.

Take ideas from this thread, but don’t base your actions on it. Also, without knowing the scope of the project and the time left, there cannot be a definitive guide about what to do.

If it is a very simple project then go for TDD.


I think you will get different answers from different people :slight_smile: At the end of the day you have to do what’s right for you. However, perhaps answering this question might help:

  • Is the project expected to grow fairly large? Will adding new features and maintaining the code be important? If so, I think that’s a good case for writing tests.

If the project is small, or once created unlikely to change much and you need to get it done asap, then maybe testing isn’t quite as important…


Another factor is how proficient you are with Elixir. When I first start with a technology I never start with tests, since figuring out how to test things takes twice the time it takes to write the code, and writing the code can on its own be quite time-consuming if you’re just starting out with a new language/framework/etc.

I’d advise taking a step back and figuring out how much testing you need to feel comfortable giving the product to your customers. Maybe manual testing is enough, maybe you need solid coverage of one specific part, maybe you can’t compromise on quality, maybe quality is not important (prototype/MVP).


Yeah sir, it is going to be a bigger one. Also, once started, it will be hard to redo.
But we also have less time; that’s why I raised the question.

See BDD / TDD criticized (the whole thread, not only the text quoted below :wink: )

“Each of us needs to assess how best to spend our time in order to maximize our results, both in quantity and quality. If people think that spending fifty percent of their time writing tests maximizes their results—okay for them. I’m sure that’s not true for me—I’d rather spend that time thinking about my problem. I’m certain that, for me, this produces better solutions, with fewer defects, than any other use of my time. A bad design with a complete test suite is still a bad design.”

… and nobody is concerned about the implications of this statement?

Reads to me like “no time for any testing or thinking about design (much less refactoring)”. Forget about the TDD discussion …

Ultimately an effective approach to automated testing is needed, otherwise:

  • You are wasting time with manual testing which occurs too infrequently, or
  • You aren’t doing any testing at all

highly philosophical question, so no right answer…

tests can be viewed as a design smell:

I would say getting the contexts and proper separation/decoupling right, or as close to right as possible, is more important than complete TDD coverage:
check this video for intro to context mapping:

short answer is: test your contexts’ interfaces.

of course certain things might need tests, like complex logic/queries, and especially things dealing with time (DST and timezones are great fun)…

honestly most of my functions are so small and simple that I feel like I’m testing Ecto and not my code if I add coverage.

you also have to watch out for DRY vs. the wrong abstraction… e.g. don’t share one big Ecto changeset across all functions; have the “correct amount” of different changesets instead. (It’s better to violate DRY than to commit to a wrong abstraction.)
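The purpose-specific-changeset idea can be sketched in Ecto. (The schema, fields, and function names below are hypothetical, purely for illustration.)

```elixir
defmodule MyApp.User do
  use Ecto.Schema
  import Ecto.Changeset

  schema "users" do
    field :email, :string
    field :password_hash, :string
    field :bio, :string
  end

  # Registration only cares about credentials...
  def registration_changeset(user, attrs) do
    user
    |> cast(attrs, [:email, :password_hash])
    |> validate_required([:email, :password_hash])
    |> unique_constraint(:email)
  end

  # ...while a profile update only touches the bio. One shared "big"
  # changeset would force unrelated validations onto both code paths.
  def profile_changeset(user, attrs) do
    user
    |> cast(attrs, [:bio])
    |> validate_length(:bio, max: 500)
  end
end
```

Each changeset stays small and only validates what its own use case needs, which is the “correct amount of different changesets” idea.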

some might say having many tests will hinder future changes and agility…

my gist is to focus on contexts; the interfaces/surfaces of the different contexts should be small and should be tested, but what goes on inside a context doesn’t need tests (with the exception of complex logic/queries where you need to “prove” that they work in a certain way, e.g. timezone stuff, funky joins, etc.)…


Yeah sir, I just wanted to know how bad it could be if I don’t follow the TDD approach for bigger projects.
Thanks :smile:

Thanks for the gist sir, I got your points. :slight_smile:



I have a bit of a reputation in some circles (probably well-deserved :wink: ) for being a testing zealot from years back. These days I take a more moderate stance.

I believe in testing, but I’ve seen it both help and harm a project depending on how you wield the knife. Every test you leave in your suite is a promise to honor the interface (or ignore complaints from your automation system, which is a whole 'nother danger) so use them judiciously. Sometimes today I write tests as a harness for development then remove 50-80% of them when I’m done because they don’t add nearly enough value for the fixed point they represent.

Well designed context boundaries are a sweet spot for your test suite—these are the interfaces the rest of your system should be using, and they are supposed to be relatively fixed. That allows you to refactor ruthlessly within the boundary while trusting that your client code elsewhere will still work. If you only write a few tests, start here. They’ll save you time and headaches down the road for a minimal cost.
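A minimal sketch of what “a few tests at the context boundary” might look like with ExUnit. (The `Accounts` context and its `register_user/1` function are hypothetical stand-ins; in a real app the context would wrap Ecto calls.)

```elixir
# Hypothetical context module: the public boundary the rest of the
# app calls. Internals behind it can be refactored freely.
defmodule MyApp.Accounts do
  def register_user(%{email: email}) when is_binary(email) and email != "" do
    {:ok, %{id: 1, email: email}}
  end

  def register_user(_attrs), do: {:error, :email_required}
end

# Tests target only the boundary function, never the internals.
defmodule MyApp.AccountsTest do
  use ExUnit.Case, async: true

  alias MyApp.Accounts

  test "registering a user returns the persisted user" do
    assert {:ok, user} = Accounts.register_user(%{email: "a@b.com"})
    assert user.email == "a@b.com"
  end

  test "registration rejects a missing email" do
    assert {:error, :email_required} = Accounts.register_user(%{})
  end
end
```

Because the tests never reach into the context’s internals, a rewrite of the implementation leaves the suite green as long as the boundary’s contract holds.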


The effective approach is the point. The TDD link in the original question points to the test-first interpretation. I do not read much into a vague phrase like “writing test and design code pieces could be time taking”. But if I let my imagination run in some predictable directions, I see understandable rage towards the Uncle Bob movement: the (ineffective and paternalistic) TDD fundamentalists. Colleagues, job interviewers and managers with obsessions, be it red-green-refactor, the vim or vi editor, the Apple sign or a battery mark, for my part.




Robert C. Martin’s talks were mostly targeted towards an audience that needed convincing that it makes sense to write code for testing. He probably should have made it more clear that there is much more to effective testing than what he proposed - which is just the starting point, not the destination.

The issue with the TDD antagonism is that it is often used to let the pendulum swing too far the other way, serving as an excuse to bypass necessary testing effort, usually as a means to achieve a deceptively short “time-to-initial-success”.

Coplien’s Why Most Unit Testing is Waste doesn’t propose to stop testing. And the reference to “Waste” is a bit clickbait-y, as he acknowledges that a lot of test code simply has a “best before” date; i.e. it is up to the developer to recognize when tests are no longer helpful and to take responsibility for discarding them.

Ian Cooper puts it more succinctly:
Test the behaviour of the “public API” - not the implementation details; and tests being used to discover a suitable implementation need to be deleted.

be it red green refactor

The issue is training which oversimplifies it to “red green”, skipping the refactor step entirely.

Chad Fowler’s “tests are a design smell” is based on the observation that good tests can only exist if you have

  • optimal boundaries with APIs that do not expose implementation details
  • tests that verify the operational protocol against that API/boundary and not the implementation details inside the boundary.

The elephant in the room is that effective tests require optimal boundaries, and boundaries may take some time to discover. Given that it is often inconvenient to manage and minimize dependencies, appropriate boundaries are often not made a priority, leaving tests coupled to implementation details, which can significantly add to the maintenance burden.


Yep, I guess there are swings and roundabouts. Will testing lead to a less brittle app? Will too many tests hinder development? These are the sort of questions we need to ask ourselves.

Personally, if the app is going to be large, I would test - but probably fairly lightly and maybe more for core/important features.


Depends on team size and codebase complexity. Whether you actually write tests first or after, I don’t think it will matter much; but if the codebase is complex and the team is large, having good tests really helps.

Robert C. Martin’s talks were mostly targeted towards an audience that needed convincing

Whenever I get the impression that a speaker needs to convince me of his ideas, I feel an urge to leave the room. I do not accept being treated that way.

Coplien’s Why Most Unit Testing is Waste doesn’t propose to stop testing. 

Coplien doesn’t propose to stop testing, neither does aadii10, neither do I. You’re presenting a straw man argument.
Moreover, you are talking about testing; the OP specifically asked about TDD in the narrow sense (test first, red-green-refactor). TDD and unit tests are not the same.

Chad Fowler’s “tests are a design smell” is based on the observation that good tests can only exist if [..]

I do not agree with that interpretation; you do not do him justice. This is what he actually says, for the record, and he says it clearly: “Tests are also a design smell. If you find yourself spending more time in your tests, and I don’t mean in the design of your system.”
One of his statements in the YouTube link: don’t let your test suite become an anchor. That anchor is the reason for

skipping the refactor step entirely

Here he says “do not write unit tests, they are a design smell”. He also says he finds TDD a productive way of writing software, something that I do not buy and that I deem contradictory to his other statements. Again: tests are “a piss-poor way of trying to specify something.” (Do we need agile software development?)


I would like to add to this discussion that I have a feeling that a lot of the quoted/cited statements are interpreted as rigid ‘gospel law’ rather than as guidelines whose exact rigidity and details depend on where they are applied.

Don’t make choices based on what someone else said about the situation they were in, but based on a comparison between the situation they were in and the situation you are in.

Personally, I do not write tests during a hackathon with a strict deadline. I do not write tests when creating a proof of concept. Tests restrict/slow down redesigning the system, which in these cases it is vitally important to be able to do quickly.
I do write tests once a system is approaching a presumed ‘stable’ state, mostly to prevent regressions in the future. A certain dose of regression tests is also really good for keeping your code from breaking functionality when working together with multiple people.

I do write feature/integration tests when required to follow a strict set of specifications/requirements (cf. Gherkin, it is super cool); mostly to be able to agree with the other party about the exact details of the specification.

I write unit tests mostly as doctests to explain, in a library or business logic code, to other devs (and, importantly, myself) how stuff works and fits together. This is also the main place where I sometimes work test-first, when I already know how I want something to look but have yet to write the implementation.
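A doctest makes this concrete: the example embedded in `@doc` is both documentation for other devs and an executable test. (The `Pricing` module below is a made-up illustration, not from the thread; the doctests run once a test module calls `doctest Pricing`.)

```elixir
defmodule Pricing do
  @doc """
  Applies a percentage discount to a price given in cents.

  The `iex>` example doubles as a unit test via `doctest`:

      iex> Pricing.discount(1000, 10)
      900
  """
  def discount(cents, percent) do
    cents - div(cents * percent, 100)
  end
end

defmodule PricingTest do
  use ExUnit.Case, async: true
  doctest Pricing
end
```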

I write rigorous tests and possibly type proofs if (and only if) I am writing systems that are vitally important (like when a crash would be life-threatening or cost thousands of dollars) or that will be impossible to change in the future (like when writing smart contracts).

There is no silver bullet in choosing a testing methodology. It really depends on what you are doing. To answer the original question: I would probably not do test-first development when there are strict deadlines (but maybe I would, based on other requirements of the project).

Oh, and as a side note: avoiding strict deadlines will probably improve the quality of your applications. Of course, educating managers is hard :stuck_out_tongue:.


I always write unit tests. It is not always TDD; sometimes it is code before tests, if I know how to implement a module. After implementing a module you should always check your work. It can be manual terminal-based testing or unit testing.
A couple of days ago a colleague of mine worked on one of the project’s issues, of course without TDD or any tests. And you know what? He spent double the time: testing in the console, and then writing unit tests after the implementation.


If you write tests rather quickly, I think in the long run it will actually save you time, because when you implement a new feature or change a few lines of code, the tests immediately show all the errors that arose from those changes (if the test coverage is good). Testing manually, you have to check all the actions yourself, and you can miss some test cases.
