I have written many projects, and never a single test.
A risk to be aware of while practicing TDD is giving up cohesion while decoupling to get more testable units:
http://david.heinemeierhansson.com/2014/test-induced-design-damage.html (link was already provided in BDD / TDD criticized)
Some devs, especially the ones who do TDD, may think that tests are also the software requirements/specs. There are RSpec and Cucumber, whose communities are some of the most prominent TDD supporters. I still remember that Michael Hartl originally endorsed TDD with RSpec in his astoundingly popular Ruby on Rails book. That book was read by so many Ruby on Rails developers, and thus began the era of TDD in Ruby on Rails, until DHH himself denounced it, and Michael Hartl later removed TDD from his book.
TDD actually tries to break the traditional mindset of the old Waterfall methodology, where requirements are defined as a whole in the beginning. Then you do the coding phase. And then you test that code against what was required (in the requirements) in the first place.
TDD was born inside the Agile mindset. The Agile Manifesto has four values:
- Individuals and interactions over processes and tools
- Working software over comprehensive documentation
- Customer collaboration over contract negotiation
- Responding to change over following a plan
Individuals and interactions are embraced by pair programming and customer collaboration. Working software is embraced by TDD. Customer collaboration is embraced by shorter but more frequent cycles of continuous delivery; TDD tries to fill in this point as well, especially with something like Cucumber. Responding to change is also embraced by TDD with its red-green-refactor cycle.
Agile also states: working software is the primary measure of progress. To achieve this, TDD supporters promote TDD as a helper, because TDD can assist devs in ensuring a green state on every delivery.
Some extreme TDD practitioners write the specs/Cucumber features during conversations with customers/business people. Others practice TDD with pair programming: one dev writes the spec, the other writes the code.
There is also a tool in Rails that automatically reruns the tests after the code has changed, which makes pair programming a bit easier (though Uncle Bob himself disagrees with this).
So there you go, the origin of TDD. Sounds reasonable.
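In concrete terms, the red-green-refactor cycle described above looks something like this. A minimal sketch in plain Ruby; FizzBuzz is an invented example, not anything from this thread:

```ruby
# Red-green-refactor, compressed into one file.
#
# Step 1 (red): the expectations below are written first -- running them
#   before `fizzbuzz` exists fails, which is the "red" state.
# Step 2 (green): write the simplest implementation that passes.
# Step 3 (refactor): restructure freely; the checks below stay green.

def fizzbuzz(n)
  return "FizzBuzz" if (n % 15).zero?
  return "Fizz"     if (n % 3).zero?
  return "Buzz"     if (n % 5).zero?
  n.to_s
end

# The expectations that drove the design:
raise "red!" unless fizzbuzz(3)  == "Fizz"
raise "red!" unless fizzbuzz(5)  == "Buzz"
raise "red!" unless fizzbuzz(15) == "FizzBuzz"
raise "red!" unless fizzbuzz(7)  == "7"
puts "green"
```

In real practice the expectations would live in an RSpec or Minitest file and be rerun on every change; the point is only that the test exists, and fails, before the code does.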
Do I practice it? Nope. So many times, for me, some principles from the old Waterfall just work: plan, write requirements, design, and think ahead before carefully writing the code. Then test according to those plans and requirements.
Plans changed? Go back to the planning phase before writing more code.
That is your prerogative. However, that doesn’t change the fact that many organizations, especially smaller ones, proclaimed “we don’t have time for testing” before Extreme Programming coined the term Test-Driven Development. Testing was often delayed until user acceptance testing, at which point defects were layered upon defects. Fixing defects later usually costs more time and money, and that is waste.
People rarely have the discipline to go back and build tests around code they already believe to be working (and to verify that the test will in fact detect a defect), because they could be writing more production code instead.
So when I read:
time to deliver my project is less as writing test and design code pieces could be time taking.
What I see:
- We barely have enough time to develop the product even if everything goes according to plan and every decision made is the right one.
- We need to start generating “product code” immediately, so there is little time to “think about it” (i.e. design).
- We have no time to write code for tests as we barely have enough time to code the (right) product.
which is a recipe for disaster. So yes, I actually believe that the opening post is seeking to legitimize minimizing and potentially completely eliminating any automated testing effort - it’s not just about TDD.
For me TDD is about product code always having automated, executable tests that document its behaviour, creating the opportunity to refactor without hesitation. I personally don’t care if the tests are written first - but they have to be red first. And in my opinion heavy use of sophisticated mocking libraries is a smell. Your tests are telling you:
- Your code structure is suboptimal
- Your boundary is in the wrong place
- Implementation details are leaking through the boundary
- The test is in the wrong place
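As a hypothetical illustration of the smell (a sketch using Minitest from Ruby's standard library; the `Report`/repository names are invented): a mock that pins the exact internal calls couples the test to implementation details, while a plain fake at the boundary only checks observable behaviour.

```ruby
require "minitest/autorun"

# Hypothetical example: a report built on top of a repository collaborator.
class Report
  def initialize(repo)
    @repo = repo
  end

  def line_count
    @repo.lines.size
  end
end

# A plain in-memory fake standing in at the boundary.
FakeRepo = Struct.new(:lines)

class ReportTest < Minitest::Test
  # Smell: the mock pins the exact internal call (`lines`), so the test
  # breaks on any internal restructuring, even when behaviour is unchanged.
  def test_with_a_mock
    repo = Minitest::Mock.new
    repo.expect(:lines, %w[a b c])
    assert_equal 3, Report.new(repo).line_count
    repo.verify
  end

  # The fake only cares about observable behaviour at the boundary.
  def test_with_a_fake
    assert_equal 3, Report.new(FakeRepo.new(%w[a b c])).line_count
  end
end
```

When a test like the first one needs a stack of mocks to even compile, that is the feedback the list above is describing.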
In reference to Chad Fowler: “Tests are a design smell” is meant to be a provocative statement towards a community which glorifies 100% test coverage.
Don’t let your tests be an anchor; maybe it’s more important to monitor the runtime behaviour of your code than it is to test it.
i.e. it’s about balance:
- static type checking doesn’t replace testing
- tests can’t replace runtime monitoring
- some scenarios are too costly to test - provided the manifestation of a defect in production is largely inconsequential, will be quickly noticed and will be quickly addressed and rectified.
Also he talks about ‘Code “this big”’, i.e. code that is small enough to be replaced wholesale. After a rewrite, how do you know it is “mostly safe” to deploy the rewritten code to production? The approach strongly suggests the existence of a test harness emulating the real operating environment that can run tests (scenarios) against the rewritten code. I.e. the tests only verify the correct behaviour of the “component” and are isolated via the constraints imposed by the operating environment.
I can understand the cynicism that you project towards certain consultants and consultancies that market TDD/BDD as some kind of easy street cure all (once again focusing more on process than intent) but one shouldn’t “throw the baby out with the bathwater” and more importantly not let it become an excuse to cut back on essential testing activities (not implying that you were suggesting that - but in this topic I think there is a very real danger of that interpretation).
Testing isn’t easy and good testing doesn’t “come naturally”.
I think it is important to remember that “novices” want hard and fast rules because they’re easy. “It depends” while usually appropriate isn’t exactly helpful from a novice’s perspective. Dogma via “Development by Slogan” be it DRY, TDD/BDD, etc. can be a real problem.
Tests restrict/slow down redesigning of the system which in these cases is vitally important to be able to quickly do.
This one is a bit of a slippery slope - we’ve all done it. But again context determines the risks we are taking in doing so. In some circumstances the risks are low enough but in others designing your tests (scenarios) is designing your software.
(cf. Gherkin; it is super cool.)
I would probably not do Test-first development when there are strict deadlines
Statements like this are dangerous because they will be construed by outsiders and novices that TDD in particular and testing in general is inefficient and expendable/optional. It is intuitive that you will save time now if you don’t create the test right now. It is counterintuitive how much more expensive many defects will get the longer they remain in the codebase.
That is likely a reflection of the size of the projects. The flip side is that while it may feel like overkill to use testing tools on smaller projects, it’s the perfect time to become accustomed to them before you embark on a larger project. Tests are also a feedback mechanism - if it’s hard to write a test then there may be a problem with your design. The worst thing is having to introduce testing after the fact, especially if there was no other incentive to decouple in all the right places.
Managing the Development of Large Software Systems (1970), i.e. “the Waterfall Paper”. Figure 2 shows the “ideal waterfall” - figures 3 and 4 acknowledge the realities, i.e. that iterations will happen - which is acknowledged 16 years later in A Spiral Model of Software Development and Enhancement (1986).
Ultimately, to “reduce waste” you need to tighten the feedback loops: 1) allow the customer to discover as early as possible what they actually need rather than what they think they want; 2) be notified that you are breaking important things when you are making changes.
A software system can best be designed if the testing is interlaced with the design instead of being used after the design.
Alan J. Perlis (1968)
All fine and dandy. We do not agree on much. Italic, bold or underlined text can be seen as a need to convince. Not everyone likes that, and its success rate is scientifically unproven.
allow the customer to discover as early as possible what they actually need rather than what they think they want;
No matter what methodology the dev applies, this suggests that the dev failed to capture the customer/business requirements.
We have a ton of tools for that, from ERDs and use case diagrams to mocks/wireframes/prototypes; they can serve as requirements and designs, and can be used as references for testing. Contract negotiation is sometimes needed, instead of always changing the requirements at the end of the work and making the tickets carry over to the next sprint iteration. This is what frustrates developers and further widens the gap between developers and the business.
I have positive experiences with TDD. If you already are familiar with the tools, TDD might well save you time. What I’m seeing is more time spent coding, but drastically less time spent bugfixing after feedback from users.
If your software is deployed in one place and easily updated, this benefit carries less weight. If your software is installed on your users’ machines, TDD is probably a timesaver.
In the long term TDD is probably also a timesaver, once you start refactoring and redesigning, and it serves as additional documentation for new developers.
So imho there are not many projects where I would choose not to use TDD.
Personally I wouldn’t use TDD even in a project without a deadline. I see very little value in driving the design of each and every individual function with a unit test. Testing a boundary, like a function that sends a message via Kafka, is exactly the right level for me, and it’s the one that’s the culmination of all of the processing in the system. When that one is correct, we know that the people who depend on that message are getting what they should. The same obviously goes for important API boundaries in libraries as well as REST APIs, etc.
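A sketch of what such a boundary-level test can look like (all names here are hypothetical, and a hand-rolled fake producer stands in for a real Kafka client):

```ruby
require "json"

# A fake producer injected at the boundary: it records what would have
# been published instead of talking to Kafka.
class FakeProducer
  attr_reader :published

  def initialize
    @published = []
  end

  def publish(topic, payload)
    @published << [topic, payload]
  end
end

# The boundary under test: all upstream processing culminates in this message.
def emit_order_event(producer, order)
  producer.publish("orders", JSON.generate(id: order[:id], total: order[:total]))
end

producer = FakeProducer.new
emit_order_event(producer, id: 42, total: 99)

topic, payload = producer.published.first
raise "wrong topic"   unless topic == "orders"
raise "wrong payload" unless JSON.parse(payload) == { "id" => 42, "total" => 99 }
puts "boundary test passed"
```

The test never looks inside the processing pipeline; it only asserts on the one message that downstream consumers actually depend on.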
I would highly recommend you write tests. If you do not, you are just creating technical debt for tomorrow.
This is a personal preference, but I can’t stand working with code that does not have tests.
How is a new programmer supposed to come in and work on your codebase if you do not write tests? They would have to learn all of the nuances and APIs your brain came up with. The alternative: they come in, run your test suite, begin writing their own tests, and GET TO WORK. They can read your unit / integration tests to learn how the code works instead of reverse-engineering it from your code.
You know that thing you do where you open up Postman, hit “send”, and look at the payload value to check whether your code is working? You can do that automatically if you write unit and integration tests.
Tests are documentation for how to use APIs. Tests save you time by not having to go through the REPL loop for every little thing you want to do.
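The Postman ritual above can be captured as a repeatable test. A minimal sketch, where the `create_user` handler and its payload shape are invented for illustration:

```ruby
require "json"
require "minitest/autorun"

# Hypothetical request handler: the thing you would otherwise poke with
# Postman and eyeball the response of.
def create_user(params)
  { status: 201, body: JSON.generate(id: 1, name: params[:name]) }
end

class CreateUserTest < Minitest::Test
  # The same check as clicking "send" and reading the payload,
  # but automated and rerun on every change.
  def test_created_payload
    response = create_user(name: "Ada")
    assert_equal 201, response[:status]
    assert_equal "Ada", JSON.parse(response[:body])["name"]
  end
end
```

In a real Rails app this would be a request/integration test hitting the actual endpoint, but the payoff is the same: the check runs on every change instead of only when someone remembers to open Postman.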
TDD specifically? That is up to you. You could probably get away with writing integration tests at the boundaries of your API at the very least. But trust me: woe unto the person who decides writing tests is too slow. It’s not. It’s faster and safer and will lead to better quality code.
Edit: I’ll add that personally I like using TDD, it’s a more natural workflow for me.
Just one thing: I think that readability of code is more important than having tests, be they good or bad, when it comes to how easy it is for a new programmer to come in and work on a codebase. If a new programmer has to reverse-engineer your code instead of just reading it, that is a symptom of a serious problem, and having tests or not is not really a solution for a bad codebase.
A primary cause of complexity is that software vendors uncritically adopt almost any feature that users want. Any incompatibility with the original system concept is either ignored or passes unrecognized, which renders the design more complicated and its use more cumbersome. When a system's power is measured by the number of its features, quantity becomes more important than quality. Every new release must offer additional features, even if some don't add functionality.
Time pressure is probably the foremost reason behind the emergence of bulky software. The time pressure that designers endure discourages careful planning. It also discourages improving acceptable solutions; instead, it encourages quickly conceived software additions and corrections. Time pressure gradually corrupts an engineer's standard of quality and perfection. It has a detrimental effect on people as well as products.
more complex problems inevitably require more complex solutions. But it is not the inherent complexity that should concern us; it is the self-inflicted complexity.
Increasingly, people seem to misinterpret complexity as sophistication, which is baffling - the incomprehensible should cause suspicion rather than admiration.
That’s why lots of agile/scrum projects burst and die. It is probably the main reason why most startups fail. This might be an exaggeration, but we tend to forget about careful planning and design, which leaves the code we write so quickly every sprint with no specific plans (or even standards) made for the years ahead (because the business said so, because the deadline said so). Combined with the fast-paced, always-changing/always-updating/always-new/always-obsoleting technologies (like JS), it almost looks like a snowball effect for the devs. We can’t avoid it.
I always look at the oldies (engineers who work in old businesses like banking, stock trading, transportation, etc.): they always work with careful planning, a specific kind of professionalism, and contract negotiation, something we don’t see in the startup world. They might not get hyped over the new shiny-wow JS library out there. They probably don’t care about TDD or pair programming. But they know, they damn-sure-know, what they are going to code for the next year.
Meanwhile, most startup engineering (the tech or engineering department) is a perfect reflection of its business counterpart: quick iteration and quick market validation.
Developers in the new business market are getting better at writing code, and lots of coding bootcamps have been founded, but two important things in software engineering are missing: planning and designing, which probably hold little interest in developers' minds nowadays. Developers no longer know how to plan, document and design their systems. We are no longer involved. And we are no longer allowed to be.
This is the origin of the death of software design tools (why do we need one, when we have no careful planning in the first place?), and the rise of scrum/kanban/whatever. Our plans, designs and documentation are there, right on the board. Read them. And guess what happens near the end of the sprint? Let’s change it, so we have to adapt and refactor everything we have written; even worse, the ticket we are working on suddenly gets discarded. And by the way, we have to do this in only 2-3 weeks. Just a board with texts (given bla bla bla, I want to bla bla bla, and then bla bla bla) and whatever. No clear communication, no clear agreement on how to translate these business requirements into technical aspects. (Doesn’t this lead to misinformation and wrong delivery of the product to customers? What is wrong, what is right? Our documentation is only some paragraphs of subjective text that we, the devs, must subjectively translate into technical aspects.)
x: “Well, this feature is big. Don’t we need to carefully draw these requirements into some kind of charts we agree on, so we can understand them better in overview and in detail? Something like a database diagram or a use case diagram; we might draw some simple flowcharts to validate this design against the current one, so we can catch design issues early and adapt them to the current system in production.”
y: “do you have time for this?”
x: “no, but we can surely extend the schedule”
y: “we don’t have time”
x: “at least some UI mockups”
y: “just code and ship it, make sure it is green, this ticket is only 11 complexity points, you got 3 weeks”
x: “do you have any future plan that might correlate with this feature? at least something that we can anticipate in this feature?”
y: “we don’t know yet”
Like it or not, this is something that we have to deal with every day, until the end of our career.