Agree with @hubertlepicki. The fundamental thing is that tests should add value. They can prove that your code works. Show that it works through testing. If you can do so, you are adding value.
Yes, that’s natural. My hope is that by limiting the scope of the question, I get at the principles behind the different approaches instead of going into the specifics of why one particular approach is better
So when you change code now, how do you know that it still works?
That’s the type of thinking that I use to separate the signal from the noise.
Each test has to have value - the primary one for me is “to have my back” when I refactor. If it doesn’t pull its weight or worse gets in the way, it has to go (which ultimately makes deciding what to test and writing good tests non-trivial).
Even Kent Beck said: “I get paid for code that works, not for tests”
Tests can not only serve as a validation tool, but can also be a design tool. They should be done from the perspective of the caller of the code you are writing and help drive out what shapes of parameters and return values are desirable to the caller rather than just ones that are convenient for the implementor.
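To illustrate that caller-first idea, here’s a sketch (module and function names are hypothetical, just for illustration): writing the test call first drives out an options list and a tagged-tuple return, rather than whatever is easiest to implement.

```elixir
# Stub implementation so the sketch runs; the point is the call shape below.
defmodule Report do
  def generate(rows, format: :csv) do
    {:ok, Enum.map_join(rows, "\n", &Enum.join(&1, ","))}
  end
end

defmodule ReportTest do
  use ExUnit.Case

  test "caller gets a tagged tuple and can pick the format" do
    # Writing this call first drives out a keyword-list option and an
    # {:ok, _} return — shapes chosen for the caller, not the implementor.
    assert {:ok, csv} = Report.generate([["a", "b"], ["c", "d"]], format: :csv)
    assert csv == "a,b\nc,d"
  end
end
```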
They must always be green if everything works and at least one of them must turn red if there is a problem somewhere in your program.
False positives (red tests when there is no problem) are better than false negatives (green tests when something is broken). That would be it if I had to condense it.
I like this value-added approach
This makes sense to me
I tested it manually: I had a known input, executed the function, and checked whether the output was the one I expected. Not a great approach, which is why I want to do things properly (but I didn’t know what to test)
I liked that answer, sort of makes testing personalized to the team
I advise two things. First: start simple. Second: use `mix test.watch`. Live feedback is golden.
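For anyone who hasn’t used it: `mix test.watch` comes from the `mix_test_watch` package on Hex, added as a dev/test dependency in `mix.exs` (version constraint here is just a plausible example — check Hex for the current one):

```elixir
# In mix.exs:
defp deps do
  [
    {:mix_test_watch, "~> 1.0", only: [:dev, :test], runtime: false}
  ]
end
```

Then running `mix test.watch` re-runs the suite every time you save a file.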
You have the start of your first test already! The goal is to automate the known input and then also the check against the expected output.
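Concretely, that manual “known input, expected output” check translates almost one-to-one into an ExUnit test. A minimal sketch, assuming a hypothetical `Invoice.total/1` standing in for whatever you were checking by hand:

```elixir
# Hypothetical module standing in for the code you tested manually:
defmodule Invoice do
  def total(line_items) do
    Enum.reduce(line_items, 0, fn %{price: p, qty: q}, acc -> acc + p * q end)
  end
end

defmodule InvoiceTest do
  use ExUnit.Case

  test "known input produces the expected output" do
    # The input you used to type in manually…
    line_items = [%{price: 10, qty: 2}, %{price: 5, qty: 1}]
    # …and the check you used to do by eye:
    assert Invoice.total(line_items) == 25
  end
end
```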
I initially didn’t understand the value of tests. I think it was because I was testing something extremely simple. Once I was testing something more complex and the manual input got complicated & time consuming and the check became painful & error prone, it made more sense.
The value of protection against future changes breaking existing code became more clear when I started testing my side projects instead of stand-alone functions or methods in the testing books/tutorials I was using to learn.
Test EVERYTHING, then learn what you really should be testing. But first, test everything.
The longer answer should actually be more like: test everything, and then learn what makes a test brittle. Identify the key components where the cost of reworking tests as the working code changes is worth the investment of maintaining test code. Learn to recognize when the dogma, in either direction, does and doesn’t fit your requirements. Learn that mocking is neither good nor bad, and neither are types. Finally, learn that some people in the industry make a living from promoting one dogma or another, so treat everyone as a snake-oil vendor until proven otherwise.
Thank you for sharing it. I really liked the talk; I like getting as close as possible to the source of information
Simple but good advice
Yeah I don’t see much value in testing things so simple
Yes, this is where I am and why I started the thread
(Coming from an automation tester)
Tests are there to automate the boring parts of testing, not to replace testing.
Tests, like most code, need to be maintained.
Tests that don’t catch issues are worthless. A test that always passes or always fails isn’t doing any work, and a flaky test actively wastes time and effort.
(I might add some stuff later)
Tests are not only about the code right now, but also about any future additions you might make. Even if a calculation is simple right now, automated tests are a cheap way to ensure the results stay the same even when you need to bolt on other business requirements later. Take a simple function and add instrumentation, metrics collection, and extensive logging to it, and it’s no longer the simple piece it was before.
I’m reading Property-Based Testing with PropEr, Erlang, and Elixir (pragprog) at the moment; cool stuff!
So, I suppose my advice would be: testing can be so much more than just a suite of hard-coded sample data.
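As a taste of what “more than hard-coded sample data” looks like, here’s a property-based sketch. The book uses PropEr; this example uses StreamData (the `stream_data` package, via `ExUnitProperties`), but the idea is the same: state a property that must hold for all generated inputs instead of a handful of fixed examples.

```elixir
defmodule SortPropertyTest do
  use ExUnit.Case
  use ExUnitProperties

  property "sorting preserves length and is idempotent" do
    # The framework generates many random integer lists for us:
    check all list <- list_of(integer()) do
      sorted = Enum.sort(list)
      # Properties that must hold for every input, not just sample data:
      assert length(sorted) == length(list)
      assert Enum.sort(sorted) == sorted
    end
  end
end
```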
My personal favorite: tests are not primarily written to assert that your code is correct. Tests are there to lower the cost of changing your code.
In other words, the beneficiary of your tests isn’t so much you and your code right now, but whoever will change your code in the future.
Being mindful of who you write tests for helps a lot in choosing what to test and how.
One simple rule: after you decide what to test, be sure to first make the test fail. Then make it pass.
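A red-then-green sketch of that rule (the `Slug` module here is hypothetical): write the assertion for behaviour that doesn’t exist yet, watch it fail, then implement just enough to make it pass.

```elixir
defmodule SlugTest do
  use ExUnit.Case

  test "titles become lowercase dashed slugs" do
    # Step 1: run this before Slug.from_title/1 exists — it fails (red),
    # which proves the test can actually catch a problem.
    assert Slug.from_title("Hello World") == "hello-world"
  end
end

# Step 2: the minimal implementation that turns the test green:
defmodule Slug do
  def from_title(title) do
    title
    |> String.downcase()
    |> String.replace(" ", "-")
  end
end
```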
Knowing what makes a test effective and what doesn’t is key, and you can only figure that out through the experience of writing tests and arriving at your own conclusions.
Lots of good points here, and I don’t want to duplicate responses.
Biggest pet peeve is chasing code coverage. All too often, this creates a suite of brittle, ineffective tests.
Also a big fan of the exercise of having colleagues write tests for the implementation created by their team members. Helps validate/invalidate the overall design of the implementation.
BDD / TDD also get criticized.
My number one piece of advice is: just do it.
My second piece of advice (as @csisnett also says) is: start simple.
As many others have noted here, there are tons of approaches to testing. If you’re lucky some of the approaches are spot-on to what you’re doing. If not you need to try some things out, and rework them along the way.
Don’t be afraid to do something wrong. You will get better along the way, so refactor your tests as necessary as well.
If you are working on an existing code base, start with writing tests for new code, or for code-paths that you are most comfortable with. Then let things grow from there.