This is a philosophical question, but what advantages are there to defining private functions? You can’t test them, you can’t even put a doc-block on them. I figure that alone is enough to make me wonder why they exist. Back in the old Perl days, the “private-ish” functions would begin with an underscore, but it was up to you as to whether or not you risked using them. Granted, that can spell trouble in OO land, but I don’t see that as much of a problem for a functional language.
Without private functions, every function becomes public. I think the idea here is to define the interface as public and hide the logic inside private functions. For example, if you want a function that makes coffee, there can be a lot of logic that gets extracted into many small functions, but those functions don’t make any sense without the main make_coffee function. Defining them as public doesn’t make sense, and we don’t want to expose them outside the module. So basically this helps in abstracting away the logic.
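As a minimal sketch of that idea (all names here are made up for illustration): the module exposes a single `make_coffee/1` function, while the individual steps stay private because they don’t mean anything on their own.

```elixir
defmodule CoffeeMachine do
  # Public interface: the only function callers are meant to use.
  def make_coffee(beans) do
    beans
    |> grind()
    |> brew()
    |> pour()
  end

  # Implementation details: each step only makes sense as part of make_coffee/1.
  defp grind(beans), do: {:grounds, beans}
  defp brew({:grounds, beans}), do: {:brewed, beans}
  defp pour({:brewed, beans}), do: {:cup, beans}
end
```

Calling `CoffeeMachine.make_coffee(:arabica)` returns `{:cup, :arabica}`, and the grind/brew/pour steps are free to change without anyone outside the module noticing.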
One of the things that I try to keep in mind as I’m writing code is that the code should communicate intent. The structure of the code should help to explain to the person coming along behind you what the code is trying to accomplish.
In my opinion, creating a private function with defp communicates that this function is an implementation detail of another operation. Basically it says: “if you want to understand what this module is supposed to do, you should probably be looking at the function that calls this one”.
For “humans”: it provides a smaller public API. And no, @doc false does not provide the same thing, as it merely hides a function from the documentation and doesn’t prevent an end user from calling it (see the sketch below).

For “hardware”: it allows the compiler to optimise private functions more aggressively (for example by inlining them), which isn’t possible for public functions, as those must remain present in the compiled module for remote calls.
In short, it gives you code that is faster and harder to misuse. For me that’s a 100% win.
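A quick illustration of the difference (the module and function names below are hypothetical): a function marked `@doc false` is only hidden from the generated documentation and remains perfectly callable, while a `defp` function can’t be reached from outside the module at all.

```elixir
defmodule Hidden do
  @doc false
  def undocumented, do: :still_callable

  # Local calls to the private function are fine.
  def public, do: truly_private()

  defp truly_private, do: :private_result
end

Hidden.undocumented()    #=> :still_callable (hidden from docs, not from callers)
Hidden.public()          #=> :private_result
# Hidden.truly_private() #=> ** (UndefinedFunctionError)
#                            function Hidden.truly_private/0 is undefined or private
```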
If private functions are big enough that you need to make them testable, then you should extract them into a separate module anyway. Alternatively, if you are brave enough, you can write tests within the module itself.
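As a rough sketch of the extraction idea (hypothetical names again): once a private grinding step grows enough to deserve its own tests, it can become a module with a small public interface of its own.

```elixir
defmodule Barista.Grinder do
  # Extracted once the grinding logic grew; now it has a public,
  # directly testable interface.
  def grind(beans, coarseness \\ :medium), do: {:grounds, beans, coarseness}
end

defmodule Barista do
  def make_coffee(beans) do
    beans
    |> Barista.Grinder.grind()
    |> brew()
  end

  defp brew({:grounds, beans, _coarseness}), do: {:cup, beans}
end
```

Now `Barista.Grinder.grind/2` can be tested on its own, while `Barista` still exposes nothing but `make_coffee/1`.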
I think ANY function should be testable. It’s not a question of size. It’s a question of coverage. Some of the functions don’t make much sense anywhere else.
I disagree, but rather than trying to change your mind, I’d tell you to live your belief: write no private functions, and your testing concerns are solved. But recognize that not everyone agrees with you, and they have their reasons for using private functions. No need to question why defp exists; just realize that others see value in private functions.
I understand why you’d think this, as it sometimes seems like the easy path forward, since the core logic often lives in the private functions while the public functions do very little work themselves. However, there’s something missing here, and it’s a lesson I learned from Ruby: tests become brittle when they know too much about the implementation, i.e. the private functions and their interfaces.
In Ruby, we stub too much out in order to make testing easier, and then when the implementation changes the tests break even though the code still works. Here, you’re suggesting something similar: testing the private functions directly in order to make them easier to test. That makes things easier now, but it will become more difficult later, because the tests now know too much about the implementation. Tests should validate the public contract of a module or system; when your tests look behind the curtain like this, you sacrifice the ability to refactor your code quickly, and that hurts long-term maintainability and productivity.
Unfortunately, none of the answers above really answer the question in detail.
Arguably, the biggest advantage of using private functions is that they abstract the implementation away.
This means that the details of a piece of functionality are locked away.
If other code relies on internal parts of a module and you change that module, you can’t know whether you are breaking the code that depends on those functions.
If, instead, the module exposes a single function that takes arguments and returns a final result, with no other part of the module public, you are free to change the implementation with 100% certainty that your change isn’t breaking code that may rely on the module’s internals.
Of course, if you change the shape of the data that the function returns, that is another story…
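To make that concrete, here is a hedged sketch that borrows the hypothetical CoffeeMachine from earlier in the thread: the private steps have been rewritten entirely, but since the public contract (beans in, `{:cup, beans}` out) is unchanged, no caller can tell the difference.

```elixir
defmodule CoffeeMachine do
  # The public contract is unchanged: beans in, {:cup, beans} out.
  def make_coffee(beans), do: extract(beans, pressure: 9)

  # The private steps were rewritten entirely; no caller can tell, because
  # none of them ever reached grind/1, brew/1 or pour/1 directly.
  defp extract(beans, _opts), do: {:cup, beans}
end
```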
You don’t want to test everything. Tests exist to stress the interfaces, not their implementations. This is a principle from Kent Beck, the creator of TDD itself.
You can use a Sandi Metz trick: do test your private functions, then delete the tests.
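In Elixir a test module can’t call a defp directly, so one way to read that trick is: temporarily expose the helper (for example by switching defp to def while you work through the problem), write a throwaway test like the sketch below (hypothetical names), and delete it again once the public interface is covered.

```elixir
# Scaffolding test for a helper that was temporarily switched from defp to def;
# delete this module once make_coffee/1 itself is well covered.
defmodule CoffeeMachine.ScaffoldingTest do
  use ExUnit.Case, async: true

  test "grind/1 tags the beans" do
    assert CoffeeMachine.grind(:arabica) == {:grounds, :arabica}
  end
end
```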
This is an interesting discussion. Thank you all for your thoughts. Yes, the app gets really brittle when everything is tested… while I’m working through a problem, I find it invaluable to test the smaller components, but by the time it ships, it’s only the integration points that really matter.
I define each as a component and unit test its public interface. I may create tests for private functions, but I delete them afterwards or just comment them out.
Then, when I need to test several modules together, I do integration testing.
The way I approach testing is to test the behaviour rather than the implementation. defp helps with this: abstract the implementation away behind private functions, and then the only testing that can be done is of the behaviour of the public interface.
How the goal is achieved is generally not very important – this may change often, and as others have said, can lead to many brittle tests and lots of refactoring. What is more important is the outcome: whether the module behaves the correct way. As @Fl4m3Ph03n1x mentioned, this is what Kent Beck originally intended with TDD.
If you’re concerned about the behaviour of the private functions, exercise the public interface with more targeted tests. BDD is a great approach for this, and the descriptive test names it encourages can help you spot gaps in your testing.
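In practice that might look like the following sketch, reusing the hypothetical CoffeeMachine from earlier: the tests only assert on what `make_coffee/1` returns, never on the private steps, so the implementation behind it can be reshuffled freely.

```elixir
defmodule CoffeeMachineTest do
  use ExUnit.Case, async: true

  describe "make_coffee/1" do
    test "turns beans into a cup of coffee" do
      assert CoffeeMachine.make_coffee(:arabica) == {:cup, :arabica}
    end

    test "works for any bean variety" do
      for beans <- [:robusta, :liberica] do
        assert CoffeeMachine.make_coffee(beans) == {:cup, beans}
      end
    end
  end
end
```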