Private, a library to test private functions, in 2022

Hi again, folks,

I really like the idea of testing my private functions, especially after reading this article by @pragdave, but the library he created is a little old, and so is the article; the last modification to the code was 2 years ago. So, I have two questions:

  • Am I taking on a lot of risk by using this lib, even though it seems “stable”?
  • Is it really worth testing as many private functions as I can, or has the article already drawn a strong opposing response?

I haven’t reviewed the library, but I don’t see why it would be unstable or dangerous to use. It’s a pretty small-scoped library, so it might just be considered complete and not need further changes.

This is a broad question, and nobody can answer it for your project except yourself. I personally do my best to extract all the bits and pieces that my higher-level business functions use into smaller utility functions, and I usually make them public throughout the project. Doesn’t hurt anything, IMO.


  • Have a smaller number of private functions. Abstract things away as long as they make sense as separate functions (use your experience and knowledge to make that call; there are no deadly consequences if you get it wrong).
  • The private functions that end up remaining private are not important enough to try and test in isolation. You will test the behaviour of your system, e.g. sending a particular combination of parameters to your controller, and will then check whether certain DB objects have been created / updated / deleted, certain emails have been sent, message queues filled, notifications sent, etc. The private functions are not important there. If the test fails, you’ll start tracking down what went wrong and fix everything along the way, private functions included.
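A hypothetical sketch of that kind of behaviour-level test in ExUnit (all module, route, and field names here are invented, and the email assertion depends on whichever mailer test helpers you use, e.g. Swoosh’s):

```elixir
# Hypothetical Phoenix controller test: drive the public boundary and
# assert on observable effects, never on private helpers.
defmodule MyAppWeb.OrderControllerTest do
  use MyAppWeb.ConnCase, async: true

  test "creating an order persists it and sends a confirmation", %{conn: conn} do
    conn = post(conn, "/api/orders", %{"sku" => "A-1", "qty" => 2})

    assert json_response(conn, 201)
    assert MyApp.Repo.get_by(MyApp.Order, sku: "A-1")
    # plus an email assertion, e.g. Swoosh.TestAssertions.assert_email_sent/1
  end
end
```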

Have a smaller number of private functions.

I’ve been thinking about this topic after seeing the following tweet thread. I often make more functions public than I would otherwise just to use them in tests (or in fake data generation modules to be used in tests or seeds) which I’d like to avoid…


I agree with “deep modules good, shallow modules bad” 95% of the time. That philosophy holds that the value-add of modules/classes is to simplify things so they are workable, which I absolutely agree with.

But every now and then you need a module exposed just so a certain responsibility is isolated to it. Example: data-access modules in enterprise settings; there you absolutely don’t want people using Ecto.Repo indiscriminately, so having something like update_user (as in the tweet) does make sense. But I believe we both agree that these cases are rarer and more specific.
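A hypothetical sketch of such a data-access boundary (the Accounts, User, and Repo modules are made up for illustration):

```elixir
# Hypothetical boundary module: callers go through Accounts,
# never through Ecto.Repo directly.
defmodule MyApp.Accounts do
  alias MyApp.{Repo, User}

  def update_user(%User{} = user, attrs) do
    |> User.changeset(attrs)
    |> Repo.update()
  end
end
```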

Having a not-super-useful module that simply points at itself and says “I am responsible for X in the project” is a value-add as well.

Point is, we must optimize for human readability and comprehensibility above all else. This means sometimes we must make small compromises with otherwise excellent practices – like the one outlined in the tweets.


I have been using Patch (v0.12.0 at the time of writing) quite a lot recently. It’s very well documented and behaves as advertised. It’s a powerful alternative to Mox, though it can also be used alongside it.

Private functions frequently go untested because they are difficult to test. Developers are faced with a few options when they have a private function.

  1. Don’t test the private function.
  2. Test the private function circuitously by calling some public functions.
  3. Make a public wrapper for the private function and test that.
  4. Change the visibility to public and put a comment with some form of, “This is public just for testing, this function should be treated as though it’s private.”

Patch provides a new mechanism for testing private functions, expose/2.
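Based on the Patch documentation, using expose/2 looks roughly like this. The Example module is made up here, and the snippet assumes Example.private_function/1 wraps its argument in an {:ok, _} tuple; check the Patch docs for the exact API:

```elixir
defmodule PatchExampleTest do
  use ExUnit.Case
  use Patch

  test "private functions can be tested after expose/2" do
    # Expose the private function to the test process; the private/1
    # macro wraps the call and suppresses the compiler warning.
    expose(Example, private_function: 1)

    assert private(Example.private_function(:argument)) == {:ok, :argument}
  end
end
```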


IMO if a function is complicated enough to merit its own test it’s probably complicated enough to merit a @doc - and putting one on a private function emits a warning (see many discussions, like this one for more).


That’s a great point @al2o3cr, though I wish there were a better-supported way to document functions that aren’t part of a module’s public API (especially when writing or reading a library’s code). Basically, I’d like to have documented and testable functions without having to worry, when refactoring, that users of the library are calling them directly.


The most inoffensive way would be to write modules that have @moduledoc false, or to simply name the module Internal and tell folks not to use it from the outside, but still document it anyway.
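A minimal sketch of that convention (the module and function are invented):

```elixir
# Hypothetical internal module: documented for maintainers,
# but hidden from the generated docs by @moduledoc false.
defmodule MyLib.Internal.Tokenizer do
  @moduledoc false

  # Splits input into whitespace-separated tokens.
  # Not part of MyLib's public API; call MyLib.parse/1 instead.
  def tokenize(string), do: String.split(string, ~r/\s+/, trim: true)
end
```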


I tend to default to private functions that I don’t bother to test in isolation (I mostly write integration tests). If I really need to test partial functionality, I wrap the private functionality in a public trampoline:

  if @env == :test do
    # Public trampoline, compiled only in the test environment.
    # (The argument list was elided in the original post.)
    def _my_private_function(args), do: my_private_function(args)
  end

I think this kind of explicit strategy is better because it unclutters your tab completion, and unlike strategies that expose functions sneakily, it minimizes the amount of surprise when reading code or getting stuck in a tight spot.

Side note: getting @env to be usable in an on-the-fly compilation in situations where you don’t have Mix is tricky; usually I save the environment in its own module.
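A hypothetical sketch of such an environment module (the MyApp.Env name is invented): the trick is that Mix.env() is evaluated at compile time into a module attribute, so the value is readable at runtime even where Mix is unavailable, e.g. in a release.

```elixir
defmodule MyApp.Env do
  # Capture the Mix environment at compile time.
  @env Mix.env()

  def env, do: @env
  def test?, do: @env == :test
end
```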


A big +1 to @evadne’s comment and I’d like to expand my thinking on this.

I try to design so that a child module is always considered private unless otherwise stated by documentation. If I have a module that covers a larger surface area, I break it up into smaller modules, test those, and expose those child modules’ public functions as defdelegates through their parent. If I have a module that covers a very large surface area, then I document the child module as public, and that becomes the boundary where nothing but functions are allowed “out”.
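A hypothetical sketch of that defdelegate pattern (Billing and its children are invented names):

```elixir
# Hypothetical parent module re-exporting its children's public functions.
defmodule Billing do
  # Billing.Charges and Billing.Refunds are tested directly,
  # but outside callers only ever see Billing.
  defdelegate charge(customer, amount), to: Billing.Charges
  defdelegate refund(charge_id), to: Billing.Refunds
end
```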

This is generally how most libraries work. For example, Phoenix exposes three “levels” of modules. The direct children are the major features and libraries, like Phoenix.Router and Phoenix.LiveView respectively (I’m sort of making up the names “features” and “libraries” for this context). Looking at the docs, all the feature modules only provide functions as their public API, even though they are composed of one or more sub-modules that we don’t (well, aren’t supposed to) touch. Phoenix.LiveView is its own thing and drills down one more level, where we have access to a bunch of its child modules. These children are also made up of submodules that we don’t (or shouldn’t) touch.

Otherwise, as @dimitarvp points out, nothing is set in stone and there is always room for compromise.


If I have functions in modules that only appear during testing then I am not testing the same code that would run in production.


Eh, testing has more than one function. I use this strategy the most when I’m doing TDD (I’m not religious about it) and for example it appears when I’m testing intermediate compile-time artifacts with NimbleParsec. In this case, the primary function of the test is clarity during the code writing phase and assistance in reading and understanding the code later. I personally would really hate to test complex parsers strictly in situations that are “the code you see in prod”.


The boundary library might also help you keep things public but still define who is allowed to use them and who isn’t.

Personally I don’t think there’s anything wrong with testing private functions. So I wouldn’t worry too much about it if the library does what you need.


FWIW, my personal belief is that no one knows enough about software development to be able to make rules: it’s just too young a discipline.

But, I also believe that:

  • defining helper functions as private sets a demarcation layer in my code: if the code is to be used by a third party, then making a function private means that I’m free to change it without worrying about unintended consequences.

  • tests can be helpful for some functions. Which ones? The ones where there’s doubt; the ones which are pivotal to the rest of the code; the ones where the tests help you explore the problem.

These two things do not necessarily correlate. I often find myself writing internal (private) functions which need tests.

“But, Dave” say those in the never-test-a-private-function camp. “Why not just test them indirectly via the public functions that use them?”

Because, I say (outside of quotes), that is often a lot of work. The higher level the function, the more state it assumes, and the harder the test is to write. And, typically, when you invoke these higher level functions, you’re calling a number of lower-level ones. The test loses focus.

Let’s look at this a different way. Let’s think about the demarcation of code in GenServers.

When I write GenServers, I typically split them into two files. One file contains the server stuff: start_link, init, and all the handle_xxx calls; the other contains the implementation.

Here’s part of a typical server:

defmodule ExMdns.Server.ReceiveDNSPackets do

  alias ExMdns.Impl.PubSub
  alias ExMdns.Impl.ReceiveDNSPackets, as: Impl

  use GenServer
  require Logger
  @me __MODULE__

  def start_link(_) do
    GenServer.start_link(__MODULE__, [], name: @me)
  end

  def init(_) do
    { :ok, Impl.initial_state }
  end

  def handle_info({ :pubsub, msg = %PubSub.PacketReceived{} }, state) do
    # The second argument was cut off in the original post; assuming
    # the raw packet travels on the pub-sub message.
    state = Impl.handle_incoming_packet(state, msg.packet)
    { :noreply, state }
  end

  def handle_info({ :pubsub, _ }, state), do: { :noreply, state }
end


None of these functions contain any logic (apart from pattern matching on their parameters). Instead, they delegate to a second module that contains the actual logic.

The functions in this second module contain the implementation of the logic behind the server. The server module calls an implementation function with the server state as a parameter, along with any additional parameters from the handle_xxx. The implementation function returns an updated state.

Here’s the implementation corresponding to that server:

defmodule ExMdns.Impl.ReceiveDNSPackets do

  alias ExMdns.Impl.PubSub
  require Logger

  def initial_state() do
    # the real state construction was elided in the original post
    %{}
  end

  def handle_incoming_packet(state, raw_packet) do
    try do
      raw_packet
      # the decoding step was elided in the original post
      |> decode_packet()
      |> publish_replies()

      state
    rescue
      e ->
        Logger.error("Ignoring invalid DNS packet: #{inspect raw_packet}")
        Logger.error(inspect e)
        state
    end
  end

  defp publish_replies(packet) when packet.header.qr == :reply do
    for answer <- packet.answers do
      PubSub.publish(%PubSub.AnswerReceived{ answer: answer })
    end
  end

  defp publish_replies(packet) do
    Logger.debug("ignoring query\n#{inspect packet, pretty: true}")
  end
end

Why is this relevant to testing? Because I can test the implementation functions standalone, without firing up a GenServer. And it’s easy to do, because my tests just create the state to pass in and then verify the state that comes back. In fact, I rarely run any GenServers in test; my application.ex looks like this:

def start(_type, _args) do
  children =
    case Mix.env() do
      :test ->
        []

      _ ->
        [
          # Registration,
          # ... the real child specs were elided in the original post
        ]
    end

  opts = [strategy: :one_for_one, name: ExMdns.Supervisor]
  Supervisor.start_link(children, opts)
end
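A standalone test of the Impl module described above might look something like this sketch. The packet binary is made up, and it assumes, per the rescue clause in the implementation, that an unparseable packet leaves the state unchanged:

```elixir
defmodule ExMdns.Impl.ReceiveDNSPacketsTest do
  use ExUnit.Case, async: true

  alias ExMdns.Impl.ReceiveDNSPackets, as: Impl

  test "an invalid packet is logged and the state is unchanged" do
    state = Impl.initial_state()

    # No GenServer involved: build the state, call the function,
    # assert on the state that comes back.
    assert Impl.handle_incoming_packet(state, <<0, 1, 2>>) == state
  end
end
```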

What has this to do with private functions?

It’s an illustration of how I believe that testing convenience trumps following rules.

Typically, Elixir developers would put the implementation functions in the same module as the server stuff. They’d then struggle with whether they should be private. Purity insists that they should: they are accessed via the interface, and so shouldn’t be exposed externally. Make 'em private, and then test via GenServer calls. Or, make 'em public, test them directly, and feel guilty.

Instead, this two-module approach sidesteps the dogma. In my apps, nothing in a subdirectory of lib/ should be accessed externally. Instead, I have a top-level api.ex that delegates each API function, calling .Server files if the implementation of the function uses a GenServer, and calling the implementation files directly for stateless functions.

It then becomes immediately obvious to any external user of the code what they’re allowed to use: anything with more than one module name in the path is internal to my application. It may be public, but that’s for my convenience and not for your use.
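A hypothetical sketch of such a top-level api.ex (the function and module names are invented for illustration):

```elixir
defmodule ExMdns do
  # Stateful operation: goes through the GenServer wrapper module.
  defdelegate subscribe(topic), to: ExMdns.Server.PubSub

  # Stateless operation: goes straight to the implementation module.
  defdelegate decode_packet(binary), to: ExMdns.Impl.Decoder
end
```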

Do I do this consistently? No: to be honest I’m experimenting with each app I write. This is where I am now.

TL;DR: forget what people tell you is right. Do what you feel makes your code easier to work with.


I do the same thing for gen servers (I’m pretty sure I got the idea from some article from you, haha).

But I tend to be, let’s say, a little bit more hardcore with the separation, I usually separate it into 4 files:

  • The server public API file:
defmodule MyGenServer do
  @moduledoc false

  alias MyGenServer.Server

  @spec child_spec(keyword) :: Supervisor.child_spec()
  defdelegate child_spec(opts), to: Server

  @spec do_something(any) :: :ok
  def do_something(some_data),
    do:, {:do_something, some_data})
end

  • The server gen_server implementation file:
defmodule MyGenServer.Server do
  @moduledoc false

  alias MyGenServer.Impl

  use GenServer

  def start_link(args),
    do: GenServer.start_link(__MODULE__, args, name: __MODULE__)

  @impl GenServer
  def init(some_args), do: {:ok, Impl.init(some_args)}
  @impl GenServer
  def handle_call({:do_something, some_data}, _from, state) do
    {reply, state} = Impl.do_something(some_data, state)
    {:reply, reply, state}
  end
end
  • The server business logic file:
defmodule MyGenServer.Impl do
  @moduledoc false

  alias MyGenServer.State

  @spec init(any) :: State.t()
  def init(args), do:

  @spec do_something(any, State.t()) :: {:ok, State.t()}
  def do_something(some_data, state) do
    # ... business logic using some_data goes here ...
    {:ok, State.update_something(state, 2)}
  end
end
  • And finally the state file:
defmodule MyGenServer.State do
  @moduledoc false
  @type t :: %__MODULE__{something: integer}

  defstruct something: 0

  @spec new(any) :: t
  def new(args), do: struct!(__MODULE__, args)

  @spec update_something(t, integer) :: t
  def update_something(state, something), 
    do: %{state | something: something}

It is a little bit more work to separate the server into these files, but IMO it makes the whole thing way easier to read, very easy to test and gives a very clear API for others to use.
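For example, a direct test of the Impl module might look like this sketch (assuming the Impl and State modules above, where do_something/2 replies {:ok, state}):

```elixir
defmodule MyGenServer.ImplTest do
  use ExUnit.Case, async: true

  alias MyGenServer.{Impl, State}

  test "do_something replies :ok and updates the state" do
    # Build the state directly; no process needs to be started.
    state = Impl.init([])

    assert {:ok, %State{something: 2}} = Impl.do_something(:some_data, state)
  end
end
```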

Yeah, I went through a phase of having the GenServer API file, but stopped doing it in favor of having a top-level file for the actual external API.

I also use a module if my state is more complex than a list, but I keep it as an internal defmodule State in the Impl file.

But those are just details: whatever works is good.


The need to test private functions is in my opinion and experience, a serious design smell.

Testing should be focused on the external behavior of the system, defined by its API. Why? Because this is the only thing that matters to anyone who will be using it. As long as the system behaves as expected, it doesn’t matter what its internal implementation is.

If you have a component and you realize that you can’t properly test it with its API, chances are the component is too complex and tries to do too many things. Try to break it down into smaller components, each one with its own API that you can test. Then add integration tests for the whole system.

Sure, it’s more work in the short term, but your future self and your team members who will be looking at your code later will be thankful.
The resulting system will be easier to understand, extend, and refactor. Less work in the long term.

TDD can help you guide your design towards components with clear, testable interfaces.


First you say to focus on external behavior.
A bit later you say to break a component down into smaller components, each with its own API.

What do you mean with external behavior? In the case of a REST API, is the REST API the only external behavior? Or does every module that we implement have external behavior for some other module using it?

What’s the difference between moving a private function into a separate module and testing that vs keeping the private function in the module and test it like that?

To be clear, I don’t think there is anything wrong with moving a complex private function into a separate module because that makes it easier to test, but I also don’t think it’s wrong to keep it private and test it in another way.


Also, code doesn’t need to be complex to be hard to test.

The behaviour of a GenServer with no public API, whose only job is to periodically call itself to perform some side effect, is built entirely on private behaviour that should not be used by any other module. I’ve had to deal with a bunch of them, and they are always nasty to test.

Whatever approach you take, you’re ripping the module open and testing its internals, not any “observed behaviour through a public API”. There’s no public API.

The process calling itself every X amount of time is not something you can test via a public API, so giving up on idealistic notions of how code should be tested in favor of pragmatism is best. This point is also raised in the Testing Elixir book by Andrea Leopardi, in the Testing OTP chapter.

You can decide to move some functions to a different module and test that, because now it’s no longer “private” and you can test it “normally”, but IMHO that’s just lying to yourself, and if there’s no other reason for that module to exist (splitting complex logic into smaller files, to name an example), it’s just an indirection that brings very little value.

At that point, just test the behaviour callbacks directly by providing some state. This may sound controversial, but if there’s no client API then no solution will appeal to everyone’s taste. FWIW, this would be just like testing a custom Ecto type by calling the callbacks directly instead of creating a schema and persisting it to indirectly test dump and cast.
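A sketch of what calling the callbacks directly might look like for such a periodic server (the Cleaner module, its options, and the :tick message are all invented):

```elixir
# Hypothetical test of a self-messaging GenServer: invoke the behaviour
# callbacks as plain functions with hand-built state, instead of
# starting the process and waiting for the timer to fire.
defmodule CleanerTest do
  use ExUnit.Case, async: true

  test ":tick performs the sweep and keeps looping" do
    {:ok, state} = Cleaner.init(interval: 0)

    assert {:noreply, _new_state} = Cleaner.handle_info(:tick, state)
    # ...then assert on the side effect the sweep was supposed to cause.
  end
end
```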

1000 times this


The latter. That’s what I mean by external behavior. The interfaces between modules or services.

Breaking down the logic among separate services is likely to make your code easier to read, understand, and reuse.

Let’s say you have a REST controller that needs: to a) access the database to fetch some data b) process the data in some way and c) call some external API to publish the data somewhere else.

Option 1: You build everything in the REST controller, which will do a, b and c. The public API is the REST API, you call it in a test and check it. All the nitty gritty of the logic from a, b and c live in private functions, and you test them too as part of the test suite for the REST controller.

Option 2: You spread a, b and c across their own modules or services with clear interfaces. You test the REST API in the rest controller, and the nitty gritty details of all things that can go wrong in a, b and c in their own tests suites. Since you’re working at a lower level, you can have more low-level, focused and precise tests.

Both options work. Both could lead to a perfectly tested and functioning system. But which option do you think is more likely to lead to a codebase (and test suite) which is easier to understand, refactor, and with components that can be reused in other parts of the codebase if needed? For me, the answer is clear.

I also don’t think it’s “wrong”. I didn’t want to give the impression that these are absolute rules you have to follow. But I think you have generally accepted good design principles, even in software engineering. There are surely exceptions, but there are always exceptions. I don’t think I ever added a test for a private function or method in my career - everytime I was tempted to do it because it was “easy”, I took a step (sometimes 2) back and realized that I had uncovered something not very good about my design. That’s what I meant by design smell.