Opinions/perspectives wanted: abstract data API or not?


I’ve been following José’s advice on mocks at http://blog.plataformatec.com.br/2015/10/mocks-and-explicit-contracts/ and been quite happy with the results thus far.

I’ve come to the point where I need to decide whether to mock the connection to my datastore. I’ll be using a traditional relational database in production to persist state changes in my app. There will be points where my app will need to query the data store to retrieve information to act on, and also put results into the data store. Pretty standard stuff.

So, is the data layer worth mocking? I can see arguments on both sides.

I could skip any abstraction except for choosing which database to connect to in prod vs. dev vs. test. That would be simple and workable.
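For what it’s worth, the “no abstraction” option can be as small as a per-environment config entry. A minimal sketch, assuming a standard Mix project with `config/dev.exs`, `config/test.exs`, and `config/prod.exs` (the app and repo names here are hypothetical):

    # config/dev.exs — equivalent entries go in test.exs and prod.exs
    import Config

    config :my_app, MyApp.Repo,
      database: "my_app_dev",
      hostname: "localhost",
      pool_size: 10

The code itself never branches on environment; it always talks to `MyApp.Repo`, and Mix picks the right config file at compile time.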

Or I could build a data interface as a behavior, and implement the behavior for both my relational database and a mock data store used for testing. More work, but then I’ve decoupled the database from my test suite, and left open the possibility for any other crazy data store in the future via building another implementation of the data API behavior.
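To make the second option concrete, here is a minimal sketch of what “data API as a behaviour” could look like. All module and function names (`DataStore`, `MapStore`, `get/1`, `put/2`) are hypothetical, and the test implementation is a plain in-memory `Agent` rather than a real database:

```elixir
defmodule DataStore do
  # The behaviour: any data store must implement these callbacks.
  @callback get(id :: term) :: {:ok, map} | {:error, :not_found}
  @callback put(id :: term, record :: map) :: :ok
end

defmodule MapStore do
  @behaviour DataStore

  # In-memory implementation for the test suite, backed by an Agent.
  def start_link, do: Agent.start_link(fn -> %{} end, name: __MODULE__)

  @impl DataStore
  def get(id) do
    case Agent.get(__MODULE__, &Map.fetch(&1, id)) do
      {:ok, record} -> {:ok, record}
      :error -> {:error, :not_found}
    end
  end

  @impl DataStore
  def put(id, record) do
    Agent.update(__MODULE__, &Map.put(&1, id, record))
  end
end
```

A relational implementation would implement the same callbacks on top of the database, and the rest of the app would pick the module via config (e.g. `Application.get_env/2`) so tests can swap in `MapStore`.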

Has anyone else wrestled with a similar question? Curious to know…


Halfway answering my own question, I found this very clear answer from José on a sane unit/integration testing approach for Ecto-based apps: https://groups.google.com/d/msg/elixir-ecto/BKpLf092dWs/VaCvfZpEBQAJ

That seals the deal for me on not mocking the database connection.

I’m still debating whether to wrap my database queries with an API via a behavior…


It’s been my experience, in 23 years of professional software development, that while leaving an abstraction layer so you can replace your datastore is a nice theoretical idea, in practice that replacement rarely happens. Only you know your specific situation and have the best knowledge to make that risk assessment, but in my experience it is rarely worth it.

I’m actually learning Postgres more deeply myself, because I know it is powerful and I expect my code to be tied to it.


I’m still debating whether to wrap my database queries with an API via a behavior

Rather than wrapping, I like to expose functions that compose Ecto.Changeset and Ecto.Multi, letting you explicitly submit the changes to the DB near the edge of your application.


Would you be willing to share a bit of example code to illustrate your approach? I have an idea of what you mean, and I think seeing some actual code will help it click fully into place.


Here’s a snippet from https://github.com/everydayhero/scout/blob/master/lib/scout/core/core.ex

The idea is that the top-level module (the “Phoenix context”) maps the incoming params onto a command, which runs and returns a Multi. None of the effects hit the DB until the Multi is sent to Repo.transaction.

  @doc """
  Records a survey response.

  Returns {:ok, %{survey: survey, response: response}} on success, {:error, errors} on failure.
  """
  def add_survey_response(params) do
    with {:ok, cmd} <- AddSurveyResponse.new(params),
         {:ok, survey} <- find_survey_by_id(cmd.survey_id) do
      cmd
      |> AddSurveyResponse.run(survey)
      |> Repo.multi_transaction()
    else
      {:error, changeset = %Changeset{}} -> {:error, ErrorHelpers.changeset_errors(changeset)}
      {:error, errors} -> {:error, errors}
    end
  end

Yep, clear now. Thanks for pointing me to that demo project, very educational!