Balancing Elixir Context Design with Flexible Web APIs

Recently I have been using Elixir Phoenix contexts to structure my software projects, which works really well and makes a lot of sense: the codebase stays clean, cohesive, and reusable. However, I have trouble reconciling this with creating a flexible, open web API for front ends to use, whatever the standard (JSON:API, GraphQL, etc.).

To further illustrate this problem, let me give you an example. Imagine you have a Users context with an exposed function called list_users, which might look like the following:

defmodule App.Users do
  def list_users do
    Repo.all(User)
  end
end
and then imagine that you have a Web API controller that uses that code like the following:

defmodule AppWeb.UserController do
  def index(conn, _params) do
    users = App.Users.list_users()

    json(conn, users)
  end
end

This is absolutely fine and works great. However, for rich client-side applications it's often the case that you need a lot of flexibility over the data that is returned. For example, in a couple of projects I have worked on, we use JSON:API and allow for filtering, nested filtering (in our example, imagine a user is associated with a company and you want your user listing filtered by company name), sorting, and pagination.

As soon as we get into this realm, things start to get tricky in my opinion, because we want to keep the context and the list function isolated without exposing too many details of the inner workings (in this circumstance, that it uses Ecto). However, we now need to support multiple layers of functionality for listing users if we want our web API to offer a rich level of functionality. If you try to keep all of that within the context function, you might end up with the following:

defmodule App.Users do
  def list_users(params) do
    filter_params = Map.get(params, :filters)
    pagination_params = Map.get(params, :pagination)
    sort_params = Map.get(params, :sort)

    # ...reduce over filter_params and use Ecto composability to build the query up

    # perhaps pattern match on sort_params: if none has been passed through,
    # do a regular Repo.all; otherwise apply limit and offset etc.
  end
end

All of a sudden that function has become very general and very abstract in what it can and can't do. Not to mention that we don't want the context function to know whether it's JSON:API, GraphQL, etc. calling it, so we will likely end up translating the request parameters into a general format that the contexts can understand (with support for like, greater than, less than, etc.). This is crucial because if another part of the application, such as a background job, needs to call list_users, it won't be natural to pass it parameters in the form of a JSON:API request.

It feels like we have to put a lot of plumbing in to create that separation between the two while still offering a rich web API to the client, to the point where we have needed to define a query language to pass to the list function (we would also need a layer that converts the request params into that form).
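To make that conversion layer concrete, here is a minimal sketch (the module name, operator set, and param shape are all hypothetical, not from any particular library): it normalizes JSON:API-style request params into a neutral list of {field, operator, value} tuples that a context could accept without knowing anything about the web layer.

```elixir
defmodule AppWeb.FilterParams do
  @moduledoc """
  Hypothetical translation layer: converts JSON:API-style filter params
  into a neutral {field, operator, value} list, so the context never
  sees web-specific conventions.
  """

  @operators %{"eq" => :eq, "like" => :like, "gt" => :gt, "lt" => :lt}

  def to_filters(%{"filter" => filters}) when is_map(filters) do
    Enum.flat_map(filters, fn
      # %{"name" => %{"like" => "Ann"}} becomes {:name, :like, "Ann"}
      {field, ops} when is_map(ops) ->
        for {op, value} <- ops, Map.has_key?(@operators, op) do
          {String.to_atom(field), @operators[op], value}
        end

      # %{"name" => "Ann"} defaults to an equality filter
      {field, value} ->
        [{String.to_atom(field), :eq, value}]
    end)
  end

  def to_filters(_params), do: []
end
```

A background job can then call the context with the same neutral tuples directly, without ever constructing a fake web request.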

This approach started to seem so overblown that a few months back I ended up writing a library that takes request parameters in the JSON:API format and builds an Ecto query you can execute, leveraging Ecto named bindings. However, after reading more into context design and general design principles, it does feel like that is essentially coupling your web API to your database and Ecto.

I would be interested to see what people have done in these circumstances, how people believe you should approach such design considerations.


I’m not following how you think Phoenix contexts increase coupling. It seems like they make sense because they decrease coupling. That inherently involves abstraction: list_users abstracts the Ecto API (which is itself already an abstraction over SQL or whatever you’re using as a Repo), and this abstraction has made your list_users function a bit complex.

In a Rails app you’d probably use a new class to handle the complexity. In Elixir I would start by using arity and pattern matching to keep the logic separated within the context, rather than using a single function. If necessary, you could add another module to your API context.

def list_users, do: User |> list_users()
# build_filtered_query could also go in the API context so this always works on a query
def list_users(%{filters: filters}), do: filters |> build_filtered_query() |> list_users()
def list_users(query), do: query |> Repo.all()

If you had to do a lot of that for a lot of schemas I would probably consider a macro.
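To illustrate what such a macro might look like (a hypothetical sketch, not a drop-in implementation): here the "data source" is a plain list passed at use-time so the example is self-contained; a real version would compose Ecto queries per schema instead.

```elixir
defmodule Listable do
  @moduledoc """
  Hypothetical macro sketch: injects list/0 and list/1 into a context.
  The source is an in-memory list here to keep the example runnable;
  a real version would build and run Ecto queries.
  """
  defmacro __using__(opts) do
    quote do
      @listable_source unquote(opts[:source])

      # No filters: return everything
      def list, do: @listable_source

      # A map of filters: keep items whose fields match every filter
      def list(filters) when is_map(filters) do
        Enum.filter(@listable_source, fn item ->
          Enum.all?(filters, fn {key, value} -> Map.get(item, key) == value end)
        end)
      end
    end
  end
end

defmodule App.Users do
  use Listable,
    source: [
      %{name: "Ann", admin: true},
      %{name: "Bob", admin: false}
    ]
end
```

Each schema's context then gets the same clauses for free, e.g. `App.Users.list(%{admin: true})` returns only the matching entries.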

I use the following pattern…

# context
def list_users(queries \\ & &1) do
  User
  |> queries.()
  |> Repo.all()
end
# controller

Accounts.list_users(fn query ->
  query
  |> Accounts.include_user_profile()
  |> Accounts.filter_non_executive_users()
  |> Accounts.filter_users_by_company(company)
  |> Accounts.order_users_by_first_name()
end)

All of the needed queries are publicly exposed from the context. This is very explicit, dependency-free, and dead simple to find the call sites where you might be using a specific query.

I previously experimented with a more dynamic method via a package, but found it to be a little more difficult to maintain and more magic than needed. That dynamic method is wrapped up in the TokenOperator package. There is a good bit of discussion on some of the things you mention in the thread for TokenOperator and another for the QueryBuilder package.


Nice and straightforward. The only suggestion I have is to import Accounts, either at module level or at function level, to save some typing.


I’ve tried to avoid importing in a lot of cases for explicitness/searchability, but import is totally an option to slim down those queries.


Thanks for your input. I probably didn’t word it well, but I agree contexts decrease coupling, not increase it. If you are referring to the part where I mention the JSON-to-Ecto query builder library I built, what I meant was that the library I created couples the web API to Ecto and the database.

In terms of your solution, it makes sense to me and it’s how I’ve approached it in the past. The part that has felt slightly icky is complex filtering, for example operator types (gt, lt, eq, etc.) and nested filtering: you end up with the context function having to parse a fairly complex query, which sometimes seems a bit bloated for what’s trying to be achieved.

If taking in dynamic query parameters is more the rule than the exception for your app, you might also take a look at packages like filterable, filterex, inquisitor, ex_sieve, and rummage_ecto. I haven’t had the need for them, but they cater to that sort of thing.


Very informative list, thank you!

Thanks for your feedback, that’s a nice pattern. I do like the fact that it makes a query extensible without exposing the inner details of the list_users function.

I think my only criticism would be that, while it doesn’t expose the Repo, the callback function is still essentially telling the caller that we are using Ecto queries in the inner workings of the function.

Perhaps I’m reading too much into things, but to me the ideal solution would be one where list_users (or any context function) allows extension through some kind of parameter format while the internals remain hidden from the caller. We could parse the parameters into an Ecto query today, but if the function later switched to a different method of data retrieval, say a call to another API, it still wouldn’t matter to the caller; the function would simply translate the parameters into whatever retrieval method it uses.

It seems like the libraries you mentioned achieve this. In fact, I’ve seen the QueryBuilder library before; it’s probably something I will look at further.



My TokenOperator package includes that extra level of abstraction that you might be looking for. I like the resulting clean interfaces, but it didn’t really improve my day-to-day clarity and maintenance of the codebase. Here are some goals of that project that might jibe with a library you use or something you write.

I would only note that the aforementioned package-less pattern is, at its core, just a context function that takes an optional argument. That argument is expected to be a function (anonymous or otherwise) that takes one argument (an Ecto query or otherwise). You can do anything in that function. In practice, it has always been an Ecto query for me, but there is nothing precluding you from injecting something else there, to the extent it provides you with an interface you like.
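To make that point concrete, here is a self-contained sketch in which an in-memory list stands in for the Ecto query (the module and data are hypothetical): the interface is identical to the Ecto version, only the thing being transformed differs.

```elixir
defmodule Accounts do
  # In-memory stand-in for whatever the context queries;
  # in a real app this would be an Ecto queryable.
  @users [
    %{name: "Ann", company: "Acme", executive: true},
    %{name: "Bob", company: "Acme", executive: false},
    %{name: "Cid", company: "Initech", executive: false}
  ]

  # Same shape as the Ecto version: an optional function transforms
  # the "query" (here just a list) before results are returned.
  def list_users(queries \\ & &1) do
    @users |> queries.()
  end

  def filter_users_by_company(users, company),
    do: Enum.filter(users, &(&1.company == company))

  def filter_non_executive_users(users),
    do: Enum.reject(users, & &1.executive)
end

# Callers compose exactly as before:
# Accounts.list_users(fn users ->
#   users
#   |> Accounts.filter_users_by_company("Acme")
#   |> Accounts.filter_non_executive_users()
# end)
```

The caller never learns whether the pipeline operates on an Ecto query, a list, or the result of a remote API call.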


This is an interesting pattern. I’ve actually been writing my context functions to return query objects and then just piping them like this:

# controller

# users_query/0 stands in for a context function returning an Ecto query
query =
  Accounts.users_query()
  |> Accounts.include_user_profile()
  |> Accounts.filter_non_executive_users()
  |> Accounts.filter_users_by_company(company)
  |> Accounts.order_users_by_first_name()

users = Repo.all(query)

This is an area where I don’t feel like I understand what is recommended by the Phoenix team/community regarding contexts. Your solution hides Repo inside the context, and I can see the advantage of that, I guess, though practically it hasn’t been much of a problem yet.

I doubt there even is a solution that could be recommended without knowing the context. Personally, I feel the need for highly customizable queries is a sign of too few functions. If I need the profile of a user, I would say querying Accounts.User |> Accounts.include_user_profile() is not the way; Profiles.fetch_for_user(user_id) should be. On the other hand, there are people using Absinthe APIs, which are basically the complete opposite: query whatever you want, as long as it’s somehow related.

There are tradeoffs to be made between more options on functions vs. more functions, separation vs. coupling, and even runtime concerns like limiting query counts.


When you have a requirement for flexible queries but want to use contexts for well-defined update operations, a CQRS approach might be worth considering.

By having separate code paths for queries and commands, you can optimise each along different dimensions.

You can start by separating the controllers for the index and show actions. Those will be primarily concerned with composing the appropriate Ecto query to serve the request efficiently.

The controllers for the other actions would be primarily concerned with validating parameters, possibly building a struct that represents the command, and passing that into your context module.
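As a minimal sketch of that command side (the module name, fields, and validation rules are all hypothetical): raw params are validated into a struct before the context is ever involved, so the context only accepts well-formed commands.

```elixir
defmodule Accounts.RegisterUser do
  # Hypothetical command struct: the controller turns raw request params
  # into this struct, and the context function accepts only the struct.
  @enforce_keys [:name, :email]
  defstruct [:name, :email]

  # Validate and build the command; reject malformed params up front
  def new(%{"name" => name, "email" => email})
      when is_binary(name) and name != "" and is_binary(email) do
    {:ok, %__MODULE__{name: name, email: email}}
  end

  def new(_params), do: {:error, :invalid_params}
end
```

The controller pattern-matches on `{:ok, command}` or `{:error, reason}` and only hands valid commands to the context, keeping parameter handling out of the write path.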

Unfortunately CQRS is often introduced in the context of event sourcing, but I think it can be valuable on its own.


I am exploring some ideas similar to @baldwindavid’s TokenOperator (I wasn’t aware the library existed when I started). The API is slightly different, and I’ve not tested it in production yet, so for real production usage TokenOperator is probably a better fit, but it looks something like this (Midas is a placeholder name for now).

defmodule Blog do
  def list_posts(opts \\ []) do
    opts
    |> query()
    |> Midas.add(:id, fn q, %{id: id} -> q |> where([p], p.id == ^id) end)
    |> Midas.add(:user_id, &user_id_query/2)
    |> Repo.all()
  end
end

I took some inspiration from Ecto.Multi and Absinthe, using Multi’s token approach and Absinthe’s resolver idea. I think the API looks rather clean while making it pretty obvious what is happening when you read the context code: you declare options and have resolvers that resolve those options (just like how Absinthe resolves fields).

My ideal usage would be to hide the query logic from controllers/Absinthe resolvers. I don’t think they should know what queries to run, so I disagree with @baldwindavid’s approach where the controller specifies the queries. That way controllers only talk to the context function, and don’t actually know how the data is resolved (whether via Ecto, an API, or something else).

My current usage is to define a lot of named functions (list_published_posts, etc.) and then reuse and compose Ecto queries; this just simplifies that into one interface.

Maybe this can give you some idea.


Had some time to work on it this week, so I went ahead and published it. Check it out:


Congrats on the project! However, I disagree with your base premise here. Having explicit functions is better for project clarity than “magic” that infers things and gives you a pretty one-liner; this is exactly what Rails does with ActiveRecord (though it has gotten much less magical over the years). Pulling in an external dependency to “solve” a non-problem creates even more complexity, as people now need to be aware of how to work with an extra dependency.

Ecto provides us with composable queries out of the box, which is already amazing; we don’t need to add another abstraction on top of that. If you need one, have a Blog.PostQuery module where you add all of your queries, and then compose with them in your Blog.Posts context. I do this extensively in this project: Base Module - Query Module. It feels very intuitive and, most importantly, explicit.



I totally agree that Ecto’s composable queries are amazing. In fact, I still use them with Condiment now, but I don’t agree that Condiment is “magic”.

It is considered magic in ActiveRecord because it hides the actual implementation detail from you. As a user, it’s great because it’s simple, but then it’s a headache trying to figure out what exactly gets run. With Condiment it is extremely obvious what each field resolves to, because you explicitly whitelist a list of options (visible in the context module) and you explicitly pass it a resolver (inline or as an anonymous function). As a user, you just request the things you want, and if you ever want to figure out what actually gets run, you can look at the context module and easily see which conditions ran.

I understand it’s not to everyone’s taste, and a huge reason I love Elixir so much is exactly its explicitness compared to Ruby/Rails, but like I said, I don’t think Condiment is magic at all (I consider magic to be something where you need to jump through multiple files just to understand how the data is flowing, requiring a lot of context at each step; so mostly things like macros). If anything, I think of Condiment as more of a pattern.

Thanks for the examples! I really like the idea of a query module. Right now in my projects it’s mostly a bunch of *_query functions in the context module; with your approach of a query module I would be able to remove the _query suffix and still communicate the intent well. Will be using this idea at some point :slight_smile:


I usually just use Enum.reduce. E.g., for the example you used in your README:

def list_posts(opts \\ []) do
  Enum.reduce(opts, posts_query(), fn
    {:featured, featured}, query -> featured_query(query, featured)
    {:user_id, id}, query -> by_user_query(query, id)
    _, query -> query
  end)
  |> Repo.all()
end

Yup, like I pointed out in the README, it’s basically a glorified Enum.reduce with some slight differences, for example:

  • with Enum.reduce, the reduction runs in whatever order the user specified the options
  • you get some niceties, like validating that the options the user passed in are valid
  • I’d argue that the API is a bit better (your function retains the same shape, rather than having to rearrange everything to fit Enum.reduce)
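The ordering point in the first bullet can be seen with a trace-collecting reduce (a self-contained sketch; `trace_opts` is a hypothetical name):

```elixir
# With Enum.reduce, steps are applied in whatever order the caller
# passed the options, which matters for order-sensitive operations
# (e.g. sorting before limiting vs. limiting before sorting).
trace_opts = fn opts ->
  Enum.reduce(opts, [], fn {key, _value}, steps -> steps ++ [key] end)
end

trace_opts.(featured: true, user_id: 1)
trace_opts.(user_id: 1, featured: true)
```

The two calls produce the step lists in opposite orders, whereas a library that declares options on the context side can fix the application order regardless of how the caller writes them.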

The package itself is very small!

For now I’ve ported my app to use Condiment, and I really like it.