How to determine contexts with Phoenix 1.3

The downside of this is that a lot of tools require belongs_to/has_many in order to do some things automatically, ExAdmin for example.

By the way, how do you preload things then to avoid N+1 queries? For example, if I have a list of posts with a field :user_id, :integer and I want to show each user's name in that list of posts.


With respect to the developers working on ExAdmin (and ActiveAdmin, which was the inspiration), these tools run completely contrary to building maintainable software. They encourage the tightest coupling, not only between parts of your own software, but to the libraries they are built on. If you are building a small CRUD application that you need a quick-and-dirty admin interface for, by all means, they’re useful for prototyping such a thing. But if you are building an application that drives your business and you let your architecture and techniques be dictated by the constraints of a prototyping tool… man, you are in for a world of hurt. It’s been a while since I looked at ExAdmin, so maybe it’s come along since then, but the last time I looked it had the same shortcoming as ActiveAdmin: an utter and complete dependence on Ecto’s associations.

I personally don’t really like using examples like a blog for illustrating an idea like contexts, because it’s such a simple modelling domain that any distinction between contexts is very contrived, which leads to questions like this one. (I’m not knocking you for asking it, btw; it’s a perfectly legitimate thing to want to do in that context.) Let’s say, then, that blogs are a smaller feature within a larger application and you want to separate an authentication context from a content context. One thing I would do here is create a separate schema backed by the same table, with only the relevant information (e.g. name, email) in it. Then you can create associations between this new version of a user (we’ll call it Author, backed by the users table) and the Post schema. However, unlike the Account version of a user (from the Auth context), you wouldn’t have changesets available to create a new author or even update the author record.
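One way the Author/Account split described above could look, sketched with assumed module names (the MyApp.* namespaces and Content.Post are placeholders, not from the thread):

```elixir
# Two Ecto schemas backed by the same "users" table. Auth.Account owns the
# changesets; Content.Author is a read-only view of the same rows that the
# Content context can associate with posts.
defmodule MyApp.Auth.Account do
  use Ecto.Schema
  import Ecto.Changeset

  schema "users" do
    field :name, :string
    field :email, :string
    field :password_hash, :string
    timestamps()
  end

  def changeset(account, attrs) do
    account
    |> cast(attrs, [:name, :email])
    |> validate_required([:name, :email])
  end
end

defmodule MyApp.Content.Author do
  use Ecto.Schema

  # same table, narrower view: only what the Content context needs
  schema "users" do
    field :name, :string
    has_many :posts, MyApp.Content.Post, foreign_key: :user_id
  end

  # deliberately no changeset functions: authors cannot be created or
  # updated from the Content context
end
```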

This isn’t the only way to preserve boundaries between contexts. I honestly feel like it’s cheating a little, because you’ve got them coupled at the database level, though this does preserve some independence between them. In a richer domain it would make sense to reach for other tools. But also, if your application really is just a blog, then you’re dealing with something that is A) quite simple, behaviorally, and B) straight-up CRUD. I probably wouldn’t reach for a lot of separate contexts for this use case.


The way I see contexts is pretty much the way I see microservices: they should be isolated from each other, and when you require something from another context, you should ask that context directly.

The idea behind contexts is great, but I have some fears about how it will shape the community. Contexts make total sense in a monolithic project (remember you can have monoliths in umbrella apps as well), but not where you potentially have a bunch of decoupled Phoenix projects, because each project would be its own context; you only keep them in the same repo for the sake of simplicity. In short, I see value in contexts for monoliths, for the sake of organization and intent (as Chris said in his talk), but not as much in the microservices landscape.

My fears go this way: how do we avoid building distributed monoliths? Or worse, how do we avoid building monolithic distributed systems? You see, while contexts allow us to have two different User schemas for different purposes (authentication and e-commerce), with different columns, which makes total sense to me, they also bring options on how to proceed with the database. Should we use a single table for all kinds of user data? Should we create separate tables namespaced by their intent (currently the Phoenix 1.3 generators do this)? Should we connect these using relationships? Should we use different databases for this?

In my humble opinion the message is good, but there are issues with the generators right now. Most of us build CRUDs every single day, and, let’s be fair, most of us use the generators to speed up building those CRUDs as well, just adding the missing pieces like authentication, authorization, etc. I believe this is where most of the posts with questions or critiques come from.

Phoenix is an amazing tool for building distributed systems and I believe that the new generators and contexts are a good step towards the future of the platform. Maybe if we get more flexibility on the newer generators it would solve most of the issues people are having right now.


The user_api @gmile mentioned reminds me a bit of the “fat models” of rails.

I am also a bit confused about how to structure things. It is not really clear whether we should define a context as the entry point to a certain type of data, or as the entry point to a feature, or both. The user_api would be the first, and a sales context the second.

The other point that is not clear to me is what to use to pass data around.

I mean, should it be:

my_user = UserAPI.get_by_email("")
my_plan = PlanAPI.free_plan

Sales.activate_plan(my_plan, my_user)
# or delegate more and let Sales query the other contexts?
Sales.activate_plan(:free, "")

Everyone who is building APIs or has multiple data sources has to solve this problem. In Ecto, it would be as straightforward as:

from(u in User, where: u.id in ^Enum.map(posts, & &1.user_id))
|> Repo.all()
|> Enum.group_by(& &1.id)

The result of those three lines is a map with the user id as key and the list of users for that id, which in this case is always a single entry, as value.
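A hypothetical continuation (the `posts` variable, the `Example` module, and the field names are assumptions): given the map with user ids as keys described above, each user's name can be attached to its post in a single pass, with no per-post query.

```elixir
defmodule Example do
  # posts: list of maps/structs with a :user_id field
  # users_by_id: %{user_id => [user]} as produced by Enum.group_by above
  def with_user_names(posts, users_by_id) do
    Enum.map(posts, fn post ->
      # each value is a single-entry list, as noted above
      [user] = Map.fetch!(users_by_id, post.user_id)
      Map.put(post, :user_name, user.name)
    end)
  end
end
```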


That’s precisely the goal. :slight_smile:

We need to remember that not everyone will use Phoenix to build a distributed system. However, I would say that the current contexts are a good stepping stone for building them: if you have trouble breaking your codebase apart into contexts in a single application, it is most likely that you will have trouble finding the boundaries in an umbrella project or in a distributed system.

It is also important to remember that generators, especially the ones under phx.gen.*, are learning tools. So don’t expect them to become more flexible. They will never get flexible enough to allow developers to generate all kinds of applications Phoenix can be used for.

I would say we are very much in agreement. It is a good beginning and we could do more but we also need to remember that not everyone wants or needs more.


Thanks to pattern matching in function definitions you could potentially have both V1 and V2. If you really want to follow the open/closed principle, though, you should use the last (or at least the second) variant, so you never have to update the Sales context when UserAPI or PlanAPI changes. I personally wouldn’t worry about it too much, as activate_plan/2 is most likely a pure function anyway (just pulling data out of the structs) and therefore easy to move or update.
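A minimal sketch of how one Sales module could support both variants through pattern matching. The Plan/User structs and the return values here are illustrative stand-ins, not the thread's actual API:

```elixir
defmodule Plan do
  defstruct [:name]
end

defmodule User do
  defstruct [:id, :email]
end

defmodule Sales do
  # V1: the caller already fetched full structs from the other contexts
  def activate_plan(%Plan{} = plan, %User{} = user) do
    {:ok, {user.id, plan.name}}
  end

  # V2: Sales receives plain identifiers and would resolve them itself
  # (e.g. by calling into PlanAPI/UserAPI internally)
  def activate_plan(plan_name, email) when is_atom(plan_name) and is_binary(email) do
    {:ok, {email, plan_name}}
  end
end
```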


@josevalim but that’s precisely my suggestion for umbrella projects: having generators tailored for those of us who want to build microservices (aka a bunch of Phoenix APIs inside an umbrella project). Maybe like a configuration thing, the same way we configure the usage of binary_ids.


Can you provide more specific feedback on what you need then? Because if you mean the ability to generate context in one app and web files in another, this is coming in the next RC. If you mean more than that, then rolling your own will likely be the way to go.


Sure Jose,

My critique towards the contexts is that when we build microservices the “context” is contained inside the service itself, so it’s just verbosity added. For monoliths I see value in contexts, as I explained earlier.

For example, if I have 5 phoenix applications inside my umbrella project

|-- apps
|   |-- auth_service
|   |-- ecommerce_service
|   |-- forum_service
|   |-- marketing_service
|   |-- backoffice_service
|-- ...

For applications tailored like this, I don’t see the value of contexts, because they are contained inside their own app/namespace. The older format is more tailored for these things.

My suggestion is toggling the generators - in a similar manner we configure binary id generation - at least for umbrella projects. These generators should be quite similar to the old ones, with the modifications tailored to the new project format.


Yes, this is coming in the next RC. You can disable generating the context, so you generate only the web part and implement the missing bits as you wish, possibly as a separate app. You can also run the generators inside the umbrella to generate an app with its own Ecto repository.


I simply wish the best for Elixir and Phoenix.

@josevalim I suggested in a separate thread that the Elixir and Phoenix teams create samples that can help people like us grasp these concepts. That is one thing that helps languages/frameworks like ASP.NET Core, Angular 2, etc.

If there are samples backing all the explanations you and @chrismccord have given, you will have to talk less.

Thanks for the upcoming RC. Any idea the release date?


Hi, I’m new here, just coming from Rails, but I saw the video talking about Phoenix 1.3 and was amazed by the new contexts concept. I am eager to build a little application with this.
One related thing I am confused about is where I should put the intermediate table. For instance, I want to create a chat application with a User and a Room. It should be an N-M association.

 |   |--user.ex
 |   |--room.ex
 |-- services.ex

And now I’m wondering where I should put my room_users.ex. I want to separate User and Room because User should be used in many other places. What should I do?


Well, it depends on how you want to approach it.

The first approach I see is to have tables namespaced by context, with no cross-context relations between schemas. So you would have two different tables for users: one in the services context and another in the account context. The tricky part is coordinating changes between them, which you could do using transactions, again passing through every context that has a user schema whenever you want to change the user.

The other way is to create a small GenServer module that does simple PubSub registration/listening: every time one of your users tables changes, it broadcasts a message with the relevant information, which is captured by the other PubSub GenServers and propagated into their own versions of users.

The first approach, albeit simple, will force you towards the second approach when you need to scale your monolith into microservices, and you’ll have lots of pain because you coupled your cross-context relations inside the same app. The second approach is more decoupled and, as I stated before, can be moved into its own service, with a separate database or whatever, without you caring much about what’s going on.

It’s a mixture of personal taste and what you’ll need. Do you think you’ll need to scale to millions of users? If yes, I would go with the second approach. If not, the first one is cheap and does the job as well.
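The first approach (transactional coordination) could be sketched with Ecto.Multi. All module and schema names here are assumptions: an email change is applied to both per-context users tables in one transaction, so neither copy of the user can drift from the other.

```elixir
defmodule MyApp.Coordination do
  alias Ecto.Multi

  # account_user and service_user are the same person loaded through the
  # Accounts and Services schemas respectively
  def update_email(account_user, service_user, new_email) do
    Multi.new()
    |> Multi.update(:account, MyApp.Accounts.User.changeset(account_user, %{email: new_email}))
    |> Multi.update(:service, MyApp.Services.User.changeset(service_user, %{email: new_email}))
    # if either update fails, the whole transaction rolls back
    |> MyApp.Repo.transaction()
  end
end
```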


I think your approach is pretty well tied to the Rails way of thinking here. What you’re trending towards is tightly coupled contexts, which is worse than just building a monolith.

For example, why are you storing them in the database? A user being in a room is inherently session-based—this is where you should maybe reach for a GenServer instead of writing to the database. If you want to store a list of rooms the user should be automatically logged into when returning, maybe consider a context for user preferences. Then a layer higher up can orchestrate between the preferences and the room subscriptions.

Secondly, why does the room need to know anything about the user, like its account information? Would simply telling it which IDs or nicks that are currently in the room suffice for it to do its job?

Try to take a step back and ask why it seems hard when you run into questions like this. Generally I find that struggling to figure out where a particular notion goes is a signal that I’ve gotten a boundary wrong.
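The "reach for a GenServer instead of the database" suggestion above could look something like this minimal sketch (the ChatRooms module and its API are hypothetical): room membership lives in process state, keyed by room, instead of in a room_users table.

```elixir
defmodule ChatRooms do
  use GenServer

  # state: %{room_name => MapSet of user ids}
  def start_link(opts \\ []) do
    GenServer.start_link(__MODULE__, %{}, opts)
  end

  def join(pid, room, user_id), do: GenServer.call(pid, {:join, room, user_id})
  def members(pid, room), do: GenServer.call(pid, {:members, room})

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_call({:join, room, user_id}, _from, state) do
    # MapSet makes joining idempotent: joining twice is a no-op
    state = Map.update(state, room, MapSet.new([user_id]), &MapSet.put(&1, user_id))
    {:reply, :ok, state}
  end

  def handle_call({:members, room}, _from, state) do
    {:reply, state |> Map.get(room, MapSet.new()) |> MapSet.to_list(), state}
  end
end
```

Note that this holds only ids, in line with the question above about whether the room really needs full account information.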


I really like the second approach. Personally, though, I am struggling to understand the GenServer stuff. I don’t know if you can write a sample on how PubSub can be achieved.

I read about bounded context yesterday and I was wondering the best way to implement it in Phoenix.

Thanks for your response

@smithaitufe I suggest you create a small elixir application and play with Supervisors, GenServers, GenStages, etc… :slight_smile:

@imetallica Thanks. I have started studying those topics now. GenServer is making sense now.

I don’t know if you can write a sample on how PubSub can be achieved.

I’m not sure if that’s what you need, but have you checked the Registry module in Elixir? Its docs have a note about how to use it as a local pub/sub system [0].

You can put it in a separate app under an umbrella and let processes from other apps subscribe to certain topics in it.


defmodule Test.PubSub do

  def start_link do
    Registry.start_link(:duplicate, __MODULE__)
  end

  def subscribe(topic) do
    Registry.register(__MODULE__, topic, [])
  end

  def unsubscribe(topic) do
    Registry.unregister(__MODULE__, topic)
  end

  def publish(topic, message) do
    Registry.dispatch(__MODULE__, topic, fn entries ->
      for {pid, _value} <- entries, do: send(pid, message)
    end)
  end
end

defmodule Test.PubSub.Application do
  @moduledoc false

  use Application

  def start(_type, _args) do
    import Supervisor.Spec, warn: false

    children = [
      supervisor(Test.PubSub, [])
    ]

    opts = [strategy: :one_for_one, name: Test.PubSub.Supervisor]
    Supervisor.start_link(children, opts)
  end
end

To subscribe to updates from the user app in a Phoenix channel, you would do something like:

defmodule Test.Web.UserChannel do
  use Test.Web, :channel

  def join("user:" <> name = topic, _params, socket) do
    {:ok, _owner_pid} = Test.PubSub.subscribe(topic) # subscribe to updates in "user:#{name}"
    {:ok, socket}
  end

  # and handle these updates
  def handle_info(:sacked, socket) do
    broadcast! socket, "sacked", %{}
    {:noreply, socket}
  end
end
Somewhere in the user app you would publish the message :sacked in case of a ban

Test.PubSub.publish("user:idiot", :sacked)

These are basically the examples from the Registry docs, and there are some more about other possible uses for it.

Note however that this pub/sub implementation is local to a single node. If you are using Phoenix, you might want to pick Phoenix.PubSub [1] instead, which is not node-local, so for the example above (with channels) you would probably use Phoenix’s pub/sub library.


Using Phoenix’s PubSub is better for this because it can broadcast messages between nodes, using Erlang’s pg2 module (the default adapter), Redis, or RabbitMQ. I think there’s a Kafka adapter as well, but I’m not sure.
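For the channel example earlier in the thread, the Phoenix.PubSub version of subscribing and publishing could look like this sketch (assuming a pubsub server named MyApp.PubSub is already started in your supervision tree):

```elixir
# in the channel's join/3, instead of Test.PubSub.subscribe/1:
Phoenix.PubSub.subscribe(MyApp.PubSub, "user:idiot")

# somewhere in the user app, instead of Test.PubSub.publish/2; this
# reaches subscribers on every connected node:
Phoenix.PubSub.broadcast(MyApp.PubSub, "user:idiot", :sacked)
```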