Phoenix is not your Application - questions

Hi All,

I just watched @lance’s talk from ElixirConfEU and I want to talk about it :smile:

Lance discusses the idea of putting all of your domain logic in a vanilla OTP application, and then bringing that in as a dependency to a separate Phoenix application which is merely an HTTP interface for your existing application. I really like this idea but I had a few questions.

  1. My understanding is that Phoenix creates a new process for each HTTP request so that they can all run in parallel. If we keep all of the domain logic in the Phoenix controllers/schemas/changesets, we take full advantage of this parallelism. My understanding of the talk was to move the application logic to a separate GenServer, which is a single process. Won’t we then have multiple Phoenix processes waiting in line to get a response from the single GenServer doing all of the work? Maybe in the real world (beyond an example app) we would need to create some sort of pool, or dynamically spin up workers to handle each request?

  2. How does it make sense to organize the code in the OTP Application? Would you want a separate GenServer for each domain model?

I’d appreciate any thoughts or guidance. Thanks!


Lance’s example happened to use a GenServer to hold state, but it could just as well have been a plain old module without state. So your first tradeoff isn’t relevant to splitting your domain concerns into an umbrella apart from your web concerns, because you can still call directly into the domain module, e.g. MyApp.Accounts.list_users(). That could be a function in the my_app application that is called from your my_app_web application, say in a controller, and internally it would look just like any code you’d bundle in the same application today. It might ask an Ecto Repo to fetch data, or hit disk, or anything else, but it can all run in the caller’s process. Make sense?
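A minimal sketch of what this looks like in code (the module names and data are hypothetical, and a real `list_users/0` would likely call an Ecto Repo rather than return a literal list):

```elixir
# Domain side, in the my_app application: a plain module, no process involved.
# (A real list_users/0 would likely be MyApp.Repo.all(User) instead.)
defmodule MyApp.Accounts do
  def list_users do
    [%{id: 1, name: "Alice"}, %{id: 2, name: "Bob"}]
  end
end

# Web side, in the my_app_web application: the controller (reduced here to a
# bare function) calls straight into the domain module. All of the work runs
# in the caller's process, i.e. the per-request process Phoenix already spawned.
defmodule MyAppWeb.UserController do
  def index do
    MyApp.Accounts.list_users()
  end
end
```

There is no process boundary here at all; the umbrella split is purely a code-organization boundary.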

  2. How does it make sense to organize the code in the OTP application? Would you want a separate GenServer for each domain model?

No. Some of your domain will definitely be modeled in processes, because you’ll need to hold state, handle failures, and so on, but this decision, like the one above, has no bearing on splitting your applications into an umbrella.

Caveats: the times when processes may come into play, even for seemingly “pure” domain concerns, are when you want to do service discovery or call into a process on the cluster running code that isn’t in your current VM. In such cases, you could ship only a “client” module on the web side that messages a discoverable process somewhere on the cluster. I’m exploring these ideas and how they’ll play into service discovery in Phoenix, but I would say such concerns aren’t relevant to your day-to-day decisions about umbrellas at this stage.


Thanks, Chris!

Yes, that makes perfect sense. I think I got caught up imagining that there needed to be this process barrier between the applications to properly separate concerns, but I see now that this is silly. Processes are really only needed to hold state, and if the state is predominantly in the database, then the Ecto Repo is the stateful process. If there are small pieces of ephemeral state that can be kept in memory, I would use an Agent or GenServer for each of those individually as needed. Does that sound right?
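For that kind of small, ephemeral in-memory state, an Agent is often all you need. A toy sketch (the module name and what it stores are made up for illustration):

```elixir
defmodule MyApp.RecentSearches do
  use Agent

  # Holds a small, ephemeral list of recent search terms in memory.
  def start_link(_opts \\ []) do
    Agent.start_link(fn -> [] end, name: __MODULE__)
  end

  def add(term) do
    # Keep only the ten most recent terms, newest first.
    Agent.update(__MODULE__, fn terms -> Enum.take([term | terms], 10) end)
  end

  def all do
    Agent.get(__MODULE__, & &1)
  end
end
```

Each piece of state like this gets its own small process, supervised alongside the rest of the domain application, while stateless domain logic stays in plain modules.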

Can you point me to any good resources on how to architect applications using the OTP building blocks? I know there are some Erlang books with a heavier focus on OTP, but I’m trying to put off diving into Erlang while I’m still learning Elixir.

Your Keynote was also great, Chris! I really enjoyed the deep dive into CRDTs and the alternate use cases for Phoenix PubSub like service discovery.


The main ‘problem’ I haven’t been able to wrap my head around is where Ecto’s role (or that of any other persistence layer) lies in the ‘Phoenix is not your application’ philosophy.

Of course it is possible to create applications that do not provide a persistence layer at all (as Joe Armstrong will no doubt remind you again and again :smile: ), but in many cases one proves useful, because you either have more data than you can keep in memory at one time, or you only want to selectively use some of the data you previously stored, and querying it in a relational database is faster than iterating through it in plain Erlang/Elixir.

Phoenix integrates tightly with Ecto through phoenix_ecto. It seems that when one moves the persistence layer to its own part in an umbrella application, the advantages of Phoenix working with Ecto are lost.

In Phoenix’s philosophy, as far as I understand it right now, it seems like calling your database Repo directly should only be done from the controller layer, so as not to couple your models tightly to the database. How would this be structured when the database-handling code is completely outside of Phoenix?



Phoenix integrates tightly with Ecto through phoenix_ecto. It seems that when one moves the persistence layer to its own part in an umbrella application, the advantages of Phoenix working with Ecto are lost.

Which part doesn’t work for you? From my limited experiments, it works pretty well so far: my “domain logic” app exposes schemas and changesets to the web layer, and because I use changesets I can use them in forms, which to me is the main benefit of the phoenix_ecto package.

In Phoenix’s philosophy, as far as I understand it right now, it seems like calling your database Repo directly should only be done from the controller layer, so as not to couple your models tightly to the database. How would this be structured when the database-handling code is completely outside of Phoenix?

To follow Chris’ example above, I think that instead of calling Repo from controllers, e.g. Repo.all(User), you’d get the data from the other app: MyApp.Accounts.list_users(). MyApp.Accounts.list_users() would presumably call MyApp.Repo.all(User) under the hood, if it happens to use Ecto for persistence.


It works very well as it is, but when I move the persistence layer into a separate umbrella application (such as inside the business logic app), is it still okay to, for instance, pass changesets around?

:thumbsup: This is a great example. Thank you!


Right now we’re building a system that consists of 2 applications (one for managing users and one for general domain stuff) that use 2 separate databases, and one Phoenix application to expose an external API. It works quite well. The Phoenix application does not call Repo and does not care how the data is distributed; all it cares about is modules similar to what Wojtek has shown, which expose functions to load and update accounts.


What is the best practice? Do we develop them using the --umbrella flag, or do we simply have them as separate OTP applications and include the dependency as Lance showed, using the :path option?

@wojtekmach that is exactly what I have been thinking about. Abstracting out my modules with clear separation.


I think the idea behind the presentation is great. I’m migrating an old system to Phoenix/Elixir and we are starting by migrating parts that had some performance issues. The project is currently an umbrella project: in the umbrella there is a Phoenix application, and we extracted the domain logic into a separate project in the umbrella.

At first we were just calling our domain logic as a library (I don’t know if that’s the correct way of saying it; we just called the other project’s code from our Phoenix project). Our domain logic does some database/cache queries and processes payments through an API. We did some load testing and everything was fine. After some time we converted the domain project to an OTP application. We saw a degradation in performance, and after some debugging we found that the bottleneck was in the GenServer. The issue is that a GenServer processes one message at a time; for now we changed it back to being called just as a library.

I don’t have much experience, but I believe that if we want to go forward with the OTP application approach, we would have to use something like poolboy to have more than one GenServer available to handle the load on our app. Any other suggestions are greatly appreciated. :slight_smile:

I just wanted to share my experience here so people know that if you funnel your domain logic through a single GenServer, you might lose the concurrency that Phoenix gives you for free (if I’m not mistaken, it runs each request in a separate process), and to get suggestions from more experienced Elixir/Phoenix people on how to handle this better.
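For reference, a pool along these lines could be sketched with the poolboy library roughly as follows. This is a non-definitive sketch: the pool name, worker module, and sizes are all hypothetical, and it assumes the :poolboy dependency in your mix.exs.

```elixir
# Sketch only: assumes the third-party :poolboy dependency and a hypothetical
# MyApp.PaymentWorker GenServer. This goes in the supervision tree of the
# domain application.
pool_config = [
  name: {:local, :payment_pool},
  worker_module: MyApp.PaymentWorker,
  size: 10,         # ten workers can process payments concurrently
  max_overflow: 5   # allow up to five extra workers under load
]

children = [
  :poolboy.child_spec(:payment_pool, pool_config, [])
]

# At the call site, each request checks out its own worker instead of
# everyone queueing behind a single GenServer:
:poolboy.transaction(:payment_pool, fn pid ->
  GenServer.call(pid, {:process_payment, params})
end)
```

That said, as the replies below note, a pool is only needed when the work genuinely has to live in processes; stateless logic called as a library doesn’t have this problem in the first place.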


OTP uses the term “application” in a different way than you are probably used to. I found it most useful to mentally substitute the word “component” when I was first learning it. A library application is a valid OTP application. GenServer is an abstraction you have available, but you should only use it for the genuinely concurrent activities of your system. The bottleneck you found is real: don’t use a GenServer like a class or an object.

I guess what I’m saying is that your library is (or can be) an OTP application. Don’t take too limited a view of what an application is. Also, it’s not magic – inappropriate design for the problem at hand can/will cause performance bottlenecks. It’s a good thing that Phoenix has you covered in managing the appropriate concurrent activities of your system. Phoenix is an OTP app itself – one that maps well to your problem.
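A small runnable illustration of that bottleneck: the same 100 ms of “work” done behind a single GenServer serializes its callers, while calling it as a plain function runs in each caller’s own process. The module and timings below are contrived purely for the demo:

```elixir
defmodule Slow do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, :ok, name: __MODULE__)
  def init(:ok), do: {:ok, :ok}

  # Every caller queues behind this single process.
  def handle_call(:work, _from, state) do
    Process.sleep(100)
    {:reply, :done, state}
  end

  # The same work as a plain function runs in each caller's own process.
  def work do
    Process.sleep(100)
    :done
  end
end

{:ok, _pid} = Slow.start_link([])

# Five concurrent callers through the GenServer: roughly 5 x 100ms total.
{serialized_us, _} =
  :timer.tc(fn ->
    1..5
    |> Enum.map(fn _ -> Task.async(fn -> GenServer.call(Slow, :work) end) end)
    |> Enum.map(&Task.await/1)
  end)

# Five concurrent callers of the plain function: roughly 100ms total.
{parallel_us, _} =
  :timer.tc(fn ->
    1..5
    |> Enum.map(fn _ -> Task.async(&Slow.work/0) end)
    |> Enum.map(&Task.await/1)
  end)
```

This is exactly the degradation described above: the process is the unit of concurrency, so putting all requests through one process removes the parallelism Phoenix set up for you.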


Ah, I thought that supervision/workers were necessary for something to be considered an OTP application. Thanks for the clarification. This comment gave me some insight into the issue.


I’m also interested in this question.

I’m working on a personal project, and I’m at the point where I have to start building the web interface.

As I understand it, my choices are:

  1. Go Lance’s route of developing both applications completely separately and then using the :path option to include my domain logic app in my web interface app.
  2. Build an umbrella project and use the in_umbrella option.

I do not know what the tradeoffs are. I’m specifically wondering what happens to my git commit history if I go with an umbrella project. It seems that the :path option would allow me not to worry about this question, but I do not know if I am just setting myself up for headaches when it comes to deployment. (Currently I’m not using GitHub or anything, just git, so I don’t want to use the :git option.)

Thanks folks!

The integration is the opposite of tight. :slight_smile: Phoenix defines protocols and Ecto implements those protocols; that’s literally the only thing happening in phoenix_ecto. So if you want to hook in anything else, you should be fine.

Even so, I believe it is fine for your context to return a changeset. For example, if you have MyApp.Account.insert_user(params) and it returns {:error, changeset}, the changeset is useful even if the caller were not using Ecto, because it contains all kinds of relevant information about your parameters and what exactly worked and what exactly didn’t: precisely the kind of thing you would yield in an error response for an API.
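A sketch of what that tuple-based contract can look like from the web side. Everything here is hypothetical, and the “changeset” is faked as a plain map so the example runs without Ecto; a real context would return an %Ecto.Changeset{}:

```elixir
# Domain side (sketch): the context validates input and returns {:ok, user}
# or {:error, changeset}. The "changeset" is a plain map standing in for an
# %Ecto.Changeset{} so this example is self-contained.
defmodule MyApp.Account do
  def insert_user(%{"name" => name} = params) when byte_size(name) > 0 do
    {:ok, Map.put(params, "id", 1)}
  end

  def insert_user(params) do
    {:error, %{params: params, errors: [name: "can't be blank"]}}
  end
end

# Web side: the controller only pattern matches on the tuple. The error value
# carries everything an API error response needs, whether or not the web
# layer knows anything about Ecto.
defmodule MyAppWeb.RegistrationController do
  def create(params) do
    case MyApp.Account.insert_user(params) do
      {:ok, user} -> {:created, user}
      {:error, changeset} -> {:unprocessable_entity, changeset.errors}
    end
  end
end
```

The web layer never calls Repo and never inspects the persistence mechanism; it only understands the ok/error tuple shape.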


Those options are largely equivalent. The :in_umbrella option is a shortcut for specifying the :path option, nothing more. I would personally choose the second option, though, because most likely you want to version those two things in the same repository. Otherwise you end up in a source control management kind of hell where you always need to commit, push, and sync two applications whenever you implement a feature.
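For reference, the two equivalent declarations might look like this in a mix.exs deps list (the app names are hypothetical):

```elixir
# apps/my_app_web/mix.exs, inside an umbrella project:
defp deps do
  [
    {:my_app, in_umbrella: true}
    # ...equivalent to {:my_app, path: "../my_app"}, which also works
    # between two standalone applications checked out side by side.
  ]
end
```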


Thank you @josevalim for the reply. Seems like the umbrella route makes sense.

I think the following two resources should also be included for anyone trying to learn how to take Phoenix (or really any application) and split it apart into logical domains.

  1. Acme Bank GitHub Repo by @wojtekmach - besides the invaluable code examples, it includes a link to slides as well as an upcoming video.

  2. Controller Control conference talk by @Gazler.

Thank you both - very valuable work!


The talk reminds me of the pattern of Hexagonal Architecture (also known as Ports and Adapters) as presented way back in 2008: build your domain stuff without dependencies on concrete databases or web frameworks (or any frameworks at all), then build adapters. Ecto and Phoenix could be used in such adapters.

Taken together with the DDD concept of Bounded Contexts - where you define what actually is your application domains, and an Ubiquitous Language - where you start naming modules, functions and files in the terminology of domain experts - this idea of separation makes a lot of sense.

I’m new to Phoenix and Elixir, but if my experience from other languages is a relevant indicator, I would say the overhead you pay in building and maintaining the framework adapter (the interface in the talk) is worth it if you have a moderately complex domain (say, 15+ models) or if you need to maintain the code for more than three years. In four years you might want to replace Phoenix with the new Elixir web framework Xineohp, and it shouldn’t have to be a lot of work (same goes for Otce).


New Pragprog Elixir & Phoenix book will be on sale tomorrow

Functional Web Development with Elixir, OTP, and Phoenix

Rethink the Modern Web App
by Lance Halvorsen

Well, this is going to be interesting!


@anders Completely agree with your thoughts here. I haven’t gone through your source materials (assuming Fowler is the source), but Scott Wlaschin does a great job of summarizing DDD, Bounded Contexts, and Ubiquitous Language in the context of functional programming.