On structuring and configuring a web application project in Elixir

I’m currently developing a web application and am wondering if I’m heading in the right direction.

I decided to opt for Dave Thomas’ approach, where each application resides in its own directory and all dependencies between them are linked through paths, e.g.:

# mix.exs
defp deps do
  [
    {:accounts, path: "../accounts"},
    {:mailer, path: "../mailer"}
  ]
end

The dependency tree looks like this:

web
├── mailer (../mailer)
└── accounts (../accounts)
    └── db (../db)

The first thing I’m unsure of is the separation between accounts and db. As the project grows I’ll need new modules, e.g. products, which would be a new dependency of web and would also depend on db.

web
├── mailer (../mailer)
├── accounts (../accounts)
│   └── db (../db)
└── products (../products)
    └── db (../db)

One of the reasons I bought into this method of dependencies is that it strives to make a module reusable. In this case, accounts and products would not be reusable applications on their own.

  1. Taking into account the previous statement, is this still a good approach to structure the application, or should it be made simpler?

Another issue I encountered was configuring the application. I want to test each module individually, which leads to many applications needing the same configuration. I opted for configuring each application within its own folder and importing that config in any application that depends on it.

Here is an example of that: accounts loading config from db

# /accounts/config/config.exs
use Mix.Config

import_config "../../db/config/config.exs"

And

# /accounts/config/test.exs
use Mix.Config
import_config "../../db/config/test.exs"

  1. Would this be considered good practice for configuring a project like this?
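For concreteness, the imported db config might look something like this (the `Db.Repo` module name and the settings shown are my assumptions, not taken from your project):

```elixir
# /db/config/config.exs (hypothetical contents)
use Mix.Config

config :db, Db.Repo,
  adapter: Ecto.Adapters.Postgres,
  database: "myapp_dev",
  hostname: "localhost"
```

With that in place, `import_config "../../db/config/config.exs"` in accounts pulls the same `:db` settings into the accounts build, so each app can be tested on its own.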

Any insights will be appreciated 🙂

Not everyone agrees, but I think having separate OTP applications applies when your storage or deployment lifecycle differs between the different applications. It does not offer any additional abstraction or information hiding beyond what the module system gives you.

Examples: if your mailer would almost make sense as a third-party or open-source library, or as a component that is developed by a different team in your organization.

I think having different applications access the same relational database schema is misguided; I don’t see a benefit to it.

What we do in our application is have multiple levels of context modules. Account and Product are contexts; they may have more specialized sub-contexts, but this is just represented as a normal module tree. Communication across contexts should happen at the root level. If you just look at the modules and functions involved, this is pretty much what you end up with in an umbrella project anyway.
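A minimal sketch of that layout, with module and function names invented for illustration: contexts and sub-contexts are just a module tree, and cross-context communication goes only through the root modules.

```elixir
defmodule MyApp.Accounts do
  # Root context: the only Accounts module other contexts should call.
  alias MyApp.Accounts.Registration

  def register_user(attrs), do: Registration.run(attrs)
  def get_user(id), do: {:ok, %{id: id}}  # stubbed for illustration
end

defmodule MyApp.Accounts.Registration do
  # Sub-context: internal to Accounts, never called from outside it.
  def run(attrs), do: {:ok, Map.put(attrs, :id, 1)}
end

defmodule MyApp.Products do
  # Another root context; it talks to Accounts only via its root module.
  def products_for_user(user_id) do
    with {:ok, user} <- MyApp.Accounts.get_user(user_id) do
      {:ok, [%{name: "widget", owner: user.id}]}
    end
  end
end
```
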


I’ll agree, with perhaps a bit stronger a statement, having been there: it’s a Bad Idea™.

If you don’t want to take my word on that, watch just about any video on the “microservices” architecture. Parallels can be drawn between separate OTP applications and “microservices”. Proponents of that architectural style are almost universal in cautioning would-be adopters to avoid allowing the microservices to “integrate through the database”. It leads to a type of coupling that defeats the purpose of the strong separation (and replaceability) granted by having separate applications/microservices.

Actually, the way I think of it: when you have one relational schema, you have one application, no matter what you call the pieces of it. OTP applications don’t have to have the same tradeoffs as microservices because they are just functions in modules that call other functions in modules. But for this same reason, I think having “separate” applications is meaningless if that is all you are doing; it just boils down to the architecture of your module hierarchy. Now, if you want to introduce coarse-grained message schemas into the middle of this, then I’d agree with your concerns: you’d have the drawbacks of both a monolith and a microservice architecture. I didn’t think that was what the OP is talking about, though.

I think we can agree not to delve into that (it’s been heavily discussed on this forum already), and rather discuss whether my setup is a good implementation of this view.

Thanks for your input. This was a red flag in my mind and why I stopped to think. I got the idea from a blog post, but probably applied it wrong.

I’ll give some context first. The mailer module has dependencies on emailing libraries, but also contains templates.

This one I’m pretty happy about being its own application. Although I’m not using it from any module other than web right now, I might want to use it in, e.g., a scheduled task that sends regular mails. I think it makes sense not to couple this with either the db or web application.

I checked out what the Phoenix generator makes for contexts, like you described, and that makes a lot of sense. It’s actually what I’m doing, only I made multiple applications instead of contexts. Thanks for sending me in the right direction.

@easco your concerns are valid, although this is not what I was talking about / intending, like @jeremyjh suggested.

@kanonk IMO going as far as having the contexts in a separate app is a bit much, as the other two explained. (Although to be fair, you used paths and not sub-apps in umbrella so my point might be ill-aimed. Sorry if that’s the case.)

That being said, I usually do prefer to have my storage code and config in a separate app (storage) and make higher-level modules and functions dealing with retrieving and modifying state, in yet another app (domain). Then you can have contexts in your Phoenix / Absinthe / something-else apps use the domain app which will in turn use storage.
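As a sketch of that layering (all module names are hypothetical): the web-facing contexts call the domain app, and the domain app is the only layer that touches storage.

```elixir
defmodule Storage.Users do
  # "storage" app: the only place that would talk to Ecto/Postgres in
  # reality. Stubbed here with a constructed map for illustration.
  def fetch(id), do: {:ok, %{id: id, email: "user#{id}@example.com"}}
end

defmodule Domain.Accounts do
  # "domain" app: higher-level functions for retrieving and modifying
  # state. Phoenix / Absinthe contexts call this, never Storage directly.
  def get_account(id) do
    with {:ok, user} <- Storage.Users.fetch(id) do
      {:ok, %{id: user.id, contact: user.email}}
    end
  end
end
```

A controller would then call `Domain.Accounts.get_account(id)` and stay oblivious to how the data is actually stored.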

I am also somewhat disagreeing with @jeremyjh here because making separate apps inside an umbrella is my sanity check; I use it to make sure I don’t leak dependencies – but I don’t abuse it; I only use it to separate topical apps, not go the full microservices route (which at this point is well accepted to be a sub-optimal approach and I agree with that). As for this posing potential deployment hurdles then eh, I am not so sure; setting up the initial deployment is usually a pain anyway so once that’s done you rarely have to think about it again unless something fundamental in your project (or cloud provider) changes.


In short, separate apps are a good technique both in terms of semantic boundaries and also as a way to ensure no leaky dependencies. That’s how I practice it and I am pretty happy with the results so far. This is including mid- and long-term maintenance of Elixir projects by me as well.

Thanks for chiming in.

After the first discussion I settled for this kind of structure:
web
├── mailer (../mailer)
└── domain (../domain)

where domain has both the schemas and the contexts built on those schemas, and web is just an interface.

I like your approach, I might do this instead!

Can you expand on this a little bit? Do you just mean that it ensures there are literally no direct calls to a dependency such as postgrex in your domain app? The dependency is still in the call stack though, right (not separated by messaging)?

Yes, but surely every caller must get to the callee eventually? I was not discussing a high-level SOA architecture. My issue isn’t that the call stack contains the dependency; what I am trying to prevent is the Phoenix controllers / Absinthe GraphQL endpoints directly calling Ecto (or any data-mapper library).

As a project scales up, you cannot guarantee you will always persist or even query your data the same way as before. Having your endpoints call the domain function Card.add_line_item – which in turn calls functions that use Ecto / Redis / your homegrown cache / Mnesia / anything else – is a sane and low-effort solution to reduce friction as the project needs to evolve. Who wants to modify 50 controller functions when the time comes to introduce a caching layer or go the dual-storage route?

I mostly have mix xref in mind here – when you need to inspect your call graph before refactoring or introducing new features, having a narrower surface area between each pair of apps helps a lot.

But how does a separate app help ensure this is the case? You can always call transitive dependencies directly.

This is just a case of internal team culture and practices – and prominently, of code reviews and approval processes. If my endpoint apps (Phoenix controllers, GraphQL resolvers, etc.) only use my domain app – via in_umbrella: true, for example – and do not use the app with the direct storage helpers in it (Ecto et al.), then that’s a very clear sign for the programmers not to take shortcuts.
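For instance, the deps of the endpoint app might list only the domain app (app names are assumptions), making the absence of a direct storage dependency visible at a glance:

```elixir
# apps/phoenix_controllers/mix.exs — sketch of deps/0
defp deps do
  [
    {:domain, in_umbrella: true}
    # deliberately no {:storage, in_umbrella: true} here:
    # controllers must go through domain
  ]
end
```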

Ultimately, I do not think there is a way to fully enforce clean boundaries. It’s up to us to facilitate practices that make our work more predictable and productive. The pursuit of academic purity is something I wish we had more time for but as it is, the modern IT business is heavily against it.

(EDIT: This reminds me that a pre-commit hook that makes sure your phoenix_controllers app does not call anything from your storage app – via grep’ing mix xref – is something I wanted to do for a long time.)
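A crude version of that check, using a plain source grep instead of mix xref so the sketch stays self-contained (the app path and the `Storage.` module prefix are assumptions):

```elixir
# check_boundaries.exs — hypothetical pre-commit check: fail when the
# controllers app references the storage app's modules directly.
forbidden_prefix = "Storage."

offenders =
  "apps/phoenix_controllers/lib/**/*.ex"
  |> Path.wildcard()
  |> Enum.filter(fn file ->
    file |> File.read!() |> String.contains?(forbidden_prefix)
  end)

if offenders == [] do
  IO.puts("boundaries OK")
else
  IO.puts("direct storage calls found in: #{Enum.join(offenders, ", ")}")
  System.halt(1)
end
```

Run it from the umbrella root (`elixir check_boundaries.exs`) in a pre-commit hook; swapping the grep for a parse of `mix xref graph` output would make it more precise.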

Then you don’t disagree with me when I said:

I mean, if you just want to keep your modules in separate directories to make it easier to enforce coding standards, you can. There is no rule that they all have to be in lib; just add the directories to your compilation path. What I think is more important is how those modules collaborate with each other, and making them separate “apps” imposes no restrictions on that. It doesn’t draw a bright line around your modules, but it does introduce certain complications around sharing non-code resources, and it seems like there is a new question on the forums every week about how to work around that.

All true. It is mostly related to human perception IMO; if things are separated into “apps”, “bundles”, “module collections”, “class libraries”, etc., then we humans think of that bundle as a semantically isolated unit – which is true often enough, but many times it isn’t, because it’s simply a device to make the life of the code organizer a bit easier.

To reiterate part of my comments above: for most devs I’ve ever met – even the juniors and the interns – it instinctively makes sense not to breach boundaries where there is a small mechanical hurdle to doing so. Thus I try hard in my work to exploit such brain bugs (or hacks, if you will) in order to make people play nicely with my code. It saves time, saves unnecessary discussions, and works most of the time. And when it doesn’t, I can explain why I do things the way I do – in 2 minutes.

In my experience it definitely does draw said bright line, but we come from different backgrounds and societies, so that’s surely a factor in it as well.

As for sharing non-code resources, that’s a problem most programming languages still haven’t solved reliably. I’d go radical and make an entirely new OS that has an OS-wide key/value store that can be a file / string / number / whatever and be it read-only 99% of the time (and have additional security attached to it, like who can even view it). I see no reason to maintain configuration or static assets the way we do it right now – deployment with such a new system would be brain-dead easy… But anyway, we can dream.

In Elixir’s case the priv/ directory IMO complicates more things than it solves. But I have no better idea so I am not bad-mouthing it. Often, it’s a good enough mechanism.