Any downsides to using the same table for multiple contexts?

I’ve really enjoyed using contexts to organize and protect my business logic from the web app. I’m attempting to more fully decouple my contexts, though, and wanted to check with others to make sure I’m not heading down a bad path.

I’m building an app to manage shared office suite buildings. There are more contexts than this, but let’s focus on the following four:

Accounts - Users and Profiles, Authentication
CompanyMgmt - Companies can lease multiple spaces in our buildings and a company can have multiple users
Inventory - Locations, Offices, Conference Rooms, Spaces (like first floor), office types (salon, focus, coworking, etc.)
Calendar - Reservations - Members can reserve conference rooms

I’ve previously tried to keep these somewhat decoupled by allowing for cross-context belongs_to relationships, but no has_one/has_many. I’d like to remove all cross-context relationships.

Let’s take the “Calendar” context as an example since it is intertwined with a lot of other contexts.

There is a calendar for each of our building locations. Any company with a lease on a suite in that building gets 20 hrs/mo of free time to reserve space. If they have multiple leases, they’ll get more time.

Each location has about four conference rooms and depending upon the size, different rooms may count more against allotted monthly hours.

Companies have multiple users and the time reserved by the users is pooled together against the allotment.

All in all, there are the following cross-context dependencies in the “Calendar” context:

Inventory.Room - Rooms are pulled directly from the Inventory context. Reservations have a belongs_to room association.
Accounts.User - Reservations have a belongs_to user association.
Inventory.Location - Each location has a calendar. These locations are directly from the Inventory context.
CompanyMgmt.Company - There is no direct association between companies and reservations, but users have reservations and companies have users. I need to let users know how many hours their company has left and to report on usage by company.

To fully realize the benefits of decoupled contexts, I originally thought that I would need to have a separate table for each of calendar users, rooms, locations, and companies. This would be much like the concept of an “author” record in a blog. That seemed like a ton of work. However, I realized that I can just create separate schemas for each of these entities in the Calendar context, using the same table as the schemas in the other contexts. I can even have the exact same associations, only they now associate to the Calendar-specific schemas. This means that none of my queries (including preloads, joins, etc.) need to change at all.

This allows me to remove a few calendar-specific attributes from other contexts.
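
To make that concrete, here is roughly the shape of it (module and field names are illustrative, not my actual code):

defmodule MyApp.Calendar.User do
  use Ecto.Schema

  # Same "users" table as Accounts.User, but only what Calendar needs.
  schema "users" do
    field :name, :string
  end
end

defmodule MyApp.Calendar.Room do
  use Ecto.Schema

  # Same "rooms" table as Inventory.Room; the hours multiplier is a
  # calendar-specific attribute that no other context cares about.
  schema "rooms" do
    field :name, :string
    field :hours_multiplier, :decimal
  end
end

defmodule MyApp.Calendar.Reservation do
  use Ecto.Schema

  schema "reservations" do
    # The associations now point at the Calendar-specific schemas, so
    # existing preloads and joins keep working unchanged.
    belongs_to :room, MyApp.Calendar.Room
    belongs_to :user, MyApp.Calendar.User
    field :starts_at, :utc_datetime
    field :ends_at, :utc_datetime
    timestamps()
  end
end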

This brings me to my questions:

  1. All in all, this was an extremely simple refactoring, to the point that I’m wondering whether it is so easy that I’m missing issues it might cause. Are there problems this might create for me down the road?
  2. Is this the “go-to” way to handle this sort of thing in cases where “all” of the records for a given table are relevant to a given context?
  3. I’ve previously tried to avoid throwing a bunch of extra columns onto a given table for different use cases. For example, if there were 50 different company settings for different scenarios (like calendar usage), I would probably break those into different tables to, if nothing else, avoid accidentally loading all of those settings when grabbing a company record. However, different schemas ensure that only the specified attributes are ever selected. Thus, there doesn’t seem to be much of a downside to dumping a bunch of columns into one table. Any disagreements on that point?
14 Likes

Hey!

I was thinking about this problem recently, and I concluded that there is no “good” solution. However, there are different tradeoffs.

My problem with Phoenix Contexts, in general, is that they reuse schemas as application data. E.g. if you create a context via generator like this:

mix phx.gen.html Accounts User users name:string age:integer address:string

This command generates templates, view, controller, context and schema. The %User{} record is present in all those places. Let’s say that users are central in our application, and many other contexts use them somehow. All of those have different schemas that use some subset of those fields. E.g. SnailMailDelivery context uses only User.address.

Suddenly, there is a requirement that makes you split the address into street, number, city and zip code. You have to change the Accounts context and SnailMailDelivery context and every other context that used address field.

In your question, you asked if adding context-specific fields to Calendar is OK. I think it is perfectly OK. The problem is with data shared between contexts.

Another problem might be if, for some reason, you decide to refactor Inventory into a polymorphic association to use with Calendar. Suddenly, you have a sweeping change through almost the entire project. Any relation or column that you use in two contexts (except maybe ids) is a potential problem. I still think it is an OK solution because it is simple, and contexts give an excellent high-level overview of your application. It is one of the suggested solutions in the official docs https://hexdocs.pm/phoenix/contexts.html#cross-context-dependencies along with creating micro tables with 1:1 relations.

I believe the problem stems from the fact that the DB is just a big global variable, and no amount of structure can change that. The solutions I came across are:

  1. Use event sourcing and have separate storage for each context

That is a purist solution and requires lots of setup, so I’ve never tried it. SQL databases give me ACID, and from what I understand, in Event Sourcing solutions you have to work hard with sagas and what-not to get similar properties. (Don’t quote me on that; I might not have studied it hard enough.)

  2. Try to treat your database as an afterthought in your project

If I needed to summarise “Designing Elixir Systems with OTP” by Bruce and James, it would be “code without a database and then add it later”. In the book, they go to great lengths to avoid touching the database. Only when the project is ready and functional in memory do they make a separate OTP app with DB schemas and pull it in as a “poncho” dependency. For me, it was overkill. I understand that database schemas tend to “spill” into your project, resulting in hardcore refactors for a schema change. But that was a lot of config just to abstract the DB.

  3. Use Service/IO/Model (Service/DA/Core)

My favourite approach is explained in a talk by Rafał Studnicki https://www.youtube.com/watch?v=XGeK9q6yjsg He splits every context into three parts: a pure “model” (in Dave’s book it is the “functional core”), a “Data Access” layer with schemas, and a service layer that uses the two.

The idea is that in the core you have pure Elixir data structure %User{} (not a schema). In DA, you have the schema, but you keep it private. E.g. Accounts.DA.get uses the schema internally but returns Accounts.Core.User record.
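
A rough sketch of that shape (module and function names here are my own guesses, not from the talk):

defmodule Accounts.Core.User do
  # Pure Elixir struct; no Ecto in the core.
  defstruct [:id, :name, :address]
end

defmodule Accounts.DA.UserSchema do
  use Ecto.Schema

  schema "users" do
    field :name, :string
    field :address, :string
  end
end

defmodule Accounts.DA do
  alias Accounts.Core
  alias Accounts.DA.UserSchema

  # The schema never leaves this module; callers only ever see Core.User.
  def get(id) do
    case MyApp.Repo.get(UserSchema, id) do
      nil -> {:error, :not_found}
      s -> {:ok, %Core.User{id: s.id, name: s.name, address: s.address}}
    end
  end
end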

The downside is that instead of DRY, you now have to use WET (Write Everything Twice :slight_smile:). But now you can safely refactor schemas. If you refactor address into four separate fields, you need to refactor the DA layer in all contexts that use the %User schema, but only that.

I think this approach has the minimum overhead for the maximum gain. Repeating the schema once more is not a big deal. It is already repeated in the form, schema, changeset and migration. Adding a fifth place to make future refactors a little bit easier is worth it in my opinion.

Rafał also talks about using Dialyzer to help you keep structs private to contexts. Highly recommend watching it :slight_smile:

15 Likes

Thank you for the detailed response and I can see how shared fields could be problematic. I appreciate the information you provided on decoupling the model/core data from the data access layer. I watched the video and it seems like a nice architecture, but I’m having trouble envisioning how that actually works in practice.

I’ve tried to take your address example and turn it into some ugly, contrived pseudo-code. Imagine that the users table originally had a single column for address and later had to be broken into four columns. The data access layer (IO) would have been updated to accommodate the changes, while the Model (and app code using it) would have hopefully stayed the same.

I’ve taken some massive liberties in terms of what an address looks like and dumped all the orchestration into the update_user function even though some of that would need to move into IO.User (especially the Repo interaction). That video returns maps while my example would return model structs and changesets. I’m not sure I’d be ready to give those up.

The basic concept I went with is to leverage schemas and changesets for both the Model and data access layer (IO). If you can make any sense of this, is this the general pattern? I’m likely over-complicating; are there any libraries or methods that make this simple?

defmodule MyApp.Accounts.Model.User do
  use MyApp.Schema

  embedded_schema do
    field :name, :string
    field :email, :string
    field :address, :string
    timestamps()
  end

  def changeset(user, attrs) do
    user
    |> cast(attrs, [:name, :email, :address])
    |> validate_required([:name, :email])
  end
end

defmodule MyApp.Accounts.IO.User do
  use MyApp.Schema
  alias MyApp.Accounts.Model

  schema "users" do
    field :name, :string
    field :email, :string
    field :address_1, :string
    field :city, :string
    field :state, :string
    field :zip, :string
    timestamps()
  end

  # Convert an IO User to a User model
  def to_model(user) do
    %Model.User{
      name: user.name,
      email: user.email,
      address: Enum.join([user.address_1, user.city, user.state, user.zip], ", "),
      inserted_at: user.inserted_at,
      updated_at: user.updated_at
    }
  end

  # Convert a User model to an IO user
  def from_model(model) do
    [address_1, city, state, zip] = build_address(model.address)

    %__MODULE__{
      name: model.name,
      email: model.email,
      address_1: address_1,
      city: city,
      state: state,
      zip: zip,
      inserted_at: model.inserted_at,
      updated_at: model.updated_at
    }
  end

  defp build_address(model_address) do
    # Naive split back into [address_1, city, state, zip]; assumes the
    # address was joined with ", " and has exactly four parts.
    String.split(model_address, ", ", parts: 4)
  end

  def changeset(user, attrs) do
    user
    |> cast(attrs, [:name, :email, :address_1, :city, :state, :zip])
    |> unique_constraint(:email)
  end
end

defmodule MyApp.Accounts do
  alias MyApp.Accounts.{Model, IO}
  alias MyApp.Repo

  def update_user(%Model.User{} = user, attrs) do
    # Get a User model changeset
    model_changeset = Model.User.changeset(user, attrs)

    # Attempt to apply the User model changes
    case Ecto.Changeset.apply_action(model_changeset, :update) do
      {:ok, updated_model} ->

        # Convert the updated User model to an IO User and get all the attributes
        changes =
          updated_model
          |> IO.User.from_model()
          |> Map.from_struct()

        # Take the original User model, convert it to an IO User, and run the
        # IO User changeset. The changes will be all attributes, but it won't matter
        # because changesets already know which attributes are actually different.
        # Attempt to update in the Repo.
        case user
             |> IO.User.from_model()
             |> IO.User.changeset(changes)
             |> Repo.update() do
          {:ok, updated_io_user} ->
            # Convert the updated IO User to a User model
            {:ok, IO.User.to_model(updated_io_user)}

          {:error, io_changeset} ->
            # Return the original User model changeset, but grab the errors
            # from the IO changeset. This would likely only be a constraint
            # error. Manually set the action to update.
            {
              :error,
              model_changeset
              |> Map.put(:errors, io_changeset.errors)
              |> Map.put(:action, :update)
            }
        end

      err ->
        err
    end
  end
end
1 Like

Maybe not much simpler but at least a little :slight_smile:

  1. There should always be one context that is responsible for updating a given schema. That means that during a refactor from address to granular address fields, the context that updates the user would change the model too. So in this case, Accounts.Model.User would change to those granular fields.

However, other contexts that only read and display user information would need only one change, in to_model. E.g. a Billing context would only use that address as a whole and never change it.

With that in mind, you should never need the awkward from_model with splitting address by comma or translating changesets with different sets of fields.

  2. I am still figuring out how to fit changesets into the equation, but I understand this:

a) schema changeset should be hidden in MyApp.Accounts.IO.User
b) MyApp.Accounts.Model.User.new can return {:ok, %User{}} | {:error, changeset}. That changeset should have nothing to do with the database, only offline checks.

That model changeset in b) should be used for creating the Phoenix form. I am still figuring out the details of this, but I came up with something like this:

defmodule MyApp.Accounts do
  alias MyApp.Accounts.Model.User
  alias MyApp.Accounts.IO

  def update_user(user_id, params) do
    with {:ok, old_user} <- IO.User.get(user_id), # returns Model.User, not a schema
         {:ok, new_user} <- User.new(Map.merge(User.to_map(old_user), params)),
         {:ok, new_user} <- IO.User.update(old_user, new_user) do
      {:ok, new_user}
    else
      # the only type of error here is {:error, user_model_changeset};
      # even IO.User.update does not return a schema changeset
      error -> error
    end
  end
end

So changeset translation happens in IO and should be more straightforward because schemas should always have more granular fields.

  3. I am still figuring out what to do on new and edit actions that require the changeset. I am pretty sure I’d like to reuse the Model.User changeset for building the form. Maybe something similar to the change function from the default Phoenix generators that returns the changeset.

One issue I had is that phoenix_ecto implements the Phoenix.HTML.FormData protocol for changesets. A generated form automagically knows whether it should use PUT for update or POST for create based on whether the changeset’s data was loaded from the database or not.

If we have a separate “model changeset” that never touches the database, we will need to pass the method by hand. It isn’t a big deal, but it’s another small annoyance when trying to fit this approach.

3 Likes

Is this a core rule of this pattern? This seems like a good thing to do and certainly would simplify things. However, I’m just not sure I can always stick to that. Let’s say you have one context in which users are created with an address via an admin area. However, a separate profile context allows the owner of that address to update their own address. I don’t know. Perhaps that context should reach out to the original context for that update. Or perhaps if it really does need to make writes, you have to update the associated code for that specific context as well.

One difference between how you and the video author are doing it is to pass an ID rather than a struct to the update_user function. For me to do that would require a fairly large change across my codebase. Do you think you lose a lot by passing a struct as opposed to an ID? I like being able to query for the struct in the controller, authorize, and then pass it to the given context function.

1 Like

You did a great and extensive analysis in your post, but this is the part I disagree with. Those four places may already seem like a lot, at least to people just getting started or people writing smaller systems. Adding a fifth place (possibly even a sixth if you want to add specs, which I believe you should) is going to make things quite tedious. To be clear, I don’t think this is a bad approach, but I feel the effort only becomes worth it in larger codebases.

The gist of the tradeoff is expressed here:

Adding a fifth place to make future refactors a little bit easier

You seem to argue we should overcomplicate the design today to make some possible future change easy. This seems like a classical case of YAGNI. Martin Fowler did a great treatment of the topic in this article. The gist of it is:

a) You’re betting on one of the many possible futures. Chances are slim that you’re right.
b) Even if you’re correct, a lot of effort will be spent maintaining an overly complex design before the future change is actually needed.

For more discussion of the topic I also recommend Is Design Dead by the same author.

To answer the original question - I believe that it’s fine for multiple contexts to use the same tables. In my view, Phoenix contexts are not DDD bounded contexts. Bounded contexts are used in much larger domains, and my impression is that people typically use different databases or at least completely different db tables. That’s a lot of ceremony, and you need to have a large enough use-case to justify it. My impression from the original post is that the domain is nowhere near large enough to justify such ceremony. As usual, there’s no one size that fits all scenarios.

When it comes to Ecto schemas, I think that it’s fine to use them across contexts. Schema is part of the context interface (after all, we return schemas to Web), so I see no reason why one context shouldn’t use schemas from another one. TBH, I don’t really think that schemas are owned by a particular context anyway. They are context-level entities, but not necessarily tied to some particular context.

However, I do agree with this:

I wouldn’t put it so strongly, but I do agree that two contexts updating the same schema is a likely sign of a design issue - for example, that things which belong together have been separated. Which leads me to the following question:

I’m not completely sure I understood the problem, so let me paraphrase it. Let’s say that we must support the following scenarios:

  1. A user can update their own profile
  2. An admin can update anyone’s profile

The way I’d model this is via a single function, e.g. Profile.update(user_id, updater_id, data_to_update). Both the admin UI and the user UI would invoke this functionality. The domain rules related to the profile would be encoded in a single place. When you want to understand how a profile can be updated, this would be the single source of truth.
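
A sketch of the shape I have in mind (the schema module and the admin? check are assumptions):

defmodule MyApp.Profile do
  alias MyApp.Repo
  alias MyApp.Profile.User

  # Both the user UI and the admin UI call this one function, so the
  # "who may update whom" rule lives in exactly one place.
  def update(user_id, updater_id, data_to_update) do
    with :ok <- authorize(user_id, updater_id),
         %User{} = user <- Repo.get(User, user_id) do
      user
      |> User.changeset(data_to_update)
      |> Repo.update()
    else
      nil -> {:error, :not_found}
      {:error, _} = error -> error
    end
  end

  # Users may update their own profile...
  defp authorize(user_id, user_id), do: :ok
  # ...and admins may update anyone's (the admin check is assumed to exist).
  defp authorize(_user_id, updater_id) do
    if MyApp.Accounts.admin?(updater_id), do: :ok, else: {:error, :unauthorized}
  end
end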

This would also mean that I’d have no admin context. Admin UI is a UI concern, and UI is just a view of the domain, but it’s not the domain itself. Both UI and domain are driven by the current requirements, but that doesn’t mean that the domain should map exactly to the way UI is organized.

17 Likes

I’m interested in additional ways I might decouple the contexts in my app. I’ve felt the pain of highly coupled code in large Rails codebases, where it is impossible to know all the far-flung areas of the app that a seemingly simple change might impact (well, without automated tests, of course).

That being said, I want to be practical about it. Breaking into contexts and this latest change of using context-specific schemas sharing tables seems like a big win for very little cost. I’m certainly not there yet with separating into Core/Model and Data Access Layers.

You’re right, my app in its current state is not that large and might never even approach the size of some other apps I’ve worked on. I have the following contexts: Accounts, Blogging, Calendar, CMS, CompanyMgmt, FAQ, Inventory, Messaging, and Networking. There will likely be CRM and Billing contexts that will be larger and more complex than any that currently exist. These have felt like the natural slices of my app to break into separate contexts, but time will tell. As with most projects, there are a lot of unknowns on what is developed in the future, but I’d like to be on a good footing to avoid that feeling of an app that is perilous to maintain.

I think this is a rule I can live by. My example was poor because I don’t have a real example of needing to update any piece of data from multiple contexts anyway. My contexts are not broken up between admin and user UI.

I’ve asked / pushed back against passing IDs (as opposed to structs) to these functions a couple times, but I guess a benefit of passing an ID is that the function doesn’t have to care if it is for a Calendar.User or an Accounts.User.

1 Like

I used ids here mostly b/c I assume the client is the web tier which doesn’t have the user struct, only the id. If the client code has the struct, I’d make the API accept the struct (it should probably be done for the second updater, b/c web likely already obtained the user struct through auth).

This decision process could lead to some inconsistencies (some funs accept ids, others structs), but I’m personally not too worried about it. If it bothers you, you can always require the client to pass the struct, but then they’ll have to make two API calls, something like:

with {:ok, user} <- Profile.fetch(user_id), do: Profile.update(...)

My personal opinion is that a design which is simpler is more open to any possible future change. When that change arrives, if it doesn’t fit into the current design, the code should be refactored (basically - make the change easy, then make the easy change). This simplicity is obtained by making the design reflect the present, not some possible future.

So I like to split things based on what the current code does, not on what it might do. In practice, this means that I’ll actually combine some seemingly unrelated things in the same context module (i.e. I’ll avoid splitting things too early). Once the module grows large enough, I’ll have better insight to identify the distinct logical groups.

I recommend reading the Fowler articles I linked in my previous post, because they provide an excellent and an extensive treatment of this approach.

6 Likes

Let me chime in with a question: how thin are your controllers really?

I’ve found out that there were some pretty involved with-statements in my controllers, so I’ve decided to push that to a separate context. As a result, the controller is just a one-line call into the domain plus handling of the different return values, which sounds about right, since I want the controller to just translate HTTP to domain and back.
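
i.e. something like this (names invented):

def create(conn, %{"reservation" => params}) do
  # One call into the domain, then translate the result back to HTTP.
  case MyApp.Booking.create_reservation(conn.assigns.current_user, params) do
    {:ok, reservation} ->
      redirect(conn, to: "/reservations/#{reservation.id}")

    {:error, %Ecto.Changeset{} = changeset} ->
      render(conn, "new.html", changeset: changeset)
  end
end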

But as a result a new context emerged - a context that’s tailored to the specific controller. Since that controller is tied not to a single schema but to a use case, so is the context. Which means that it depends on other, simpler contexts (like Accounts, Schedule, etc.). What do you think about this approach? Do you happen to have similar things (more complex contexts; contexts dependent on other contexts) in your codebase?

That is valuable insight! I also try not to overcomplicate stuff for the sake of some future benefits, because the future might never come. I focused on the ability to refactor the DB structure because that bit me hard one time, and because the original question of @baldwindavid concerned schemas.

However, I believe that the approach with a decoupled model and schema has many more immediate benefits for anything that is even slightly more involved than basic CRUD.

  1. Since models are “functional core” and IO + service are “imperative shell”, it is often easier to find business logic without looking through lots of “infrastructure code” like DB setup. Your model is your business logic.
  2. I’ve found it easier to test in isolation. Ecto with the sandbox gives excellent tools to test with your database, so I often test only service code. However, if there are other interactions like hitting APIs or parsing files, a separate IO layer makes it much easier to mock stuff. It also makes it clear where to configure dependency injection. The service layer is the place that decides which IO to use: real API or mock (see the sketch after this list).
    Lots of people struggle to find the right place for this type of config. Should I pass the Repo module as a function argument? (That can get tedious fast if there is DB + API + anything else.) Should I use config.exs? That adds bloat to the configs and keeps the config far from the code. A separate service layer creates the perfect spot for DI config.
  3. It makes me think more about the data model and use proper data structures.
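
Here is a minimal sketch of what I mean by the service layer owning that decision (module names invented):

defmodule MyApp.Mailing.Service do
  # The service decides which IO module to use; a test can pass a mock
  # module instead, without config.exs entries or threading the Repo around.
  def send_welcome(user_id, io \\ MyApp.Mailing.IO) do
    with {:ok, user} <- io.get_user(user_id) do
      io.deliver_email(user.email, "Welcome aboard!")
    end
  end
end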

The last point needs more explaining and deserves an example. In one of our banking apps, we have the following schemas: %Payment{id, requested_amount} and %PaymentEvent{payment_id, type, amount}. Those schemas abstract away different payment methods, so there are many types of events. E.g. success means we got the money, refusal means something went wrong, and chargeback means we got the money, but the client reversed the transaction. Some providers even allow double payments, so we get the money twice and need to reconcile it ourselves!

We started with pure schemas and calculated the amount we have in our account from a Payment with preloaded PaymentEvents. But operating on nested data structures gets tedious quickly. We introduced a separate data structure: %Payment{requested_amount, status (calculated from events), balance (also calculated from events)}. If we had started with the model/IO/service architecture, we would probably have started with something like that, because we would think about schemas only after writing the business logic in the simplest way possible.
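
Sketching the model side (field names and status rules here are simplified, not our real code):

defmodule Payments.Core.Payment do
  defstruct [:id, :requested_amount, status: :pending, balance: 0]

  # Flatten the event list into status and balance once, so the rest of
  # the business logic works on plain fields instead of nested preloads.
  def from_events(id, requested_amount, events) do
    balance =
      Enum.reduce(events, 0, fn
        %{type: :success, amount: amount}, acc -> acc + amount
        %{type: :chargeback, amount: amount}, acc -> acc - amount
        _event, acc -> acc
      end)

    status =
      cond do
        Enum.any?(events, &(&1.type == :chargeback)) -> :charged_back
        balance >= requested_amount -> :paid
        Enum.any?(events, &(&1.type == :refusal)) -> :refused
        true -> :pending
      end

    %__MODULE__{id: id, requested_amount: requested_amount, status: status, balance: balance}
  end
end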

And there are lots of cases like that. Maybe you have a user with a preloaded list of blog posts, but you don’t care about the order? You want to get posts by title quickly, so you would use a map instead of a list? By starting with the functional core and worrying about the DB later, you make the most valuable business logic code as simple as you possibly can. The tradeoff, as in any architecture, is that you need to push the complexity somewhere: e.g. the IO layer becomes responsible for translating models to schemas and back.

I don’t claim that this is the right choice for everything :slight_smile: Blog engine or any other CRUD is simpler when we pass the schemas everywhere directly.

Apart from that one quote I fully agree with everything Saša said. Better to keep things together until you know you need to break them.

3 Likes

I find half of this approach good and the other half really bad :stuck_out_tongue:

The idea to have thin controllers is great. Contexts are great because they make you think about the interface. You should be able to execute your business logic as a series of function calls in iex. It works great when you do seeding or try to set up some things on staging for later tests. I wouldn’t even hesitate to use this API in seeds. So I am all for thin controllers!

However, contexts should not depend on each other. Contexts need to be separated so that when you update one, all the others are blissfully oblivious to that change. Contexts depending on each other defeats the decoupling.

If you need two contexts to do some job, those are probably one context.

In the scheduling example, I believe you don’t need two contexts. If you want to schedule something for a user, do you need their name? Probably not. Unless you have business rules like “anyone named Andrew can’t book on Thursdays”. So you only need to identify which user is doing the booking. You can pass only an id.

There might be a case where you really need the user in scheduling. Maybe there is a party only for adults. You can pass user.id and user.age to the scheduling context, but that starts to seem smelly. There might be more things you need later. In that case, you still don’t use two contexts. You duplicate the schema with only the necessary fields in the scheduling context (as in the original question of this thread).

Since only one context should be responsible for writing data, create and update should almost always be one function call + handling errors.

index and show can often read stuff from multiple contexts to display it nicely for the user. Showing user.name next to a schedule sounds like a good idea :smiley: So those would be two separate calls in the show action.
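
Something like this (function names invented):

def show(conn, %{"id" => id}) do
  # Two separate reads from two contexts; neither context knows about
  # the other - only the controller composes them.
  schedule = MyApp.Scheduling.get_schedule!(id)
  user = MyApp.Accounts.get_user!(schedule.user_id)
  render(conn, "show.html", schedule: schedule, user: user)
end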

Yes, I think we’re in full agreement. My current clients build these types of systems. I mean there’s always some more logic than just store data and display it, but nothing as involved as e.g. a banking system. We actually briefly deliberated about pure model data structures, but they didn’t seem worth it for us.

I absolutely agree that in more complex cases using the underlying storage view as the domain model won’t be optimal. I mean, you can always hack around it, but this will bring needless complexity into the app code. So, yeah, in such cases having a pure domain model decoupled from the internal db representation is definitely a good choice.

To be clear, this isn’t a shortcoming of Ecto, but rather a property of it. Ecto, IMO, makes a great choice in keeping schemas very close to the underlying db representation, which makes things simpler in the beginning and is sufficient for smaller domains. For anything else we need to do our own work, and I don’t think that any library/framework can help us here.

So I just wanted to point out that starting with the pure model is, IMO, overkill for many projects, and also that such an approach needlessly raises the bar for people who are just starting to use the technology.

3 Likes

My users come to me with a token, not an id, so I do need to fetch the user. I guess I could use a plug here, but I’m not convinced. Also, the authorization requires querying across schemas, so I don’t see how I could stay in one context. Having a slimmed-down duplicate of the schema sounds like an interesting idea to try out.

index and show are where things get murky. My controller asks for quite a lot of data, so the dependencies are there - in the controller. I end up with an action that calls 5 functions, which I think crosses the line of being “thin”. Moving this out to a new context just makes it easier to test, but it doesn’t seem to add more coupling.

Amen! Ecto is fantastic! It makes easy things easy and hard things still doable!

3 Likes

Understood. I usually have a struct because I have grabbed it to authorize.

They are generally pretty thin, I think. In a lot of admin areas they are authorized at router level, so it is a simple case statement.

Other areas have per action authorization which adds a with. Anything more intensive typically takes place in whatever context function is being called. Some of these functions within the context end up as big multis. In a few cases, a user is passed to the context function for authorization. That is typically to use a different changeset based upon user permissions.

I think as I refactor a bit, I will likely end up with some cross-context, um, contexts. The Phoenix docs talk about this a bit at the end of the contexts guide: https://hexdocs.pm/phoenix/contexts.html

I appreciate the information and can experiment and learn even if I’m not going to use it right away. It is good to have that as a potential route when I encounter a situation where the defaults aren’t quite cutting it.

To be clear though, I am absolutely duplicating the user name in a Calendar.User schema. I won’t be writing the user name from the Calendar context, but I will definitely read it. Perhaps that right there is one of the issues with using the same table with multiple schemas, though. If you can/should only update a given column/field from a single context, you don’t immediately know in a given context whether a field can be written to or should be read-only.

Anyway, to me, the beauty of being able to use the same table for schemas in multiple contexts is that I can more easily reason about whatever context I’m currently in. Quickly skimming the schemas within the Calendar context shows most of the data used in the context. Context-specific authorization can be performed directly on structs within the Calendar context.

It’s not to say I won’t/don’t reach out to other contexts/services when it makes sense, but contexts mostly being able to operate on their own feels pretty nice. As a side benefit, it might make it easier to break one of these contexts off into its own app in the future. That is not the reason I’m doing it, though. It is an attempt to (maybe even wrongly) make the app easier to understand and, ultimately, more maintainable.

Agreed. Regardless of the direction I’m going now, I don’t feel particularly locked in to anything. Ecto and Phoenix are both light and flexible and changing things often just comes down to changing some module names.

This is a great thread. I’ve had a lot of these questions myself and haven’t come up with answers for all of them yet.

I just wanted to chip in by sharing a link to “Building beautiful systems with Phoenix contexts… by Andrew Hao” https://youtu.be/l3VgbSgo71E which I think has some good advice around getting started with contexts, and particularly on sharing concepts between contexts starting from 22:03.

3 Likes

Thanks for the link! Somehow I’d missed the guide on contexts and it’s full of great content (thanks @chrismccord!). I’m actually doing the exact same thing as the UserRegistration context mentioned in the guide.