How could I organize “optional” modules in my library?

I’m developing a Supabase client for Elixir with complete support for all their services, like:

  • storage
  • auth
  • realtime
  • UI (LiveView)

So there’s a “parent” module that defines client management/initialisation, and then a child module for each integration, like:

Supabase
Supabase.Storage
Supabase.Auth
Supabase.UI
Supabase.PostgREST

The problem is: a user may need only the Supabase and Supabase.Storage modules and not the others, for example. So how could I “activate” and “deactivate” the rest of the modules? Does it make sense to compile all the child modules when only one of them is used?

So I started looking into some possibilities:

  1. Use adapters

Like Bamboo and Ecto: they implement adapters and you choose which one you want to use. But this option doesn’t seem to match my requirements, as the child modules aren’t “different implementations of the same final result”; they are completely different “applications”
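For contrast, a minimal sketch of what the adapter pattern buys you – everything here (`Supabase.Mailer`, its callback, the `:mailer_adapter` key) is hypothetical, just to show that adapters assume every implementation fulfils the same contract:

```elixir
# Hypothetical behaviour: adapters only make sense when each one is an
# interchangeable implementation of the same contract.
defmodule Supabase.Mailer do
  @callback deliver(map()) :: :ok | {:error, term()}

  # Dispatch to whichever adapter the host app configured.
  def deliver(email) do
    adapter = Application.get_env(:supabase, :mailer_adapter, Supabase.Mailer.Local)
    adapter.deliver(email)
  end
end

defmodule Supabase.Mailer.Local do
  @behaviour Supabase.Mailer
  @impl true
  def deliver(_email), do: :ok
end
```

Since Supabase.Storage and Supabase.Auth don’t share a contract like this, the pattern indeed doesn’t fit.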

  2. Conditionally compile modules

Given a config like

config :supabase, :storage, enable: true
config :supabase, :realtime, enable: true

I would define these modules as

if Application.compile_env(:supabase, [:storage, :enable]) do
  defmodule Supabase.Storage do
    # ...
  end
end

if Application.compile_env(:supabase, [:realtime, :enable]) do
  defmodule Supabase.Realtime do
    # ...
  end
end

The negative side is that I would need this if/2 wrapper on every child/helper module, and it doesn’t seem to be very good practice for Elixir libraries/applications
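There’s an extra cost not mentioned above: every caller of an optional module would also need a runtime guard, because the module may have been compiled out. A hypothetical helper (the names are mine, not part of any library) could centralize that check:

```elixir
defmodule Supabase.Features do
  # Calls mod.fun(args...) only if the module was actually compiled in;
  # otherwise reports the feature as disabled.
  def call(mod, fun, args) do
    if Code.ensure_loaded?(mod) and function_exported?(mod, fun, length(args)) do
      apply(mod, fun, args)
    else
      {:error, :disabled}
    end
  end
end
```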

  3. Define “clients”/features at compile time

As Ecto does with Ecto.Repo, it would be necessary for users to define their own Storage or Realtime modules and then use the implementation, like:

defmodule MyApp.SupabaseStorage do
  use Supabase.Storage # , config: ...
end

But it seems like a strange fit for API integrations, and like overusing compile-time features
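For reference, a rough sketch of what the `__using__` macro behind that pattern could look like; the `:bucket` option and `bucket/0` function are invented for illustration:

```elixir
defmodule Supabase.Storage do
  defmacro __using__(opts) do
    quote bind_quoted: [opts: opts] do
      # Compile-time configuration baked into the user's module,
      # similar to how Ecto.Repo injects code.
      @bucket Keyword.get(opts, :bucket, "default")

      def bucket, do: @bucket
    end
  end
end

defmodule MyApp.SupabaseStorage do
  use Supabase.Storage, bucket: "avatars"
end
```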

  4. Split into multiple packages

The last one I thought of is to split those child modules into separate libraries that require the “parent” one, like ex_aws does. So if you want to use Supabase.Storage you would need:

# mix.exs

defp deps do
  # ...
  [{:supabase_potion, "~> x.x.x"}, {:supabase_storage, "~> x.x.x"}]
  # ...
end

The downside is maintaining multiple packages/repositories

Conclusion

So I would like to ask the community: what makes the most sense? Which is the best alternative in your opinion? Are there other options to solve this “problem”? How would you proceed in this situation?

What specifically is the “problem”? Some of the alternatives you listed exist to address specific issues:

  • Ecto adapters address the “database adapters might use DB-specific compiled libraries” concern so that users don’t have to install code they won’t use

  • the use Ecto.Repo pattern (amongst other things) addresses the need of some configurations to only support some APIs - for instance, a repo configured as read-only won’t even have an insert function

  • the “many packages” approach addresses the gigantic scope of AWS’s APIs.

Out of the pieces you listed, Supabase.UI seems the most debatable for inclusion - especially if it depends on LiveView.

4 Likes

What I mean by “problem” is the difficulty of organizing unrelated/different code in the same library. Indeed, Supabase.UI is the most debatable one, as it depends on a whole different project configuration for assets and components. But where is the dividing line?

As Supabase doesn’t have as many APIs as AWS, it seems fair to accept the drawbacks of implementing all of them as a monorepo, but I would like to confirm this idea against other people’s experience.

Unless each module brings with it like 10_000 extra lines of code to compile, I don’t feel that would ever be an issue. Still:

  • The Ecto.Repo pattern makes sense for two reasons: (1) to inject only the code you need – this is done via macros so nothing gets compiled unless you use it, and (2) to conditionally include/exclude functions from the injected code depending on options you can pass to the use statement (@al2o3cr alluded to this).

  • The conditional compilation pattern is fine – I’ve seen it used in a number of projects – though I will again say that I feel this is over-optimizing without proof that including the code unconditionally causes an actual runtime-performance or compilation-time penalty. Of all the approaches you listed I would probably go for this one, if you are really sure that the different parts of your Supabase API implementation have clear code boundaries. If you are not sure of that then just don’t; there’s only huge maintenance pain down that road.

  • Splitting into multiple packages, however, IMO makes the most sense if you want to separate base functionality from UI (again, @al2o3cr also alluded to this), though it will make it more difficult for you to develop locally (you’ll have to use path dependencies locally in order to keep a good pace).

  • Finally, the adapters thingy is superfluous in this case unless you plan to have several competing implementations of the same thing.
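Point (2) above – conditionally including or excluding functions based on `use` options – can be sketched like this (all names hypothetical, mirroring Ecto’s read-only repos):

```elixir
defmodule Supabase.Client do
  defmacro __using__(opts) do
    read_only? = Keyword.get(opts, :read_only, false)

    quote do
      def fetch(path), do: {:ok, path}

      # A read-only client simply never gets an insert/2 defined.
      if not unquote(read_only?) do
        def insert(path, _data), do: {:ok, path}
      end
    end
  end
end

defmodule MyApp.ReadOnlyClient do
  use Supabase.Client, read_only: true
end
```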

To address your question about where to draw the line:

There’s not a crystal-clear one. Your best benchmark here could be “how many project dependencies do I have to pull in to implement Supabase.XYZ?” – f.ex. in the UI’s case you’ll have to pull quite a lot from Phoenix, so that warrants a separate project entirely IMO. This is probably the same for the realtime piece.

Ultimately, go for what makes it easier for you to maintain the entire thing. That being said, I would not use adapters at all, and the Ecto.Repo approach is questionable as well unless you need to configure the injected code. Though as a counter-argument, you do get something that’s optimal in the machine sense of the word – you only inject the code you need – so don’t take that recommendation too seriously.

So finally: I’d go for a combination of separate projects and use-able injectable code. You can do both.

But again – go for what will make your life easier.

4 Likes

Ecto.Repo has been called out a few times here and I want to add the other side of the coin. Its architecture is not perfect and comes with downsides:

  • It needed a bunch of what could be called backpedaling to get dynamic repos, because so much of it is not just plain functions to be called.
  • It’s hard to document what is part of a repo and what isn’t. E.g. people often don’t know that the Ecto.Adapters.SQL functions exist on their repos.
  • Part of why the system works the way it works is because it also needs to come with a whole tree of related processes for config and connection handling.

My general suggestion is therefore to first make things work without any metaprogramming – meaning the functionality is available by just calling functions, if possible. Then, if needed and actually useful, layer on metaprogramming. Going with “I’ll do it just like Ecto does with Ecto.Repo” is imo not a great idea.
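A sketch of that suggestion (function names hypothetical): the core is plain functions taking an explicit client, and any macro-based convenience layer remains a thin wrapper you can add later:

```elixir
# Plain-function core: no metaprogramming, explicit client argument.
defmodule Supabase.Storage do
  def list_buckets(%{base_url: url}), do: {:ok, {url, :buckets}}
end

# Optional convenience wrapper a user could write (or a later `use` macro
# could generate) – it only delegates to the plain functions.
defmodule MyApp.Storage do
  @client %{base_url: "https://example.supabase.co"}
  def list_buckets, do: Supabase.Storage.list_buckets(@client)
end
```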

4 Likes

Just curious, but wouldn’t the compiler optimise the unused code out of the compiled artifact, so these unused modules would only be on the dev machine in the deps folder? Or am I missing something?

1 Like

How would the compiler know which modules are unused? You can always add usage of them at runtime; debugging tools might only be used on demand, e.g. via the remote console. Even usage by compiled code might be hidden behind apply/3.
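A tiny example of the apply/3 case – the call target below is built at runtime, so no compile-time analysis can prove Enum is used here:

```elixir
# Module name assembled at runtime; invisible to xref/compiler checks.
mod = Module.concat(["Enum"])
apply(mod, :sum, [[1, 2, 3]])
# => 6
```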

2 Likes

You’ve got me there. Good call.

Separating into individual packages would be the cleanest, as users can pick and choose just how much of the Supabase stack they need, and minimise their compile times as well.

How you publish to users need not dictate how you organize your repo. For example, you could organize it as an umbrella app where you have many small packages that are published separately; that way you build up the dependency tree easily while keeping configs etc. all together.
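As a sketch of that setup (app names assumed from this thread): each umbrella child can be published to Hex on its own, while depending on the parent via an in-umbrella dependency during development:

```elixir
# apps/supabase_storage/mix.exs – a possible child app of the umbrella
defp deps do
  [
    # resolved from the sibling app locally, from Hex when published
    {:supabase_potion, in_umbrella: true}
  ]
end
```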

On a side note, I’m a supabase dev and would be available to help you review code or with things like placing it under the community org.

2 Likes

Thank you folks for the opinions and suggestions! I will go with splitting into different packages (maybe using an umbrella?), as Supabase GoTrue and Supabase UI will both use Phoenix/Plug/LiveView dependencies.

Thank you so much for your help! I’m now part of the SupaSquad and I’m very excited about this opportunity! And yeah, it would be awesome to move the supabase potion packages to the official organization – that way we can easily review the code and manage issues!

Please, contact me on zoey.spessanha@zeetech.io email address or inside github: zoedsoupe