We have written a custom code-generator Mix task. When working with external APIs, we found ourselves writing the same set of provider-specific modules every time a new integration was added.
The generator is simple/naive in its implementation, but it saves hours of repetitive copying, pasting and adjusting. It spits out the modules alongside some unit tests and prints out a to-do list of tasks that need to be performed manually.
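Not the actual generator, but a minimal sketch of what such a task can look like (the module names, file paths and to-do items here are made up for illustration):

```elixir
defmodule Mix.Tasks.Gen.Integration do
  @moduledoc "Generates the boilerplate modules for a new API provider integration."
  use Mix.Task

  @shortdoc "Generates provider-specific integration modules"

  @impl Mix.Task
  def run([provider_name]) do
    module = Macro.camelize(provider_name)
    path = "lib/my_app/integrations/#{Macro.underscore(provider_name)}.ex"

    contents = """
    defmodule MyApp.Integrations.#{module} do
      # TODO: implement the provider-specific callbacks
      @behaviour MyApp.Integration
    end
    """

    # Mix.Generator handles "file exists, overwrite?" prompts for us.
    Mix.Generator.create_file(path, contents)

    # Print the manual follow-up steps.
    Mix.shell().info("""
    Next steps:
      1. Add #{module} to the integrations list in config/config.exs
      2. Fill in the generated @behaviour callbacks
      3. Run the generated tests: mix test test/my_app/integrations
    """)
  end
end
```

A real version would also generate the matching test file, but the shape is the same: build strings, write files, print the to-do list.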
The task itself was maybe half a day’s work and has paid for that time investment many times over by now. (Refactoring the code base to a point where it’s extensible in a way that is so straightforward that a codegen is even an option was naturally a bit more involved, but that is a different story.)
Custom tasks are also a good way to formalize data migrations that should run as part of a multi-step migration strategy when changing DB and Ecto schema structures in large database projects. They let you plan for, check, execute, and test data changes at every step. Once the migrations succeed, the tasks can be moved to a history branch for posterity.
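As a hypothetical sketch of that check/execute/verify shape (table, column and module names invented here, raw SQL used to keep the example short):

```elixir
defmodule Mix.Tasks.Data.BackfillSlugs do
  @moduledoc "One-off data migration: backfills posts.slug after the schema migration."
  use Mix.Task

  @shortdoc "Backfills the posts.slug column"

  @impl Mix.Task
  def run(_args) do
    # Start the app so the Repo is running.
    Mix.Task.run("app.start")

    # Check: how many rows still need the backfill?
    %{rows: [[missing]]} =
      MyApp.Repo.query!("SELECT count(*) FROM posts WHERE slug IS NULL")

    Mix.shell().info("#{missing} rows to backfill")

    # Execute: derive slugs from titles in a single UPDATE.
    %{num_rows: updated} =
      MyApp.Repo.query!("UPDATE posts SET slug = lower(replace(title, ' ', '-')) WHERE slug IS NULL")

    # Verify: nothing should be left over; the match crashes the task otherwise.
    %{rows: [[0]]} = MyApp.Repo.query!("SELECT count(*) FROM posts WHERE slug IS NULL")

    Mix.shell().info("Backfilled #{updated} rows")
  end
end
```

The pattern matches double as assertions: if the verify step doesn’t come back clean, the task crashes loudly instead of silently succeeding.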
I hadn’t even considered leveraging this for third-party integrations.
It seems easy to get bogged down early on with connecting the dots internally; I forget that the value of almost any non-trivial program comes from interaction with outside processes/APIs, and I need to grow my mindset to consider that potential from the beginning.
That sounds interesting, would you mind elaborating a bit on which steps a custom task might help with? I assume you mean “bundling” multiple migrate, update, and verify operations, or is it something else?
We also have a home-grown, so to speak, schema-based multi-tenancy lib that comes with custom mix tasks for handling schema creation/migration for configured tenants (plus the public schema). The actual logic lives in separate modules, so that it is callable both via mix in dev, e.g. mix app.migrate, and via the release when deploying, e.g. /app/release_binary eval 'ReleaseTasks.migrate()'.
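The dual entry point can be sketched roughly like this (module names assumed for illustration; the migration logic lives in one place, and the Mix task just delegates to it):

```elixir
defmodule MyApp.ReleaseTasks do
  @app :my_app

  # Callable from a release via: /app/release_binary eval 'MyApp.ReleaseTasks.migrate()'
  def migrate do
    Application.load(@app)

    for repo <- Application.fetch_env!(@app, :ecto_repos) do
      # with_repo/2 starts the repo just long enough to run the migrations.
      {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end
  end
end

defmodule Mix.Tasks.App.Migrate do
  use Mix.Task

  @shortdoc "Runs migrations (delegates to MyApp.ReleaseTasks)"

  @impl Mix.Task
  def run(_args) do
    # Load config without starting the whole app, then reuse the shared logic.
    Mix.Task.run("app.config")
    MyApp.ReleaseTasks.migrate()
  end
end
```

Keeping the logic out of the task module is what makes it usable from `eval` in a release, where Mix itself isn’t available.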
The lib itself is pretty brute-force I’d say, essentially wrapping all the relevant Ecto.Repo functions, turning the :prefix option into a required parameter, but it works really well for our needs with a minimal amount of magic. Nowadays we could probably use one of the available multi-tenancy solutions, but I think it predates those and we haven’t had the need to switch yet.
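The wrapping approach described above might look something like this (a minimal sketch, assuming a MyApp.Repo; only a few of the functions shown):

```elixir
defmodule MyApp.TenantRepo do
  @moduledoc "Wraps MyApp.Repo, turning the :prefix option into a required tenant argument."
  alias MyApp.Repo

  def all(tenant, queryable, opts \\ []) when is_binary(tenant) do
    Repo.all(queryable, Keyword.put(opts, :prefix, tenant))
  end

  def get(tenant, queryable, id, opts \\ []) when is_binary(tenant) do
    Repo.get(queryable, id, Keyword.put(opts, :prefix, tenant))
  end

  def insert(tenant, changeset, opts \\ []) when is_binary(tenant) do
    Repo.insert(changeset, Keyword.put(opts, :prefix, tenant))
  end

  # ...and so on for the remaining Ecto.Repo functions the app actually uses.
end
```

Brute-force indeed, but the guard makes it impossible to forget the tenant at a call site, which is the whole point.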
As for the potential for an article… Hopefully we’ll get to that sometime. We’d need to make up a messy, complex-enough use case to walk through. In a nutshell: use the language constructs available - protocols, behaviours (with default, selectively overridable implementations), macros where you need them. In some cases we resorted to just generating function/test clauses in a for comprehension, iterating over some list from the app’s configuration.
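The clause-generation trick is standard Elixir, so here’s a tiny self-contained sketch of the idea (config key and module names are made up):

```elixir
defmodule MyApp.ProviderDispatch do
  # Read the provider list at compile time; the default here stands in for
  # whatever the app's real configuration would provide.
  @providers Application.compile_env(:my_app, :providers,
               stripe: MyApp.Integrations.Stripe,
               braintree: MyApp.Integrations.Braintree
             )

  # One function clause per configured provider, generated at compile time.
  for {name, module} <- @providers do
    def client(unquote(name)), do: unquote(module)
  end

  def client(other), do: raise(ArgumentError, "unknown provider: #{inspect(other)}")
end
```

The same comprehension works inside an ExUnit module to generate one `test` block per provider, which keeps the suite in lockstep with the configuration.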
As a general theme, I’m very happy with the flexibility working with Elixir/Mix affords. Building entirely custom solutions as needed almost never feels hacky, because you’re working with the tools/frameworks, not against them, if that makes sense.