The Scalability of Macros?

People much smarter than I am value the macro concept, and even from my own shallow understanding I appreciate what I can see in Phoenix and other libraries like Absinthe.

However, I recently had a simple problem using this macro,

All I needed was to share it amongst tests. The tests would sometimes work with it in a shared library function file, sometimes not. Frustrating.

In the end I switched to the function,


and everything was good.

When I have a function-first API I can do well-defined things. With macros, things often seem harder to compose together and scale.

I am making extensive use of Absinthe, and suspect that with a little more time, thought, and fewer macros I could abstract my schema & type code more efficiently too…

Certainly macros have their place, but they seem to choose a level of abstraction a priori, which doesn’t always fit well (especially in larger projects).


This is a good question, and I think there’s a fair bit that could be said here. For the moment though I just want to elaborate a bit on Absinthe’s use of macros.

Whereas some macros get used like functions, the Absinthe schema macros exist largely to build data structures in a compile-time-optimized format, where doing so manually would be extremely verbose. We do actually have a set of plans for refactoring schema creation into a more data-driven approach, but that data is generally not the kind you’d want to write out by hand. There is also a sort of supplemental spec that has developed for writing GraphQL schemas as strings, which lets you write them like:

type User {
  id: ID!
  name: String
}

We can already parse this into the internal structures that Absinthe works on, called Blueprints, but there isn’t a way to build an actual schema with it yet. The goal, then, is to have BOTH strings like this and the schema macros build blueprint structures, which are then just ordinary data you can manipulate at will. Ultimately, however, no matter how that structure gets built, we need to compile it into an actual Elixir module, because that gets us incredibly good key-value lookup characteristics.
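For comparison, the macro form of roughly the same schema might look like this (a sketch assuming Absinthe as a dependency; the module, field, and resolver names are illustrative, not from the thread):

```elixir
defmodule MyApp.Schema do
  use Absinthe.Schema

  # The object/field macros are shorthand for building the schema's
  # blueprint data structure, mirroring the SDL string above.
  object :user do
    field :id, non_null(:id)
    field :name, :string
  end

  query do
    field :me, :user do
      # Hypothetical resolver returning static data for illustration.
      resolve fn _parent, _args, _resolution -> {:ok, %{id: "1", name: "Ada"}} end
    end
  end
end
```

Either way the result gets compiled into an Elixir module, which is what provides the fast lookups described above.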

Long story short: yes, the mechanics of macros within Absinthe do have some limitations, and we hope that some of the changes we’re planning will help with them. On the other hand, because the Absinthe macros are essentially data-structure shorthand (rather than function-like), we’ve really seen very few issues crop up over the last few years.


The same can be said for Plug’s router DSL, which creates incredibly optimized request dispatching, or the Ecto schema DSL, whose usage enables the incredibly slick Ecto query DSL, which lets Ecto generate your queries at compile time instead of runtime.
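As a sketch of the Plug case (module name and routes hypothetical), each route macro expands into a function clause, so request dispatching compiles down to ordinary pattern matching:

```elixir
defmodule MyApp.Router do
  use Plug.Router

  plug :match
  plug :dispatch

  # get/2 compiles to a match clause on method and path segments,
  # so the dispatch decision is made via pattern matching, not a
  # runtime route-table scan.
  get "/hello" do
    send_resp(conn, 200, "world")
  end

  match _ do
    send_resp(conn, 404, "not found")
  end
end
```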

I find DSLs are overused and misused in many languages, but the places where they are best used in Elixir tend to follow this pattern of describing very complicated things in elegant fashions so that they can be converted into very useful compile-time forms that can fully leverage the power of pattern matching and issue important warnings earlier than runtime variants would.

__using__ macros are another good example of a macro use case that consistently works well, and it’s worth noting that they too are never intended to be interfaced with like a chainable function.

Perhaps that’s a good heuristic for when a macro isn’t really necessary. Macros that are intended to be called more like standard functions, like the Phoenix testing ones above, often just serve to reduce boilerplate in ways that can’t quite be accomplished normally. I think that’s a laudable goal, but even though I have never experienced the issues mentioned here with them, I often find myself preferring straightforward functional variants over macro helpers. What’s important in these use cases, I think, is that both paths are viable, as dispatch proved to be.


v good pt!


Another really cool example of function-esque macros doing unique compile-time work is Logger’s debug, warn, info, and error macros (combined with the :compile_time_purge_level option).

You can get the same effect just by calling the log(level, msg) function, but when you have configured your app to emit logs only at higher levels, the lower-level macro calls are compiled away entirely, instead of merely no-oping. This is a glorious boon for log-heavy or performance-sensitive applications, especially when your log message is non-trivial to generate, sits in a tight inner loop, or is dynamic or large enough to have poor garbage-collection characteristics.
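A minimal sketch of the difference, assuming your config sets the :compile_time_purge_level mentioned above to :info:

```elixir
require Logger

# Logger.debug/1 is a macro. With compile-time purging configured,
# this entire call (including the lazy message function) is removed
# from the compiled module -- the message is never built at all.
Logger.debug(fn -> "expensive report: #{inspect(Enum.to_list(1..10))}" end)

# The plain function Logger.log/2 cannot be purged at compile time;
# below the configured level it merely no-ops at runtime, after its
# arguments have already been evaluated.
Logger.log(:debug, "evaluated at runtime either way")
```

Passing a zero-arity function to the macro, as above, additionally defers building the message string until Logger decides it will actually be emitted.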


This is a very old thread, but I was thinking about how a more data-driven DSL can be more flexible, indeed more powerful, despite some loss of “slickness” in the API.

I’m not doing Elixir full-time these days, but when I play with some random exercises, I sometimes feel that certain APIs would be much more powerful if they were built with this in mind. An interesting example is what the Elixir guides state as a best practice:

# 1. data structures
import Validator
validate user, name: [length: 1..100],
               email: [matches: ~r/@/]

# 2. functions
import Validator
user
|> validate_length(:name, 1..100)
|> validate_matches(:email, ~r/@/)

# 3. macros + modules
defmodule MyValidator do
  use Validator
  validate_length :name, 1..100
  validate_matches :email, ~r/@/
end


I’m doing Clojure these days, and in recent years there has been a movement to avoid heavy usage of macros, which allows the kind of data DSLs you can find in Malli:

(def Address
  [:map
   [:id string?]
   [:tags [:set keyword?]]
   [:address
    [:map
     [:street string?]
     [:city string?]
     [:zip {:default 33100} int?]
     [:lonlat [:tuple double? double?]]]]])

One great example of a data-oriented API in Elixir is Open API Spex.

The way an operation is defined is a great example:

operation :update,
  summary: "Update user",
  parameters: [
    id: [in: :path, description: "User ID", type: :integer, example: 1001]
  ],
  request_body: {"User params", "application/json", UserParams},
  responses: [
    ok: {"User response", "application/json", UserResponse}
  ]

But it seems that this is not yet a widespread practice in the Elixir community.

I think the Ash framework works this way (from what I understand of conversations in its Discord server).

You define your data with a macro DSL and then the framework leverages it to build a lot of stuff. Whenever you need to access the underlying data generated by Ash, you can use a set of introspection functions, like attribute/1. This is similar to how Ecto lets you introspect your schema, except that in Ecto you have to write your validation functions yourself, while Ash can leverage the data you provide to validate things automatically.
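For the Ecto side of that comparison, introspection goes through the __schema__/1 reflection function that the schema macro generates (a sketch assuming Ecto as a dependency; the module and fields are made up):

```elixir
defmodule MyApp.User do
  use Ecto.Schema

  schema "users" do
    field :name, :string
    field :email, :string
  end
end

# The schema macro compiles the declaration into data you can read back;
# Ecto adds an :id primary key by default, so this returns
# [:id, :name, :email].
MyApp.User.__schema__(:fields)
```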
