On overriding built-ins and preventing conflicts

Hello everyone to this week’s edition of ‘metaprogramming with @qqwy’.
This week I was thinking a lot about libraries that override built-in functions or macros.

This is a relatively common technique that quite a few libraries use. Two common examples are:

  • Overriding def, defp, or defmodule is common among libraries that want to enhance function definitions in some way.
  • Overriding built-in operators like +/2, -/2, or |>/2 to extend the kinds of data structures these operators accept.

However, there is a glaring problem with this technique: a library containing a definition like this:

def a + b do
  if is_fancy(a) or is_fancy(b) do
    my_custom_logic(a, b)
  else
    Kernel.+(a, b)
  end
end
can never be used together with another library that wants to enhance the same function, macro, or operator.

The problem here is that we are blindly falling back to Kernel, meaning that we bypass any other libraries that might be in scope.
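To make the failure concrete, here is a hedged sketch of two such libraries (LibA, LibB, and their behaviours are made-up names for illustration) and what happens when a user tries to combine them. The last module intentionally fails to compile, which is the point:

```elixir
# Hypothetical libraries; neither knows about the other.
defmodule LibA do
  import Kernel, except: [+: 2]

  defmacro __using__(_) do
    quote do
      import Kernel, except: [+: 2]
      import LibA, only: [+: 2]
    end
  end

  # Concatenate binaries; otherwise blindly fall back to Kernel.
  def a + b when is_binary(a) and is_binary(b), do: a <> b
  def a + b, do: Kernel.+(a, b)
end

defmodule LibB do
  import Kernel, except: [+: 2]

  defmacro __using__(_) do
    quote do
      import Kernel, except: [+: 2]
      import LibB, only: [+: 2]
    end
  end

  # Merge maps; otherwise blindly fall back to Kernel.
  def a + b when is_map(a) and is_map(b), do: Map.merge(a, b)
  def a + b, do: Kernel.+(a, b)
end

defmodule Broken do
  use LibA
  use LibB

  # +/2 is now imported from both LibA and LibB, so this call is
  # ambiguous and the module does not compile. And even if LibB's
  # __using__ also hid LibA's import, LibA's behaviour would be
  # silently lost, because LibB falls back straight to Kernel.
  def run, do: "a" + "b"
end
```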

How can we fix this? Great question!

One idea I have is described in this gist. Summarized:

  1. Library modules that want to override a function or macro contain a ‘default implementation’ that looks up which earlier implementation was in scope and calls that one. This default implementation can be injected automatically and annotated with defoverridable, allowing a library implementer to simply call super(...) whenever they want to fall back to the implementation that was in scope before.
  2. Library modules add a snippet to their own __using__ macro that hides the conflicting implementations currently in scope, and registers the now-hidden implementation as the one to call whenever a fallback is triggered.
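A hand-rolled approximation of step 1 (the module name FancyPlus and the :fallback option are mine, not the gist's; the gist automates the scope discovery, while here the fallback is passed explicitly at `use` time):

```elixir
defmodule FancyPlus do
  defmacro __using__(opts) do
    fallback = Keyword.get(opts, :fallback, Kernel)

    quote do
      import Kernel, except: [+: 2]
      import FancyPlus, only: [+: 2]
      # Remember which +/2 implementation was "in scope" before us.
      @plus_fallback unquote(fallback)
    end
  end

  defmacro left + right do
    quote do
      l = unquote(left)
      r = unquote(right)

      if is_binary(l) and is_binary(r) do
        l <> r                             # the enhanced behaviour
      else
        apply(@plus_fallback, :+, [l, r])  # chain, don't jump to Kernel
      end
    end
  end
end

defmodule Demo do
  use FancyPlus   # fallback defaults to Kernel.+/2

  def add(a, b), do: a + b
end

Demo.add("foo", "bar")  # => "foobar"
Demo.add(1, 2)          # => 3
```

The design point is that the fallback target becomes data rather than a hardcoded Kernel call, so a second overriding library could name the first one as its fallback instead of silently bypassing it.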

In the end, from the user's perspective, the result looks like this:

defmodule Example do
  use OverrideExample1
  use OverrideExample2
  @a 1
  @b 2
end

Here, both OverrideExample1 and OverrideExample2 have overridden the @ operator macro. Since they use SafeOverride, when the @ macro is called inside Example at compile-time, OverrideExample2's implementation is used, which will fall back to OverrideExample1's implementation, which will fall back to the Kernel implementation.

Of course, this is just a single idea.
I’d love to talk about this and hear your opinions! :smiley:


This is awesome.

I would like to illustrate this challenge with a library that I recently developed: ex_debugger. It hijacks the def/defp macros in one's code base in order to annotate the AST at various junctures (the start/end of a definition, as well as branches in case, if, cond expressions, etc.) with debugging expressions that can be toggled on/off.
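This is not ex_debugger's actual code, only my bare-bones sketch of the general def-hijacking technique it relies on (MiniDebugger is a made-up name, and the sketch ignores guards, multi-clause heads, and the deeper annotation of case/if/cond branches):

```elixir
defmodule MiniDebugger do
  defmacro __using__(_) do
    quote do
      import Kernel, except: [def: 2]
      import MiniDebugger, only: [def: 2]
    end
  end

  # Wrap every function body with an annotation, then hand the
  # rewritten definition over to the real Kernel.def/2.
  defmacro def(head, do: body) do
    {name, _meta, _args} = head

    quote do
      Kernel.def unquote(head) do
        IO.puts("entering #{unquote(name)}")
        unquote(body)
      end
    end
  end
end

defmodule Sample do
  use MiniDebugger

  def double(x), do: x * 2
end

Sample.double(21)  # prints "entering double", returns 42
```

Because this library also falls back straight to Kernel.def, it has exactly the composition problem discussed above.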

I do not see how I could leverage either of the two suggested techniques to avoid the problem highlighted in this discussion, which in turn reduces the value that a library like mine can offer.

I believe that addressing this issue requires an additional feature in the language, and I would love to hear from the community what would make sense. One potential proposal: introduce defhook/defphook(def_heading_ast, def_do_block_ast), each returning a tuple {updated_def_heading_ast, updated_def_do_block_ast} that the standard library then uses as input for the actual def/defp macros. When multiple libraries implement defhook/defphook, the expected behaviour would be that they queue up, so that the output of one hook constitutes the input of the next.
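defhook/defphook do not exist; purely to illustrate the intended queueing semantics, here is a sketch that models hooks as plain functions from {head_ast, body_ast} to {head_ast, body_ast} and folds them in order (the hook bodies are made up):

```elixir
# Each "hook" rewrites the pair {head_ast, body_ast}. Stacking libraries
# amounts to folding their hooks, so one hook's output becomes the next
# hook's input before the real def/defp runs.
debug_hook = fn {head, body} ->
  {head, quote(do: (IO.puts("enter"); unquote(body)))}
end

trace_hook = fn {head, body} ->
  {head, quote(do: (IO.puts("trace"); unquote(body)))}
end

initial = {quote(do: double(x)), quote(do: x * 2)}

{_head, body} =
  Enum.reduce([debug_hook, trace_hook], initial, fn hook, acc ->
    hook.(acc)
  end)

# Inspect the accumulated rewrite; the annotations nest in hook order.
IO.puts(Macro.to_string(body))
```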


I am still hoping to hear more opinions on this matter, because I believe that this is an important problem that requires a solution :slight_smile:
