How to write Elixir code in a way that makes it easy to refactor, without emulating a type system with unit tests?

So I thought a little bit about something I wrote earlier in this topic: If you could change one thing in Elixir language, what you would change?

TL;DR: I complained about the refactorability of Elixir compared to statically typed languages.

I am a huge fan of the Elm compiler. It helps me figure out the correct data model for my problem domain by allowing me to change everything without fear of introducing hard-to-debug edge cases.

This is what I’m looking for in this question: how to write Elixir code in a way that makes it easy to refactor, without emulating a type system with unit tests?

I’d like to stay away (for now) from compile to BEAM languages, like Alpaca, Gleam, Purerl and Elchemy, since none of them feel ready to me. Also I love the Elixir ecosystem and would really like to keep it as is.

End of prologue.

I know about typespecs. But they feel a little like TypeScript: as soon as you start to interact with the outside world, it kind of falls apart. Also they don’t provide any protection at runtime and don’t enforce totality (meaning: you have to implement each and every possible code path).
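To illustrate what I mean by "no protection at runtime", here is a minimal sketch (module and function names are made up for the example): the spec documents the contract for Dialyzer, but nothing checks it when the code actually runs.

```elixir
defmodule Prices do
  # Dialyzer can check calls against this spec statically...
  @spec total([integer()]) :: integer()
  def total(prices), do: Enum.sum(prices)
end

# ...but at runtime the spec offers no protection:
# Prices.total([1, "2"]) compiles fine and only blows up inside
# Enum.sum/1 with an ArithmeticError, far from the actual mistake.
```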

So how are you solving this problem? Do you generate guard clauses from your typespecs? Test your code into oblivion? Or ignore it, since you feel the added initial productivity of a dynamic language is worth the maintainability issues?

I’m specifically interested in your opinion @OvermindDL1 :slight_smile:

(P. s. this is meant to be a discussion thread on how to improve refactorability in Elixir code, in the hope that we can extract common best practices of the community to be helpful for others, so feel free to share how you are doing it)


have you tried dialyzer? :slightly_smiling_face:


I’m in a similar situation/mindset, in that I’m coming from Elm and a type-driven-development approach. And I’m similarly keeping an eye on Gleam, etc.

I have a 6,000+ line Elixir codebase that is under heavy development. I spent today introducing Chris Keathley’s Norm library, which is influenced by the Eiffel language’s design-by-contract philosophy.

Norm won’t give me the same developer experience as Elm, but I think it will help me find errors more quickly and more explicitly. It’s certainly helping me think more clearly about the shape of my data. (I’m not using Ecto, just transforming data in a pipeline.)

It’s too soon for me to have a strong opinion on Norm. I’m still finding my own coding style when using the library. It’s really making me miss Elm’s “Maybe”, or rather it’s forcing me to be explicit about what data can be nil. For example, when expecting a list from a third party, I’m now explicitly transforming nil to an empty list to keep my Norm specs simpler.
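Roughly how that nil-to-empty-list normalization might look with Norm (module and spec names are illustrative, not from my actual codebase, and this assumes the :norm dependency):

```elixir
defmodule Tags do
  import Norm

  # A list of tag strings expected from a third party.
  def tags_spec, do: coll_of(spec(is_binary()))

  # Normalize nil to [] before conforming, keeping the spec simple.
  def normalize(nil), do: []
  def normalize(tags) when is_list(tags), do: conform!(tags, tags_spec())
end
```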


Gradualizer is coming, should help it along! :grin:


Gradualizer looks indeed promising but is also not there yet, correct?

Yes, that’s what I tried to imply with typespecs - seems like I have to refactor the post :wink: . The gripe I have with them is that it’s very easy to put any() anywhere (just like in TypeScript) and that you can simply ignore the tool completely if you don’t care. This proves to be problematic in multi-member projects, in my experience.

I’m currently thinking that it’s probably not sensible to bolt a type system onto a fundamentally dynamic language like Elixir. In a past job I tried that in Clojure and it was not worth it at all.

I thought that it could be interesting to generate guarded functions based on typespecs. So that you have your “happy path” function (which will always only be invoked with the correct parameters) and of course a “bad path” function that handles wrong input. :thinking:

So, a little like an enforced Either based on the typespecs.
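A hypothetical sketch of what such a generated pair of clauses could look like (hand-written here; the function name and the guard are made up):

```elixir
defmodule Accounts do
  @spec find_user(pos_integer()) :: {:ok, map()} | {:error, term()}

  # "Happy path": only ever runs with parameters matching the spec.
  def find_user(id) when is_integer(id) and id > 0 do
    {:ok, %{id: id}}  # placeholder for the real lookup
  end

  # "Bad path": handles wrong input, a little like an enforced Either.
  def find_user(other), do: {:error, {:invalid_id, other}}
end
```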

Not if you have underspecs on :+1: I’ve just put up a gist with a few practical errors Dialyzer can spot for you.

The Dialyzer & Elixir experience is getting better over time (especially thanks to dialyxir), but there is still a lot of room for improvement for sure.


It’s already usable for a lot of tasks actually. :slightly_smiling_face:


In my experience, adding a strict typing system to an environment where your co-developers don’t care about quality only results in messy code that typechecks.


Not sure how well this answers the original question, but I’ve gradually adopted my own guards plus typespecs, opting for any() when Dialyzer proves too stubborn.

defguard is_opts(x) when is_list(x)
defguard is_id(x) when is_integer(x)

And just use those in my modules. Furthermore, I am trying my best to enforce those during PR reviews.
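For instance, a minimal made-up module using those guards (the guard definitions are repeated here so the sketch is self-contained):

```elixir
defmodule MyGuards do
  defguard is_opts(x) when is_list(x)
  defguard is_id(x) when is_integer(x)
end

defmodule Users do
  import MyGuards

  # The custom guards document intent in every function head and
  # fail fast on badly shaped arguments.
  def fetch(id, opts \\ []) when is_id(id) and is_opts(opts) do
    {:ok, {id, opts}}  # placeholder for the real work
  end
end
```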

IMO Erlang/Elixir are too far behind real static typing, and trying to bolt it on results in a lot of headache. Using pattern matching and guards in the function heads has so far served me well enough in terms of invested effort vs. errors caught before production data gets corrupted.


N. b. I’m still trying to figure this out :smiley:

@asianfilm have you formed an opinion on Norm in the meantime?

Hi @mmmrrr

Norm helped me gain confidence in the Elixir code I was writing, but it was a lot of extra work. I also introduced the Witchcraft suite for Haskell-y “dark magic” (Maybe, semigroups and functors), finding ways to use it alongside Norm. My code got a lot more expressive, but I also felt I was hitting the limits of my understanding of functional languages.

So I took a break, studying Haskell full-time for two months, before returning to Elm. And, after a year away from it, I returned a much stronger Elm programmer. Much of the data massaging I was doing in Elixir I’m now doing in a headless Elm REPL. I’m appreciating the static type checking and excellent type-safe libraries like elm-graphql.

I’ll probably return to Elixir in the coming weeks. I’m not sure if I’m ready yet to go full-Haskell on the backend, and I still want to use Phoenix for Presence, OTP supervisors, etc. I’ll keep using Norm, but I’ll be limiting the amount of data processing I do in Elixir to be less dependent on it. Same for Witchcraft and its Haskell “fan-fiction”.

In summary, yes, I recommend Norm. I found it flexible, even in the library’s early days. I haven’t been following its development closely in recent months, so I’m curious to see how it’s matured. But the developer knows what he’s doing and, judging from his podcasts, is fully aware of Elixir’s strengths and weaknesses.


Erlang and Elixir are “compile to BEAM languages”.

Please insert the word “experimental” to get the correct meaning :smiley:

@asianfilm thanks for your field report regarding Norm and Witchcraft! :heart:


What do you mean by that?

FWIW I’ve been using Elixir in production for almost 3 years and I can’t remember having a type-related bug. We do write automated tests for the functionalities of our apps, without thinking about types. Do Elm programmers avoid automated tests entirely?

To me this doesn’t seem to be an issue. Getting supervision trees right or avoiding race conditions are the challenges I’ve been facing, not typing issues. Maybe I don’t know what I’m missing.


I was trying to figure out how to articulate exactly that.

I think often it’s a problem people think they are going to have, but in practice it comes up less often than one would imagine.


That is related to the original post:

I’d like to stay away (for now) from [experimental] compile to BEAM languages, like Alpaca, Gleam, Purerl and Elchemy, since none of them feel ready to me.


Both Elm and Elixir are known for “developer happiness” but they achieve that in different ways. I use both (hopefully for the right problems), but when I do so I’m wearing different hats with, I suspect, different brainwave patterns if my headwear of the day could measure them.

With Elm, it’s true that “type-driven development” is probably practiced as much as (or more than) test-driven development. But testing is a first-class citizen in Elm, with excellent testing libraries, and the fact that functions are always pure (without side effects) makes code particularly easy to test.

What do I mean by type-driven development?

The type system of Elm is actually quite limited when compared to Haskell (which has its own limitations, and has influenced other functional languages with more descriptive type systems) but it’s more expressive than Elixir. Like Elixir, programming in it is very much about transforming data by pipelining it through functions.

When programming in Elixir, one is primarily modeling data with product types (structs or “records”). In Elm, equally important (and complementary) are sum types (aka tagged unions). Algebraic data types in short. One can introduce sum types in Elixir with third-party libraries such as the “Haskell fan-fiction” Witchcraft suite.
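(For what it’s worth, plain Elixir can approximate sum types with tagged tuples and pattern matching; what’s missing compared to Elm is compiler-enforced exhaustiveness. A minimal sketch:)

```elixir
defmodule Shape do
  @type t :: {:circle, number()} | {:rect, number(), number()}

  def area({:circle, r}), do: :math.pi() * r * r
  def area({:rect, w, h}), do: w * h
  # If a new variant is added to the type, nothing forces this function
  # to handle it; a missed case only surfaces as a FunctionClauseError
  # at runtime, not as a compile error as it would in Elm.
end
```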

With this descriptive power, Elm programmers put a lot of focus on getting the types right (not only of data, but also of functions), of “making impossible states impossible”, and then trusting the compiler to guide them. And because of this type-driven safety net, one can do wild refactoring in confidence, even without tests.

While there are (gaping) holes in the theory, there is some truth in the saying “if it compiles, it works”. When programming in Elm (and Haskell), you juggle less mental baggage in your head because the compiler - and its type inference engine - has your back: the contracts in your code are enforced.

As such, it’s common in the life of an Elm program for one to make radical changes to one’s types (and module structure), and to be able to confidently fix all dependent code by just following the type errors. And the Elm compiler has excellent error messages that help pinpoint the where, why and what of changes that need to be made.

I don’t need to explain on this forum the real joy and multiple sources of “developer happiness” when reading and writing Elixir code. But too many of my brain cells are doing the work that I would expect a compiler to do for me. It sometimes feels that only half my brain can focus on the problem at hand, while the other half is unnecessarily juggling ten balls.

Or, in another analogy, when writing a particularly gnarly piece of data massaging in Elixir, it can feel like a SWAT team is loudly breaking through the door while I need a little bit of quiet, please, to break into the safe. In Elm, that SWAT team is working for me, keeping people out, and has hired a talented pianist to play Debussy.

So, yes, I’ve tried to make type-driven development more of a thing for me in Elixir with Norm (although contracts are only enforced at run time) and with the Witchcraft suite (which opens up even more type-power than Elm offers), but it brings with it a lot of boilerplate, and without a stricter compiler one can still feel that SWAT team breathing down one’s neck.

In Elixir, I feel I did get to write more expressive and more ambitious code when using the safety net of Norm and the expressive power of Witchcraft. Yes, maybe that suggests that I’m not in the top 1% (or even 10%) of Elixir programmers, but the point is that I don’t need to be to make progress. A test-driven approach may have resulted in good code, but it would have been a different developer experience, an off-ramp from the highway of developer happiness in Elixir.

On a podcast, Dave Thomas recently discussed his lament that the Elixir programming language is considered “finished”, because it isn’t exploring possible futures (and reimagining of choices it made). He explored what Elixir might look like if it took more inspiration from other functional languages (and how that would empower Phoenix pipelines, etc). José has given his response on this forum to some of Dave’s arguments.

But getting back to some of the original questions above: yes, I believe the use of Norm (and Witchcraft) can make one a better Elixir programmer, and make one’s code more expressive and better documented. But they are limited by decisions made in the language design (in part because of Erlang and the BEAM), which is not to say that those decisions are wrong. (See arguments that Elixir is better described as a “concurrent programming language” than as a “functional programming language”.)


Actually, quite the opposite: with a decent type system, like for example Elm’s or even Rust’s, you’re writing a ton of automated tests in the form of type annotations.

The additional unit tests you’re writing are hence seldom concerned with the shape of the data.

Actually, in my experience at least, this is a problem you’re facing on a daily basis:

  • The vendor API changed? Now I’ll have to refactor each function that uses the response of that endpoint.
  • We need to display more data in this table? Ok I’ll better check that I don’t access this possibly empty thing at runtime.
  • You realize halfway through your prototype that you’d have been better off using a Set instead of a List? Start looking for each function that accesses this data.

Of course: unit tests help with this since they might crash due to wrongly shaped data, but a really good type system will simply be more thorough and point you to each and every function that is not following the spec.

N. b. I also felt a static type checker was more of a burden when I transitioned from C, C++ and Java to Ruby and JavaScript where I felt liberated. But that is because in the aforementioned languages the type checker will actually not prevent a lot of type errors (NULL pointer exception anyone? :smile:) and you might not realise that the problems you’re facing are, indeed, type problems.


I think my own take on this problem when writing Elixir will be the following:

  1. At the edges of the application I’ll use tons of guard clauses and pattern matching to ensure that no garbage data makes it into the system. This should be accompanied by fuzz tests.
  2. Inside the application (i. e. everything under my direct control) I’ll use type specs and try to express data types as Structs as much as possible and then use Gradualixir to type check the application at the CI-level
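A rough sketch of point 1, validating at the boundary with pattern matching, guards and a struct (all names are illustrative):

```elixir
defmodule Order do
  @enforce_keys [:id, :items]
  defstruct [:id, :items]
end

defmodule ApiEdge do
  # Edge of the application: only well-shaped data gets past this point;
  # everything else is rejected as an error tuple.
  def parse(%{"id" => id, "items" => items})
      when is_integer(id) and is_list(items) do
    {:ok, %Order{id: id, items: items}}
  end

  def parse(other), do: {:error, {:invalid_payload, other}}
end
```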

Maybe Norm or Witchcraft (and its descendants) could help with point 1.

Thanks again for your opinions and experiences. It’s much appreciated!