Gradualizer looks indeed promising but is also not there yet, correct?
Yes, that’s what I tried to imply with typespecs - seems like I have to refactor the post. The gripe I have with it is that it’s very easy to put any() anywhere (just like in TypeScript) and that you can simply ignore the tool completely if you don’t care. This proves to be problematic in multi-member projects, in my experience.
I’m currently thinking that it’s probably not sensible to bolt on a typesystem onto a fundamentally dynamic language like Elixir. In a past job I tried that in Clojure and it was not worth it at all.
I thought it could be interesting to generate guarded functions based on typespecs, so that you have your “happy path” function (which will only ever be invoked with the correct parameters) and of course a “bad path” function that handles wrong input.
So, a little like an enforced Either based on the typespecs.
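To make the idea concrete, here is a hand-written sketch of what such generated code might look like (module and function names are invented; nothing generates this for you today):

```elixir
defmodule Accounts do
  @spec fetch_user(integer(), keyword()) :: {:ok, map()} | {:error, :invalid_args}

  # Happy path: this clause only fires when the arguments match the typespec,
  # so the body can assume well-shaped input.
  def fetch_user(id, opts) when is_integer(id) and is_list(opts) do
    {:ok, %{id: id, opts: opts}}
  end

  # Bad path: anything else becomes an explicit {:error, _} value instead of
  # a FunctionClauseError - the "enforced Either" flavor described above.
  def fetch_user(_id, _opts), do: {:error, :invalid_args}
end
```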
Not sure how well that answers the original question, but I gradually adopted my own guards plus typespecs, opting for any() when Dialyzer proves too stubborn:
defguard is_opts(x) when is_list(x)
defguard is_id(x) when is_integer(x)
And just use those in my modules. Furthermore, I am trying my best to enforce those during PR reviews.
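For illustration, those guards might be wired into a module roughly like this (module and function names are mine):

```elixir
defmodule MyGuards do
  # Custom guards are defined once and imported wherever needed.
  defguard is_opts(x) when is_list(x)
  defguard is_id(x) when is_integer(x)
end

defmodule Users do
  import MyGuards

  @spec update(integer(), keyword()) :: :ok
  # The function head only matches when both guards pass, so the body
  # is only ever reached with an integer id and a list of opts.
  def update(id, opts) when is_id(id) and is_opts(opts) do
    :ok
  end
end
```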
IMO Erlang/Elixir are far behind real static typing, and trying to bolt it on results in a lot of headaches. Using pattern matching and guards in function heads has so far served me well enough in terms of effort invested vs. errors caught before production data gets corrupted.
Norm helped me gain confidence in the Elixir code I was writing, but it was a lot of extra work. I also introduced the Witchcraft Suite for Haskell-y “dark magic” for Maybe, semigroups and functors, finding ways to use it alongside Norm. My code got a lot more expressive but I also felt I was hitting the limits of my understanding of functional languages.
So I took a break, studying Haskell for two months full-time, before returning to Elm. And, after a year away from it, I returned a much stronger Elm programmer. Much of the data massaging I was doing in Elixir I’m now doing in a headless Elm REPL. I’m appreciating the static type checking and excellent type-safe libraries like elm-graphql.
I’ll probably return to Elixir in the coming weeks. I’m not sure if I’m ready yet to go full-Haskell on the backend, and I still want to use Phoenix for Presence, OTP supervisors, etc. I’ll keep using Norm, but I’ll be limiting the amount of data processing I’m doing in Elixir to be less dependent on it. Same for Witchcraft and its Haskell “fan-fiction”.
In summary, yes, I recommend Norm. I found it flexible, even in the library’s early days. I haven’t been following its development closely in recent months, so I’m curious to see how it’s matured. But the developer knows what he’s doing and, judging from his podcasts, is fully aware of Elixir’s strengths and weaknesses.
FWIW I’ve been using Elixir in production for almost 3 years and I can’t remember having a type-related bug. We do write automated tests for the functionalities of our apps, without thinking about types. Do Elm programmers avoid automated tests entirely?
To me this doesn’t seem to be an issue. Getting supervision trees right or avoiding race conditions are the challenges I’ve been facing, not typing issues. Maybe I don’t know what I’m missing.
Both Elm and Elixir are known for “developer happiness” but they achieve that in different ways. I use both (hopefully for the right problems), but when I do so I’m wearing different hats with, I suspect, different brainwave patterns if my headwear of the day could measure them.
With Elm, it’s true that “type-driven development” is probably practiced as much as (or more than) test-driven development. But testing is a first-class citizen in Elm, with excellent testing libraries, and the fact that functions are always pure (without side effects) makes code particularly easy to test.
What do I mean by type-driven development?
The type system of Elm is actually quite limited when compared to Haskell’s (which has its own limitations, and has influenced other functional languages with more descriptive type systems), but it’s more expressive than Elixir’s. Like Elixir, programming in Elm is very much about transforming data by pipelining it through functions.
When programming in Elixir, one is primarily modeling data with product types (structs or “records”). In Elm, equally important (and complementary) are sum types (aka tagged unions). Algebraic data types in short. One can introduce sum types in Elixir with third-party libraries such as the “Haskell fan-fiction” Witchcraft suite.
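To make the product/sum distinction concrete: in plain Elixir (no libraries) a sum type is usually emulated with tagged tuples and pattern matching rather than expressed in the type system as in Elm. A rough sketch, with names I made up:

```elixir
defmodule Payments do
  # A struct is a product type: a payment always has *all* of these fields.
  defmodule Payment do
    defstruct [:amount, :currency]
  end

  # A sum type is emulated with tagged tuples: a state is *exactly one*
  # of these variants. Unlike in Elm, nothing stops a caller from
  # inventing a fourth variant - the compiler won't object.
  @type payment_state ::
          {:pending, DateTime.t()}
          | {:settled, %Payment{}}
          | {:failed, String.t()}

  def describe({:pending, _since}), do: "waiting"
  def describe({:settled, _payment}), do: "done"
  def describe({:failed, reason}), do: "failed: " <> reason
end
```

In Elm the compiler would force `describe` to handle every variant; in Elixir a missing clause only surfaces at runtime (or, sometimes, via Dialyzer).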
With this descriptive power, Elm programmers put a lot of focus on getting the types right (not only of data, but also of functions), on “making impossible states impossible”, and then trusting the compiler to guide them. And because of this type-driven safety net, one can do wild refactoring in confidence, even without tests.
While there are (gaping) holes in the theory, there is some truth in the saying “if it compiles, it works”. When programming in Elm (and Haskell), you juggle less mental baggage in your head because the compiler - and its type inference engine - has your back: the contracts in your code are enforced.
As such, it’s common in the life of an Elm program for one to make radical changes to one’s types (and module structure), and to be able to confidently fix all dependent code by just following the type errors. And the Elm compiler has excellent error messages that help pinpoint the where, why and what of changes that need to be made.
I don’t need to explain on this forum the real joy and multiple sources of “developer happiness” when reading and writing Elixir code. But too many of my brain cells are doing the work that I would expect a compiler to do for me. It sometimes feels that only half my brain can focus on the problem at hand, while the other half is unnecessarily juggling ten balls.
Or, in another analogy: when writing a particularly gnarly piece of data massaging in Elixir, it can feel like a SWAT team is loudly breaking through the door while I need a little bit of quiet, please, to break into the safe. In Elm, that SWAT team is working for me, keeping people out, and has hired a talented pianist to play Debussy.
So, yes, I’ve tried to make type-driven development more of a thing for me in Elixir with Norm (although contracts are only enforced at run time) and with the Witchcraft Suite (which opens up even more type-power than with Elm), but it brings with it a lot of boilerplate, and without a stricter compiler one can still feel that SWAT team breathing down one’s neck.
In Elixir, I feel I did get to write more expressive and more ambitious code when using the safety net of Norm and the expressive power of Witchcraft. Yes, maybe that suggests that I’m not in the 1% (or even 10%) of top-tier Elixir programmers, but the point is that I don’t need to be to make progress. A test-driven approach may have resulted in good code, but it would have been a different developer experience, one taking the off-ramp from the highway of developer happiness in Elixir.
On a podcast, Dave Thomas recently discussed his lament that the Elixir programming language is considered “finished”, because it isn’t exploring possible futures (and reimagining of choices it made). He explored what Elixir might look like if it took more inspiration from other functional languages (and how that would empower Phoenix pipelines, etc). José has given his response on this forum to some of Dave’s arguments.
But getting back to some of the original questions above: yes, I believe the use of Norm (and Witchcraft) can make one a better Elixir programmer, and make one’s code more expressive and better documented. But they are limited by decisions made in the language design (in part because of Erlang and the BEAM), which is not to say that those decisions are wrong. (See arguments that Elixir is better described as a “concurrent programming language” than as a “functional programming language”.)
Actually, quite the opposite: with a decent type system, like for example in Elm or even Rust, you’re writing a ton of automated tests in the form of type annotations.
The additional unit tests you’re writing are hence seldom concerned with the shape of the data.
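In Elixir terms, a typespec plays that role only as far as Dialyzer can check it. A small, hypothetical example of the kind of shape test a type annotation encodes:

```elixir
defmodule Report do
  # The spec asserts the shape of the data: a list of maps with an
  # integer :amount key. A caller passing [%{"amount" => 3}] (string
  # keys) violates it - a stricter type system rejects that call at
  # compile time; Dialyzer may flag it, depending on what it can infer.
  @spec total([%{amount: integer()}]) :: integer()
  def total(rows) do
    Enum.reduce(rows, 0, fn row, acc -> acc + row.amount end)
  end
end
```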
Actually, in my experience at least, this is a problem you’re facing on a daily basis:
The vendor API changed? Now I’ll have to refactor each function that uses the response of that endpoint.
We need to display more data in this table? Ok I’ll better check that I don’t access this possibly empty thing at runtime.
You realize halfway through your prototype that you should have used a Set instead of a List? Start looking for each function that accesses this data.
Of course: unit tests help with this since they might crash due to wrongly shaped data, but a really good type system will simply be more thorough and point you to each and every function that is not following the spec.
I think my own take on this problem when writing Elixir will be the following:
1. At the edges of the application I’ll use tons of guard clauses and pattern matching to ensure that no garbage data makes it into the system. This should be accompanied by fuzz tests.
2. Inside the application (i.e. everything under my direct control) I’ll use typespecs, try to express data types as structs as much as possible, and then use Gradualixir to type-check the application at the CI level.
Maybe Norm or Witchcraft (and its descendants) could help with point 1.
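A condensed sketch of points 1 and 2 together (module and function names are invented):

```elixir
defmodule Orders do
  # Point 2: inside the application, data lives in structs with
  # typespecs, which Gradualixir/Dialyzer can check in CI.
  defmodule Order do
    @enforce_keys [:id, :items]
    defstruct [:id, :items]
    @type t :: %__MODULE__{id: integer(), items: list()}
  end

  # Point 1: at the edge, guards and pattern matching keep garbage out.
  @spec ingest(map()) :: {:ok, Order.t()} | {:error, :bad_input}
  def ingest(%{"id" => id, "items" => items})
      when is_integer(id) and is_list(items) do
    {:ok, %Order{id: id, items: items}}
  end

  def ingest(_), do: {:error, :bad_input}
end
```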
Thanks again for your opinions and experiences. It’s much appreciated!
I tried a bit of Haskell years ago but got disappointed by conceptual inconsistencies and unreliable tooling. It just didn’t seem to work very well.
In Elixir/Erlang things just work. I’ve had issues with third-party libraries, but with the core libraries and tooling all issues I’ve had were from my own ignorance, and I could fix them after reasonable investigation.
Like any curious programmer I’ve read stories about how types are great, but if you can’t get basic stuff working, it seems to lack demonstrable practicality.
Maybe it got better since or with more perseverance I could have worked around these limitations at the time. But all these hours spent learning foreign concepts didn’t seem to pay off.
For what it’s worth, I gave up on Haskell after the second weekend. Too much pain in what should be dead-easy tooling (Elixir’s mix is a perfect example of a good and easy tool). Not to mention all the different string types. The whole thing is kind of in your way instead of helping you.
Moved to Rust and haven’t regretted it since.
I still like OCaml more but I’ll revisit it after they add the multicore support.
Making this easier is one of the main driving forces behind the TypeCheck library (which I am building; shameless self-promotion). The idea is that it creates function contracts (and documentation and property-test generators) from your typespecs, so they are now actually enforced.
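If I understand the approach, usage looks roughly like this (check TypeCheck’s own docs for the current API; the module here is made up):

```elixir
defmodule Greeter do
  use TypeCheck

  # With TypeCheck, the spec is written with @spec! instead of @spec,
  # and becomes a runtime-enforced contract: calling Greeter.greet(42)
  # raises a descriptive error instead of failing somewhere downstream.
  @spec! greet(String.t()) :: String.t()
  def greet(name), do: "Hello, " <> name
end
```

This is exactly the “guarded function from a typespec” idea discussed earlier in the thread, minus the hand-written guard clauses.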