I know that this is a very vague question, but from what we already know about the set-theoretic types that will likely land in Elixir, do you think it will be enough to bring the same benefits that draw devs to OCaml, Haskell, and Scala?
I’ve read the set-theoretic types paper and a few things are not clear to me. I understand the constraints and how it will be added to the language, but I don’t understand whether it will be enough to fulfil the desires of those seeking a strong, statically typed FP language. I can name a few:
Absolutely not. Nothing can compare to Rust / OCaml / Haskell compiler errors when you have an ADT and you forget to pattern-match one of its variants. And no, Dialyzer is not that at all.
But it’s going to be an improvement for sure and I have already benefited from the new compiler warnings. Quite a lot, in fact.
I struggled with Rust, but the one thing I took away from my experience was how awesome ADTs and a compiler that helps you program are. I wish I could have a close or similar experience with Elixir, because I really like the BEAM but hate that I can’t do typed wizardry when trying to define invariants.
Do you mind expanding on this take? I would like to hear more about it. I do like Elixir a lot and I have some Scala friends, and when we talk about programming languages, they acknowledge how awesome the BEAM and OTP are, but they also say that the type system in Scala is just next level and allows them to do so many more things, and this is the main reason they don’t try/use Elixir. These are the kinds of things that are still unclear in my head.
I agree with @dimitarvp, Elixir is nowhere near the trust that the Rust compiler gives while ensuring your code is correct. In fact, with the $ annotations in Elixir’s emerging type system, there is a danger of not even being able to approach the trust that TypeScript gives, much less Rust, Kotlin, Swift, or other type-safe languages.
I am not sure what this is referring to. Our type system will have product and sum types and we will provide exhaustiveness checks, as mentioned in the paper. So the particular example that you mentioned “when you have an ADT and you forget to pattern-match one of its variants” will be detected.
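As a rough illustration in today’s syntax (a minimal sketch; the surface syntax for declaring such types is still being designed, so the tagged tuples below stand in for a proper sum type, and the module and function names are made up for the example):

```elixir
defmodule Shapes do
  # The shape values form a sum type of tagged tuples:
  #   {:circle, number()} | {:rect, number(), number()} | {:triangle, number(), number()}
  def area(shape) do
    case shape do
      {:circle, r} -> 3.14159 * r * r
      {:rect, w, h} -> w * h
      # {:triangle, b, h} -> b * h / 2   # <- forgotten variant
    end
  end
end
```

Today the forgotten :triangle clause only shows up as a CaseClauseError at runtime; with exhaustiveness checks, the compiler can point at the missing variant at compile time.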
This is unfounded and I’d strongly argue that separating implementation from specification will lead to better guarantees, as it reduces the chances of you changing both at once and the chances of bugs accidentally creeping in. I have written a more detailed reply here: A case for inline type annotations - #12 by josevalim
But more importantly, this is reducing a type system discussion to a syntactic point of view, while the semantics and soundness play a vital role when discussing trust. For example, this article discusses how we approach gradual typing and how it will lead to correct types when compared to TypeScript, where the type at runtime may not match the one returned by the type system.
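Roughly, the idea (as I read it from the article; the exact typing rules and annotation syntax are not shown here, so treat this as a sketch rather than the final design) is that pattern matches and guards already check values at runtime, so a value coming from untyped code cannot flow past the function boundary with the wrong type:

```elixir
defmodule Example do
  # The guard is a runtime check at the boundary: even if the caller passes a
  # value the type system knows nothing about, it either is an integer here
  # or the call fails immediately.
  def double(x) when is_integer(x), do: x * 2
end

Example.double(21)    #=> 42
Example.double("21")  #=> raises FunctionClauseError at the boundary,
                      #   so the runtime value never disagrees with the type
```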
Thanks for the reminder. I’m seriously looking forward to it because that’s one of my top 5 gripes with Elixir.
And just as a reminder, Elixir is my #1 to this day, hobbyist and professional programming included. That’s not forced on me, and it’s not an accident either.
I personally don’t think the type system will be enough to make anyone coming from statically typed languages happy with Elixir.
As far as I understand, Elixir is still going to be dynamically typed given the nature of the runtime. The type-checking features will provide some guardrails, but not all the guarantees that someone from statically typed languages expects (namely, dealing with unexpected scenarios at runtime). Also, the ecosystem will need to grow towards the type system, providing tooling for data validation and data transformation that conforms with it.
I think the type system is going to be more “helpful” for people who are not strongly biased towards static typing but aren’t used to dealing with some aspects of a dynamically typed runtime.
The type system will provide the same guarantees as a statically typed language in the absence of dynamic (which could also be enforced). The big question is really how the ecosystem will adapt, but that is not about the capabilities of the type system itself.
I understand that all we can do in this thread is speculate, but it is important for us not to misrepresent the capabilities of the system, as that could lead to confusion down the road.
I’m not talking about the capabilities of the type system or how expressive it is going to be. My point is that the runtime will still be dynamic, and unless the type system wraps the BEAM in those guardrails, when the runtime has unexpected responses, the error that should show up in a type check is going to happen at runtime (I’m probably remembering something wrong, but IIRC there were some reports of unexpected behaviour from gen_tcp a while ago). For people who come from statically typed languages, those kinds of situations won’t be just a nuisance; they expect the type system to protect them from that kind of stuff.
But I totally agree that the biggest unknown is how the ecosystem is going to embrace it and the tools that are going to be created around the type system.
You are mixing two completely unrelated concerns. If you have OCaml and you type check it, the behaviour of the program should be the same regardless of whether you are compiling it to run on native code or JavaScript. If you get an unexpected response, it means either a type system bug or a bug in the language runtime (most languages have a runtime that is implemented in something other than the language itself).
Neither type of bug is specific to dynamic runtimes, such as JavaScript or the BEAM. All type systems can have bugs and all language runtimes can have bugs too. The potential difference is that a bug when compiling to JS/BEAM will most likely surface as an exception, while compiling to native will most likely lead to a segmentation fault (since terms are not boxed).