I understand the first 2 milestones, but can somebody say what the difference is between the @spec attribute and the 3rd milestone?
The third milestone is to introduce the (most likely) $-prefixed type annotations for functions, with no or very limited type reconstruction: users can annotate their code with types, but any untyped parameter will be assumed to be of the dynamic() type. If successful, then we will effectively have introduced a type system into the language.
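A purely illustrative sketch of what that could look like; nothing about the syntax below (including the $ prefix and the type names) has been decided:

```elixir
# Hypothetical annotation syntax, shown only to convey the idea
$ integer(), integer() -> integer()
def add(a, b), do: a + b

# An unannotated parameter would simply be assumed to be dynamic()
def greet(name), do: "hello #{name}"
```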
Can’t the Elixir compiler use the already existing specs instead of adding yet another annotation? We have any() and term(), which should be more than enough as a replacement for dynamic().
Also, does adding a type system like the one in the post mean replacing part or even all of the features of tools like Dialyzer with the Elixir compiler? If so, I hope compilation would not take longer because of creating PLTs …
There are a few details I don’t appreciate about how we implemented typespecs (for example, I find them too verbose) but the deal breaker here is that typespecs are not capable of expressing the proposed type system, so we will need something new.
This is great to hear. Huge congrats to everyone involved. I’m sure I’m not the only one here eager to see these ideas translated to code.
I’d like to especially extend congratulations to Guillaume Duboc – not every PhD candidate has the opportunity for their work to make such a direct and significant impact. It must be extremely exciting!
Is there any information on whether the new type system will support literals as types?
One of the limiting factors for me in using Dialyzer types is that it only supports atoms for specific values when building types.
Working with Elixir/Phoenix daily, you slowly get used to some level of pain and suffering: a typo here, a wrong return tuple there, a wrong atom elsewhere. You start to see it as part of the job.
But then you switch back to ASP.NET for another project, like we do in my team, and you find the pain is gone; there is no more suffering. You write more, but you can relax and trust the system: if it compiles you are good, and if it doesn’t, it tells you right away where you made a mistake. With ReSharper and continuous compilation I can see all compilation errors right away, after every change.
As much as I prefer Elixir/Phoenix for production systems, and we have introduced some into DHL, this is an enormous problem for us: the trust in the code is simply not there.
Just want to add to the chorus here, I am very excited by this.
As someone fairly new to the Elixir community, is there any way for me to contribute to this work? I have limited free time, but I’d love to help out with whatever I can.
The type system allows us to support any literal as a type, but it is still undecided if we want to do that. As opposed to Dialyzer, an actual type system will be strict about the expected arguments. Imagine this code:
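```elixir
# A function meant to accept only the atoms :foo and :bar
def foo_or_bar(:foo), do: "got foo"
def foo_or_bar(:bar), do: "got bar"

# String.to_existing_atom/1 may return *any* existing atom,
# so the type system cannot prove this call passes only :foo or :bar
foo_or_bar(String.to_existing_atom(string))
```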
Because you are passing any atom to the function, the type system will warn about a type error, while Dialyzer would never warn. So you would need to write this instead:
```elixir
case String.to_existing_atom(string) do
  :foo -> foo_or_bar(:foo)
  :bar -> foo_or_bar(:bar)
  _ -> raise "expected foo or bar"
end
```
Therefore, more specific types will require you to prove that you are passing those specific types, and that may actually make the type system more annoying than helpful.
Since there are trade-offs here, it is a decision we will make later once we have a better feeling for the type system.
I foresee a problem where code will need to be annotated with both (at least for a while), and where people would be loath to convert their typespecs to the new syntax.
While there are no user-facing changes yet, this is something to consider.
Another minor, personal gripe I’ve always had with typespecs: they are outside the function, and you need to update code in two places when you change something. I’d love for the specs to be inline, but then I have no idea how to deal with functions with multiple definitions.
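For example, with today’s typespecs the signature effectively lives in two places (User.t() is just a stand-in type here):

```elixir
# Changing the arguments here...
@spec rename(User.t(), String.t()) :: User.t()
# ...also means updating the function head right below
def rename(user, new_name) do
  %{user | name: new_name}
end
```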
I am afraid there isn’t much we can do. Using typespecs for the type system will definitely be harmful in the long term, as types won’t be as precise as they could be. Our suggestion will be to rewrite the typespecs as you convert, especially because they are not used to drive Dialyzer’s inference (their benefit comes from documentation/specification, which you will get from the new type system as well).
I actually think this is harmful, as it couples the types with the implementation and you can accidentally change the signature as you refactor a function from one ↔ many clauses. Also, we already have pattern matching and default arguments in signatures. Adding type declarations to the mix will allow our function signatures to become really, really large and somewhat unreadable.
I’ve been wondering how you were going to approach this. I don’t think it’s just a problem with literals, though. This pattern where we restrict the input with a pattern match and pass in values that cannot be proven to be correct seems very common. And if an invalid value gets passed due to a bug, that seems to work well with the “let it crash” approach.
I’m not sure if there’s a way to make these two approaches work together. Maybe it would require changing the call site to say “Trust me, this value will be valid” and then the type checker would basically fall back to the optimistic Dialyzer style where it would only check that it is possible for the value to be valid.
Anyway, I’m sure you’ve spent much more time thinking about this, so I’m pretty excited about whatever you end up doing.
Would it be a warning that can be ignored or an error?
I like the idea of using types as documentation, but it’s extremely annoying to document functions that expect a map with specific string keys.
Could it be parameterized? I know it’s difficult to make that decision now, but just as with the formatter, some stuff was not supported initially and more configuration was added in later versions. Maybe something that we can set as compiler options in mix.exs?
You are correct. My worry (which may or may not be unfounded) is that literals would accentuate the problem and require additional functions. For example, if we break integers into pos_integer or zero or neg_integer, we may need to add convenience functions such as String.to_positive_integer to make sure we propagate more type information. With atoms, that’s a smaller concern, because dynamic creation of atoms is less common.
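To illustrate the worry (repeat/2 and the input variable are made up, and String.to_positive_integer/1 does not exist today):

```elixir
# A function that only accepts a positive count
def repeat(string, count) when count > 0 do
  String.duplicate(string, count)
end

# String.to_integer/1 only promises integer(), not pos_integer(),
# so a stricter type system could not prove this call correct without
# something like a hypothetical String.to_positive_integer/1
repeat("a", String.to_integer(input))
```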
Something else to decide, as you described, is what happens when you want to ignore a return type of a function on purpose: :ok = some_fun() where some_fun() also returns {:error, _}. Is that a type error or not? What about case some_fun() do where you match only on some of the possible return types?
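To make the two situations concrete (some_fun/0 is just a placeholder here):

```elixir
# Assume some_fun/0 returns :ok | {:error, term()}

# Deliberately ignoring the error case: is this a type error?
:ok = some_fun()

# Matching only some of the possible return types: and what about this?
case some_fun() do
  :ok -> :all_good
end
```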
We could ignore by default, but then it means that if you matched on all results of some_fun() and then some_fun() adds a new return type, a type error could leak through the system. In this scenario, we may need to introduce functions such as case! which means: “we may raise at runtime for unmatched clauses because we are not forcing you to check them all”.
So we talked a lot about those cases and part of the adoption challenge will be in figuring out how frequently those happen and what mechanisms we will have around them.
A warning but most likely non-ignorable.
It may be that you will need to cast your string keys into atom keys and then rely on atom keys throughout the system. This is pretty much the same as in all other typed languages: you need to cast the data into your types as it enters the system.
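A minimal sketch of what that casting at the boundary could look like (the map shape here is made up):

```elixir
# External data arrives with string keys...
params = %{"name" => "Jane", "age" => 30}

# ...and is cast into atom keys once, as it enters the system,
# so the rest of the code can rely on a precise type
user = %{name: Map.fetch!(params, "name"), age: Map.fetch!(params, "age")}
```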
It is too early to say, but we most likely won’t add any sort of configuration around this, otherwise it will generate incompatibility between projects (some will be more precise than others and that can be a recipe for disaster).