Oh yeah, another strong typing thread. As if we don’t already have enough of them. If Elixir had strong typing, I would not use it.
Using TypeScript as an example is funny, as TypeScript is horrible to work with. Having to wrap code in if clauses because a variable could “possibly be undefined”, even when other conditionals already guarantee that it never is, is hilarious.
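For what it’s worth, TypeScript’s control-flow narrowing usually removes the need for extra wrapping once a conditional has ruled `undefined` out. A minimal sketch (the `greet` function is just an illustration, not from anyone’s real code):

```typescript
// Minimal sketch: TypeScript narrows a "possibly undefined" type
// once a conditional has ruled undefined out.

function greet(name: string | undefined): string {
  if (name === undefined) {
    return "hello, stranger";
  }
  // Inside this branch `name` is narrowed to `string`,
  // so no further undefined checks are required.
  return "hello, " + name.toUpperCase();
}
```

The complaint usually bites when the conditional that rules out `undefined` lives in a different function, where the checker genuinely cannot see it.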
Dear Philip, I am not saying Strong Typing is superior and that you should use it. You have the right to your own opinion and I understand if you don’t like strong typing.
All I am saying is that for the people who like strong typing (like me), Dialyzer doesn’t seem to cut it.
Please note I am not complaining about Elixir or Dialyzer; they are both good tools. I am just trying to understand why some of their features are not as developed in certain areas as they could be.
I apologize if I made you feel uncomfortable with this post, that was not my intention.
Dialyzer is designed to be applied to existing untyped Erlang codebases in a useful fashion, without having to rewrite the codebase to satisfy the type checker. To achieve this it takes the “no false positives” approach: if it is possible for the code to succeed at runtime it will not emit an error; it only errors when it can infer that the code will definitely crash.
The trade-off here is that it offers less safety than a conventional type system in exchange for ease of applying it to existing code. Personally it’s not a trade-off I enjoy, which is why I’ve been making Gleam, a statically typed language for the BEAM.
TypeScript and Flow are great examples of projects that apply static types to an existing language. I think their gradual typing (which allows you to incrementally add type checking to a codebase) is a much better way of achieving the same thing than the approach Dialyzer takes. Once fully applied, the type checker can say that the codebase is completely safe, which is a great thing to be able to achieve.
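To illustrate what “incrementally add type checking” means in practice, here is a hedged sketch in TypeScript (the `User` interface and `legacy` value are made up for the example): untyped values can enter through an `any` boundary and coexist with fully typed code, and the boundary can be tightened later without rewriting everything at once.

```typescript
// Gradual typing sketch: an untyped boundary (`any`) can coexist
// with fully typed code, and be tightened incrementally.

// Step 1: a legacy value enters untyped.
const legacy: any = JSON.parse('{"id": 1, "name": "alice"}');

// Step 2: a typed interface is introduced for new code.
interface User { id: number; name: string }

// Step 3: the untyped value is asserted at the boundary;
// everything downstream of here is fully checked.
const user: User = legacy as User;

function describe(u: User): string {
  return `#${u.id} ${u.name}`;
}
```

Dialyzer, by contrast, infers what it can from the whole codebase at once; there is no equivalent of drawing a typed/untyped boundary and moving it over time.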
What stops this existing for Elixir is largely programmer time and knowledge constraints. A system like this for Elixir or Erlang would take a long time to implement, and we don’t have a suitable team of people with the time to work on this.
If we could find some companies to pay for people to work on this full time we could have gradual typing for Elixir.
It has a different design with different goals. It’s not possible for it to become like TypeScript without it effectively becoming an entirely different tool.
I believe it’s a good thing to want to improve the tools we use! Lots of people would get great value from static typing in Elixir, and it hurts no one to discuss these things. If you are not interested the best thing to do is to not participate in the discussions on static typing.
In Gleam it’s `fn(Pid(UnknownMessage), reason) -> msg`. See here!
I think we need to be clear that we are talking about static strong typing.
My impression is that it’s related to ROI/opportunity costs.
Hypothetically any process can send data to any other process based on a pid. But the value of that pid is allocated at runtime. So static analysis would have to superimpose some (artificial) static constraints on the process structure in order to check whether data being sent from one process is an accepted type in the destination mailbox.
But given that interprocess communication is typically about protocols, there is a temporal aspect that static typing can’t cover. A perfectly “well formed” message in the mailbox, about to be pulled out, can still be “invalid” with respect to the current runtime state of the process.
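A small sketch of that temporal point, in TypeScript for lack of typed processes (the `Msg`/`State` shapes are invented for illustration): the message union is fully statically typed, yet a type-correct `write` is still a protocol violation if the process is in the wrong state, and no type checker can see that.

```typescript
// Sketch: messages can be statically typed as a union, but whether a
// well-typed message is *valid* still depends on runtime state -
// the temporal/protocol aspect a type checker cannot see.

type Msg =
  | { kind: "open" }
  | { kind: "write"; data: string }
  | { kind: "close" };

type State = { status: "closed" | "open"; buffer: string };

function handle(state: State, msg: Msg): State {
  switch (msg.kind) {
    case "open":
      return { status: "open", buffer: state.buffer };
    case "write":
      // Type-correct, but a protocol violation if nothing is open.
      if (state.status !== "open") {
        throw new Error("protocol violation: write before open");
      }
      return { ...state, buffer: state.buffer + msg.data };
    case "close":
      return { ...state, status: "closed" };
  }
}
```

Session types are the research answer to exactly this, but no mainstream checker enforces them.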
And I must say that, personally, type errors are errors I very rarely make.
i.e. how much value could improved static typing add to the ecosystem - considering the potential cost of going beyond what dialyzer is capable of doing right now and might that effort be more productively spent elsewhere in the ecosystem?
It’s easy as a technology consumer to want static typing, but from the technology provider’s perspective the cost could be viewed as prohibitive in relation to the perceived benefit (and the potential collateral damage via additional complexity).
This comes at the complexity cost of language constructs (Advanced Types) that exist solely so that the (library/framework) developer can compile-time-programmatically describe to TypeScript how to extract the types necessary for successful type inference.
In statically typed languages like OCaml it usually works the other way around. You have to become a competent type wrangler before you get to use some of the more advanced techniques (hopefully safely, as there is typically an unsafe escape hatch).
All of this should create a sense of how much time and effort it took to get TypeScript to the point of where it is today - and this is in an ecosystem with significantly more resources to throw at the problem than the BEAM ecosystem.
And TypeScript doesn’t have to deal with complexities of code replacement or concurrently collaborating runtime processes with mailboxes that can receive messages from any other process in possession of their pid.
So the effort required to push beyond the current capabilities of Dialyzer should not be underestimated - and must be carefully weighed against any potentially gained benefits.
// UnknownMessage is a type that has no values, it can never be constructed!
But UnknownMessage is acknowledging a fundamental limitation:
(1) You can only type the messages you expect to receive.
(2) You cannot stop an unexpected message from being sent to you.
And it is (2) that most static typing advocates would have a problem with.
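Gleam’s “type that has no values” has a direct analogue in TypeScript’s `never`, which makes both points concrete (a sketch; `handleUnknown` is invented for the example): no type-checked caller can construct an argument, yet untyped code can still call in.

```typescript
// TypeScript analogue of Gleam's UnknownMessage: `never` is a type
// with no values, so no well-typed caller can construct a message
// for this handler - yet nothing stops untyped code from calling it.

type UnknownMessage = never;

function handleUnknown(msg: UnknownMessage): string {
  // Unreachable from type-checked code: `never` has no inhabitants.
  return "impossible";
}

// Untyped code can still smuggle a value through an `any` reference:
const untypedCall: any = handleUnknown;
const result = untypedCall("surprise"); // compiles and runs anyway
```

So the type only documents what you *expect*; it is not a gate on what actually arrives.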
People wouldn’t be too happy with TypeScript if calling a function with the wrong data type raised no compile-time error, and the function call was instead simply excluded from the generated source code.
I would suggest trying it out. Write a simple program that uses distributed erlang. Then do a blue-green deploy. Change the structure on a message that gets passed over distribution (say, turn a string into an atom) on a node, and then attach it to the network. How would you handle this situation with static types?
I’m not saying it’s impossible to handle but consider how, say, it’s “done” in golang (gRPC) and whether that amount of boilerplate/complexity is worth it.
This problem doesn’t just apply to pids and processes, but any form of mutable data. Erlang doesn’t have mutable lists, but imagine that it did. An in-place update to the list that changed it from having integer elements to float elements would violate the type system guarantees in the same way.
For static types to work (without dependent typing) the types can never be changed at runtime; this limitation is unavoidable.
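The mutable-list hazard described above can be sketched in TypeScript, since its arrays *are* mutable (an illustrative example, not anyone’s real code): the checker believes the array holds numbers, an untyped alias mutates it in place, and the static guarantee is silently violated.

```typescript
// Sketch of the mutable-list hazard: the checker believes `nums`
// holds numbers, but an untyped alias mutates it in place, and the
// static guarantee is silently violated at runtime.

const nums: number[] = [1, 2, 3];

// An escape hatch (here `any`) defeats the type system:
const alias: any = nums;
alias[0] = "not a number"; // no compile error, invariant broken

// `nums[0]` is typed as number, but actually holds a string now.
const broken = typeof nums[0];
```

Immutability removes this whole class of problem, which is part of why the BEAM is a comparatively friendly target for a type checker.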
In practice I think this isn’t a problem. If anything we expect this to be the case even in dynamically typed programs, especially in languages like Erlang/Elixir where immutability is the norm.
I’m extremely biased here but I think this is the highest value single area in which we can work on within the BEAM ecosystem. I would be extremely happy if someone succeeds in adding useful gradual types to Erlang or Elixir, and I’m going to continue to work on Gleam.
As for time cost, Gleam is more or less ready to be used now. It only took 2 years!
Oh yes definitely! A type system does nothing at runtime to protect you from code that doesn’t adhere to the contract it enforces. This could happen when being sent messages from untyped code (i.e. written in Elixir or Erlang without type checking), or when there is a bug in your types.
There is a trade-off here, and it’s up to the programmer to decide whether the advantages of static typing are worth the accompanying disadvantages. Personally I think it is a very good trade.
The same way that any API is changed: through backwards-compatible changes and versioning. This is the case in dynamically typed programs too; the only difference is that with a static type system the computer may be able to infer when you have made a mistake that violates your contract.
I just want to confirm what has already been mentioned a number of times in this thread: the reason for not having static typing in Erlang was the absolute need for doing dynamic code upgrades of running systems. This was an absolute requirement and if we couldn’t do it then our language/system wasn’t interesting. There aren’t many systems using it today as most try to do rolling upgrades but we didn’t have that option back then so we had to be able to upgrade a running system. Remember we are talking late 80’s early 90’s here.
Some evil tongues say it is because we didn’t know how to write type-checking systems, but that is totally ridiculous. As we had taught ourselves to write compilers, and done a number of them, to implement virtual machines, and done a number of them, and to implement run-time and OS-like systems, and done a number of them as well, implementing type-checking would not have been especially difficult. It’s not rocket science and it wasn’t then either. Just RTFM.
Whether you like dynamic typing or prefer static typing is a matter of choice, but as I have said, for us it was not an option.
On a side note, I would say probably the main difficulty with implementing a statically typed language on top of Erlang/OTP/BEAM is interfacing with the world outside. I mean, you can call any function with any arguments and send any message to any process, so the problem is: how do I make the statically typed part of the system “safe”? And no, processes can’t hide either, as with the Process.list/processes built-in functions I can see all processes.
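The usual answer to that boundary problem is runtime validation: anything arriving from the untyped outside is treated as unknown until a check proves its shape, and malformed input is rejected rather than trusted. A hedged sketch in TypeScript (the `Command` shape and `apply` function are invented for illustration):

```typescript
// Sketch of guarding a statically typed core against an untyped
// outside world: anything arriving from "anywhere" is `unknown`
// until a runtime check proves its shape.

interface Command { op: "inc" | "dec"; amount: number }

function isCommand(x: unknown): x is Command {
  return (
    typeof x === "object" && x !== null &&
    ((x as any).op === "inc" || (x as any).op === "dec") &&
    typeof (x as any).amount === "number"
  );
}

function apply(counter: number, raw: unknown): number {
  if (!isCommand(raw)) return counter; // drop malformed messages
  return raw.op === "inc" ? counter + raw.amount : counter - raw.amount;
}
```

A typed BEAM language faces the same situation at every `receive`: the mailbox is effectively `unknown`, and either the type system pretends otherwise or the code validates at the boundary.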
A final note: Erlang/Elixir/BEAM is actually strongly typed, but dynamically typed, not statically typed. If we want to be picky.
There aren’t many systems using [hot code reloading] today…
I may be mistaken but I believe Phoenix live reload uses erlang’s hot code reloading, in which case upwards of 70-80% of the people on this forum are using hot code reloading, even if they don’t know it.
Wouldn’t that become rather heavy if it is checking on every request? How does it reload the recompiled modules? Is it just loading them in, or is it using some mechanism like OTP’s code_change callback to allow the old version to “migrate” to the new version?
OTP purposely does not have some mechanism to check code and automatically recompile and reload. Apart from the extra load in doing this (when would you do it?) the system might have/probably will have special requirements when upgrading code.
AFAIK, the code reloader simply recompiles modules in files that have been changed (and their compile time dependents) and reloads the current page so that it reflects the changed code. It does not update your genserver state.