Minute 43:00
I think we need to be clear that we are talking about static strong typing.
My impression is that it’s related to ROI/opportunity costs.
Hypothetically any process can send data to any other process based on a pid. But the value of that pid is allocated at runtime. So static analysis would have to superimpose some (artificial) static constraints on the process structure in order to check whether data being sent from one process is an accepted type in the destination mailbox.
But given that interprocess communication is typically about protocols there is a temporal aspect that static typing can’t cover. A perfectly “well formed” message in the mailbox about to be pulled out can still be “invalid” with respect to the current runtime state of the process.
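To make that temporal point concrete, here is a hypothetical TypeScript sketch (all names invented): the message is perfectly well typed, but whether it is valid depends on the process's current runtime state, which the type of the message alone cannot express.

```typescript
// A hypothetical connection "process": the message types are fixed,
// but validity depends on the current runtime state.
type State = "connected" | "closed";
type Msg = { kind: "send_data"; payload: string } | { kind: "close" };

function handle(state: State, msg: Msg): State {
  if (state === "closed") {
    // The message is perfectly well formed, yet invalid *now*.
    throw new Error(`cannot handle ${msg.kind} while closed`);
  }
  // Still connected: a close message transitions the state.
  return msg.kind === "close" ? "closed" : state;
}
```

The type checker happily accepts any `Msg` in any `State`; only the runtime check catches the protocol violation.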
Then there is the cultural aspect:
https://elixirforum.com/t/what-do-you-think-is-missing-or-needs-improving-in-elixir/2369/47
And I must say that, personally, type errors are errors I very rarely make.
I.e., how much value could improved static typing add to the ecosystem, considering the potential cost of going beyond what Dialyzer is capable of doing right now? And might that effort be more productively spent elsewhere in the ecosystem?
It’s easy as a technology consumer to want static typing, but from the technology provider perspective the cost could be viewed as prohibitive in relation to the perceived benefit (and the potential collateral damage via additional complexity).
Microsoft started developing TypeScript in 2010, 15 years after the introduction of JavaScript. It was made public in 2012 but didn’t really gain significant traction until some time in 2016, likely spurred on by the announcement of Angular 2 in 2015.
[Trend comparison: TypeScript/Dart/CoffeeScript/JavaScript]
React + TypeScript seems to have only really become a thing around 2017. Visual Studio Code was likely another factor helping TypeScript adoption.
[Trend comparison: TypeScript/Dart/CoffeeScript/Visual Studio Code]
To some extent TypeScript adoption has had more to do with tooling (VS Code IntelliSense, WebStorm refactoring support, ReSharper) than with a deliberate adoption of static typing.
This comes at the complexity cost of language constructs (Advanced Types) that exist solely so that the (library/framework) developer can compile-time-programmatically describe to TypeScript how to extract the types necessary for successful type inference.
This isn’t typically a problem for library/framework users but can put a significant additional burden on the library/framework developers. Given the things that you can do in JavaScript (and therefore TypeScript) one will have to become extremely competent with TypeScript’s “type extraction/manipulation language” or perhaps revert one’s code to a more mundane, less elegant, more restrictive, less generic version.
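As a small illustrative example of that type-extraction machinery (names invented for illustration), `keyof` and indexed access types let a library author describe to the compiler how to infer the result type from the arguments, so call sites get inference for free:

```typescript
// The library author pays the complexity cost here: K is constrained to
// the keys of T, and the result type T[K][] is "extracted" from the inputs.
function pluck<T, K extends keyof T>(objs: T[], key: K): T[K][] {
  return objs.map((o) => o[key]);
}

const users = [
  { name: "ada", age: 36 },
  { name: "lin", age: 28 },
];

const names = pluck(users, "name"); // inferred as string[]
const ages = pluck(users, "age");   // inferred as number[]
```

The library user just calls `pluck` and everything type checks; the type-level wrangling lives entirely on the author's side.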
In statically typed languages like OCaml it usually works the other way around. You have to become a competent type wrangler before you get to use some of the more advanced techniques (hopefully safely, as there is typically an unsafe escape hatch).
All of this should create a sense of how much time and effort it took to get TypeScript to the point of where it is today - and this is in an ecosystem with significantly more resources to throw at the problem than the BEAM ecosystem.
And TypeScript doesn’t have to deal with complexities of code replacement or concurrently collaborating runtime processes with mailboxes that can receive messages from any other process in possession of their pid.
So the effort required to push beyond the current capabilities of Dialyzer should not be underestimated - and must be carefully weighed against any potentially gained benefits.
// UnknownMessage is a type that has no values; it can never be constructed!
But UnknownMessage is acknowledging a fundamental limitation:
1. You can only type the messages you expect to receive.
2. It also implies that you cannot stop an unexpected message from being sent to you.
And it is (2) that most static typing advocates would have a problem with.
People wouldn’t be too happy with TypeScript if, instead of raising a compile-time error when a function is called with the wrong data type, the function call were simply excluded from the generated source code.
I would suggest trying it out. Write a simple program that uses distributed erlang. Then do a blue-green deploy. Change the structure on a message that gets passed over distribution (say, turn a string into an atom) on a node, and then attach it to the network. How would you handle this situation with static types?
I’m not saying it’s impossible to handle but consider how, say, it’s “done” in golang (gRPC) and whether that amount of boilerplate/complexity is worth it.
This problem doesn’t just apply to pids and processes, but any form of mutable data. Erlang doesn’t have mutable lists, but imagine that it did. An in-place update to the list that changed it from having integer elements to float elements would violate the type system guarantees in the same way.
For static types to work (without dependent typing) the types can never change at runtime; this limitation is unavoidable.
In practice I think this isn’t a problem. If anything we expect this to be the case even in dynamically typed programs, especially in languages like Erlang/Elixir where immutability is the norm.
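A TypeScript sketch of that point, using its `any` escape hatch as a stand-in for runtime mutation: once a value's runtime contents no longer match its static type, the guarantees silently evaporate with no compile-time error.

```typescript
// Statically this is a number[] forever.
const xs: number[] = [1, 2, 3];

// An escape hatch lets a "mutation" violate the static type -- this
// compiles without complaint, just like an in-place element-type change would.
(xs as any[]).push("oops");

// The type checker still believes every element is a number...
const doubled = xs.map((n) => n * 2); // ...but the last result is NaN at runtime
```

This is exactly the guarantee-violation the post describes: the static world and the runtime world have quietly diverged.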
I’m extremely biased here but I think this is the highest value single area in which we can work on within the BEAM ecosystem. I would be extremely happy if someone succeeds in adding useful gradual types to Erlang or Elixir, and I’m going to continue to work on Gleam.
As for time cost, Gleam is more or less ready to be used now. It only took 2 years.
Oh yes definitely! A type system does nothing at runtime to protect you from code that doesn’t adhere to the contract it enforces. This could happen when being sent messages from untyped code (i.e. written in Elixir or Erlang without type checking), or when there is a bug in your types.
There is a trade-off here, and it’s up to the programmer to decide whether the advantages of static typing are worth the accompanying disadvantages. Personally I think it is a very good trade.
The same way that any API is changed: through backwards-compatible changes and versioning. This is the case in dynamically typed programs too; the only difference is that with a static type system the computer may be able to infer when you have made a mistake that violates your contract.
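One hypothetical way to sketch that in TypeScript (all names invented): carry a version tag in the message, so new shapes are added to the union rather than replacing it, and old consumers only rely on fields present in every version.

```typescript
// Version 1 of the message, and a later backwards-compatible addition.
type MsgV1 = { v: 1; name: string };
type MsgV2 = { v: 2; name: string; id: number };
type Msg = MsgV1 | MsgV2;

// A v1-era consumer keeps working: it only touches fields in every version.
function displayName(msg: Msg): string {
  return msg.name;
}

// Newer code can upgrade old messages explicitly, with a default for new fields.
function upgrade(msg: Msg): MsgV2 {
  return msg.v === 2 ? msg : { v: 2, name: msg.name, id: 0 };
}
```

The type checker then flags any consumer that assumes a v2-only field without first checking the tag, which is the mistake-detection being described.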
Just FYI, if you want to play with something like the BEAM that has types, you might enjoy a look at Pony. This talk compares the two approaches, which did help me understand some of the tradeoffs.
I just want to confirm what has already been mentioned a number of times in this thread: the reason for not having static typing in Erlang was the absolute need for doing dynamic code upgrades of running systems. This was an absolute requirement and if we couldn’t do it then our language/system wasn’t interesting. There aren’t many systems using it today as most try to do rolling upgrades but we didn’t have that option back then so we had to be able to upgrade a running system. Remember we are talking late 80’s early 90’s here.
Some evil tongues say it’s because we didn’t know how to write type-checking systems, but that is totally ridiculous. As we had taught ourselves to write compilers, and done a number of them, to implement virtual machines, and done a number of them, and to implement run-time and OS-like systems, and done a number of them as well, implementing type-checking would not have been especially difficult. It’s not rocket science and it wasn’t then either. Just RTFM.
Whether you like dynamic typing or prefer static typing is a matter of choice, but as I have said, for us it was not an option.
On a side note I would say probably the main difficulty with implementing a statically typed language on top of Erlang/OTP/BEAM is interfacing to the world outside. I mean, you can call any function with any arguments and send any message to any process, so the problem is: how do I make the statically typed part of the system “safe”? And no, processes can’t hide either, as with the Process.list/processes built-in functions I can see all processes.
A final note: Erlang/Elixir/BEAM is actually strongly typed, but dynamically typed, not statically typed. If we want to be picky.
There aren’t many systems using [hot code reloading] today…
I may be mistaken but I believe Phoenix live reload uses Erlang’s hot code reloading, in which case upwards of 70-80% of the people on this forum are using hot code reloading, even if they don’t know it.
TypeScript is an example of how OO people (Microsoft) design types …
If I want a good type system for JS I would go for ReasonML https://reasonml.github.io/
BEAM hot reloading was not designed for a statically typed language.
If I want a good static type system I prefer a language that was designed for types from the ground up …
Phoenix code reloader is different. It just watches for changes and recompiles the files. Then after browser refresh, you have the latest compiled code for your request.
Wouldn’t that become rather heavy if it is checking for every request? How does it reload the recompiled modules? Is it just loading them in, or is it using some mechanism like OTP’s code_change callback to allow the old version to “migrate” to the new version?
OTP purposely does not have some mechanism to check code and automatically recompile and reload. Apart from the extra load in doing this (when would you do it?) the system might have/probably will have special requirements when upgrading code.
I think it’s not heavy to recompile a couple of changed modules. This is also only in the dev environment.
I don’t see any code_change stuff in there, so I guess it’s just recompile and that’s it. Maybe someone else more knowledgeable can share some details?
AFAIK, the code reloader simply recompiles modules in files that have been changed (and their compile time dependents) and reloads the current page so that it reflects the changed code. It does not update your genserver state.
Hah thanks for clearing that up for me!
Are you sure? I would have expected it to use the normal OTP code upgrading features, and OTP uses the code_change callback to update the state of a gen_server.
@lpil Very interesting! But wouldn’t it be easier to create a tool for gradual typing instead of creating your own entire language? I am just curious!
@chrismcg I saw the video, thanks for sharing!
It turns out that they started working on a type system! (yeey)
And then they stopped … (ohhh)
At least they explored some alternatives, which is always good!
Thank you everyone for your kind feedback!
I have learned quite a lot here and I feel better for it.
I take it that the main reason no one is adding a typing system to Elixir now, is because of ROI and that the ecosystem currently doesn’t have enough resources to devote to this challenge.
Additionally, it is very interesting to have the historical reason why Erlang didn’t have strong static typing (thanks to @rvirding).
I guess static strong typing is a lot more complex than I initially thought!
I think the gap isn’t as big as you might think. To add a type system to an existing language it needs to be gradually typed and carefully tailored to the semantics of the existing language, while if you create a new language you can design the semantics and the type system at the same time so they complement each other well.
The end result is somewhat different too. With a gradually typed language you need to add type annotations to the entire program in order to get full type safety, while with more conventional Hindley-Milner type systems no annotations are required at all and you always get total safety.
I wouldn’t say the implementation is that much more complex; the complex bit is making the choices. It’s always a trade-off: if I do this, then I will get XXX but I will lose YYY; which is best for our needs? For example ending up with a functional language, or having non-sharing processes, or being dynamically typed.
TANSTAAFL
‘practically’ ^.^;
Exit signal would be an expandable variant type.
Messages between processes should be black boxed, that’s the only logical way to do it, lol. ^.^
Gleam is looking awesome, people should look at it!
Black boxed type! ^.^
Of course. In (1) you can only handle things you know about anyway; it’s like receiving JSON, you have to validate it and handle when that fails. In (2), of course, that’s how the BEAM works: you need to handle the unhandled-message case, just like you do in Elixir now. It’s a standard boundary problem, which, like with JSON, means you need to parse it out somehow, which a decomposable black-boxed type can do.
Same as you do now, match on what you can, handle in a fallback (log a message?) what you can’t.
Golang is very, very verbose as a language; something built on the BEAM, like potentially Gleam, can do far better.
You only need to verify what stays within the typed side. For things coming out either blackbox it and verify if you want to handle fallback, or just “let it crash”, as most erlang/elixir functions do when handed wrong data already.
Well, it could be done with guards, but those can potentially be quite slow (validating that a list contains only integers, for example). ^.^;
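For example, a guard-style runtime check sketched in TypeScript (hypothetical helper): validating that a list contains only integers is a full linear scan, which is why running such a guard on every message can get slow for large data.

```typescript
// A runtime "guard": O(n) over the whole list, every time it runs.
// The `xs is number[]` predicate narrows the static type after the check.
function allIntegers(xs: unknown[]): xs is number[] {
  return xs.every((x) => typeof x === "number" && Number.isInteger(x));
}
```

After `allIntegers(data)` returns true, the type checker treats `data` as `number[]`, so the runtime cost buys back static knowledge at the boundary.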
I think the pattern of the BEAM already can still hold in a typed language on top, crash with invalid data, parse external data to something useful internally, etc…
Yep yep, I was waiting for this to pop up, the thread OP meant static typing, not strong typing. ^.^
Phoenix_live_reload is a dev-only tool; it watches the filesystem for changes and recompiles and hot-loads them. You wouldn’t run it at release time.
Now for the bit I was waiting for someone to mention, but it doesn’t look like anyone did. Don’t forget that the Gradualizer static typer for the BEAM is in progress. It still has some work left but it can strongly type most code already. It has revealed that a lot of typespecs in Elixir are just outright wrong (and some in Erlang, though not as many). ^.^
It has the ability to override types though so you can ‘fix’ them up without editing their source.
And yes, I have a Gradualixir library that wraps it for ease of running as a mix task, though do note that Gradualixir is in heavy flux, so if it doesn’t compile then please send me a PR, or at least an issue with the error and a reproducible file (or note if it happens on all files). ^.^
If what you want is stricter build-time guarantees that your functions actually implement the typespecs in your @spec or @callback declarations, you might find my library Hammox very useful. It contains its own strict type-value checker, and you can set it up to automatically check that the function calls you make in tests match the defined typespecs.
It’s especially useful when using mocks: you want to ensure that the mock actually implements the same type signature as your real function. Hammox is actually an enhanced version of José Valim’s Mox (and the Mox readme now links to it too). You set up behaviour-based mocks and all calls to them are automatically type checked.
For me it has dramatically reduced my need for strong static type checking, because I find that dynamic type checking like this, when done rigorously, can be almost as good as static. It also encourages you to actually test all the code you write, so it’s a win-win!
Forgive my ignorance, but I’m very curious (I do not have a background in compiled languages and only a basic understanding of dynamic code upgrades). Is the takeaway here that having a statically typed language would have made dynamic code upgrades impossible because you can’t have one version of the code that statically pegs a variable to a specific type while another version of the code pegs it to a different type? Would static typing end up pointing to different regions of memory? Whereas dynamic typing would allow for “run-time” resolution of the variable values in memory?
Thanks for sharing your wisdom/experience!