Hear hear! If only Elixir had static typing. ^.^;
When I discovered statically typed languages (new to me at the time; I started with them ~13 years ago), like Rust or Elm for example, I really, really had to ask myself why I would ever go back to dynamically typed languages. What keeps me with Elixir (aside from the community etc.) is how easily you can write truly concurrent and fault-tolerant software.
What really helps me write Elixir is Dialyzer (http://erlang.org/doc/man/dialyzer.html) and, to some extent, Dialyxir (https://github.com/jeremyjh/dialyxir). Even when I don't write @specs, it's often smart enough to point out flaws in my code based on context alone.
Although Dialyzer won't notice that you have a non-existent module name in your route definition, e.g. `get("/hello", IdontExist, :index)`, I still think Dialyzer is worth mentioning, as it might help with some problems you may encounter.
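To illustrate the kind of flaw Dialyzer can flag from a @spec, here is a minimal sketch (the `Greeter` module and its functions are made up for this example):

```elixir
defmodule Greeter do
  @spec hello(String.t()) :: String.t()
  def hello(name), do: "Hello, " <> name

  # Dialyzer would flag this call: hello/1 is specced (and inferred) to take
  # a binary, but we pass an atom, so the call can never succeed.
  def broken, do: hello(:world)
end

IO.inspect(Greeter.hello("Joe"))  # => "Hello, Joe"
```

Note that this still compiles; Dialyzer reports the contract violation as a separate analysis step, not at compile time.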
Precisely the same for me: Erlang was the only really big dynamically typed language I used before (bits of Python at times). Elixir mostly replaced Erlang for me due to its macros and ecosystem (I still prefer Erlang's syntax overall).
It's only a success typer though, so it can often catch egregiously wrong uses, but it defaults to assuming that what the user wrote is correct.
And this is one of those cases. It only knows that an atom is being passed in, it doesn’t know that it needs to be a valid module as one example.
It’s not only an issue of static typing, though. Late binding means that the module can be introduced at run-time at any point. Even if you did somehow say “Yeah, this is a module and I know that”, it seems to me you’d have the issue of basically saying “No, I’m going to force all modules to be defined at compile time and they should remain the same”, which may or may not be desirable.
The current situation is “I’m gonna call whatever I have and what exists at that moment is what I’m going to get”, which is about as far from forcing all modules to exist and stay static at compile-time as you can get.
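A small sketch of that late binding: the module being called can be a plain atom computed at runtime, and the BEAM looks it up only at the moment of the call, against whatever code happens to be loaded right then.

```elixir
# Build the module name at runtime from a string. Nothing at compile time
# forces this module to exist; the lookup happens when apply/3 runs.
mod = Module.concat([String.capitalize("enum")])  # resolves to Enum
six = apply(mod, :sum, [[1, 2, 3]])
IO.inspect(six)  # => 6
```

If the atom named a module that was never loaded, the same call would raise `UndefinedFunctionError` at that point, and not a moment earlier.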
Well, releases on the BEAM do atomic updates of a set of modules at a time, and different processes inside the BEAM can be running two different versions of the code, so as long as messages are black-boxed it would work fine in 99% of cases (and in the rest, you should know what you are doing).
Is this philosophy reflected in other areas of the Phoenix ecosystem like Ecto? I’m wondering if the framework is a good fit for me and my projects: I want the compiler+linker to do as much work as possible, not less. I’m looking for a framework that checks anything that can be checked, without imposing coding or conceptual overhead.
E.g., Swift Vapor’s type safe routes appeal to me a lot, but the framework isn’t very mature at the moment.
Just to be clear, I can’t speak for any libraries out there. It was more a general comment on the handling of modules passed via variables.
Understood! But since I don't know the libraries well, maybe you can weigh in: is this pattern of "soft references" common, i.e. modules referred to by name?
It’s built in to the way the BEAM VM works. Anything one wants more than that needs to be done by whatever compiler they use or via other passes either before or after.
[Hi, I’m coming back to this after a while…]
I’m wondering, “Why not?” Wouldn’t we want to achieve correctness first, and then optimize second? I don’t see the problem with recompiles.
It's not all that funny. If you just quickly add a route, then every time you change something, even just to fix a small typo, 1/3 of your modules recompile (controllers often come in masses) without any real reason to do so. Depending on the size of your project, this could take a while.
There are a few talks by Renan Ranelli on the topic of recompilation and how quickly things can snowball into a real problem, not just a second or two here and there.
That seems to cut the other way though: I rarely add new routes in web apps. And when I do, it’s very important — a new route is a new feature, after all.
I guess I also don’t see why adding a new external link symbol reference to a routes file would cause every controller to need recompilation. I’d think the dependency would run the other way: Only the newly referred to controller would require recompilation.
- The controller can be non-existent during compilation; you can always define the expected module at runtime using `Code.eval_quoted` (not that it is a good idea in the general case, but runtime compilation of modules is sometimes helpful).
- If you passed the exact function to be called to the router, it could prevent code upgrades, as those work only on remote calls (i.e. `apply(module, function, args)` or `module.function(args)`, which are the same), not on local `function()` calls.
- Recompilation of the `Router` would mean recompilation of `Router.Helpers`, which in turn would require recompilation of most views and controllers by default; not so fun when almost the whole Phoenix application needs to be recompiled to fix a typo.
- In that case Phoenix would need to implement its router almost from the ground up, as the current approach is a wrapper over `Plug.Router`, which takes only a module name. It is perfectly valid in Phoenix to use a non-`Phoenix.Controller` plug as a route receiver; e.g. `get "/foo", MyPlug, :foo` could be handled by:

```elixir
defmodule MyPlug do
  @behaviour Plug

  def init(opts), do: opts

  def call(conn, data) do
    Plug.Conn.send_resp(conn, 200, to_string(data))
  end
end
```
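To make the remote-vs-local call distinction concrete, here is a minimal sketch (the `Counter` module is made up): a looping process that recurses via a fully qualified call is the classic shape that lets hot code upgrades take effect, because that call site is where the process would jump into a newly loaded version of the module.

```elixir
defmodule Counter do
  def loop(n) do
    receive do
      {:get, from} ->
        send(from, {:count, n})
        # Remote (fully qualified) call: after a hot code upgrade, this is
        # where the process switches to the new version of Counter.
        # A bare `loop(n + 1)` would be a local call and would keep the
        # process running the old code.
        __MODULE__.loop(n + 1)
    end
  end
end

pid = spawn(Counter, :loop, [0])
send(pid, {:get, self()})

first =
  receive do
    {:count, n} -> n
  end

IO.inspect(first)  # => 0
```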
You’re right. I’ve messed up the direction of the compile time dependency. It’s not the router changes, which would make the controller recompile, but any changes to controllers would make the router recompile. As @hauleth described, this is really bad, as it easily snowballs to basically recompiling all of your views, which depend on the route setup of the router to generate urls/paths. So a change in one controller can easily result in recompiling hundreds of modules.
The big problem here is that a compile time dependency exists as soon as a valid module name is seen at macro expansion. Without the compiler knowing how the module is used by the macro this means any change in the module could modify what the macro does. So the compiler needs to recompile the module using the macro whenever a module changes, which was seen when expanding the macro.
This would happen to the router if phoenix would be using the full module names of controllers instead of just the namespaced versions.
There’s no “ensure the module exists” in macros. Either you’re recompiling on each change of seen modules or you don’t depend on them at compile time at all.
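If you want to see these compile-time dependencies in your own project, `mix xref graph` can show them; a sketch, assuming a conventional Phoenix layout (the path below is an example, not a fixed name):

```shell
# Which files recompile when the router changes? (run inside a Mix project)
mix xref graph --sink lib/my_app_web/router.ex --label compile
```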
A simple controller test would also catch this type of error and only add a few milliseconds to your test suite.
I agree! But (coming from Ruby) I’ve gotten tired of writing tests for things that other languages’ tooling simply handles for you.
Another issue is that a failing test isn't as good as a linker or type error, because the test failure doesn't tell you why it failed.
Yep, and this is where I feel like Elixir/Phoenix isn’t a good fit for my projects. Because for me, “code upgrades” are always easy, and actually helped by compile-time checks. In other words, I’m not deploying code that must handle live hot fixes.
You aren't forced to do such updates, but some people are (e.g. when you have an application that does live streaming of music or video calls), and you shouldn't prevent them from using language features. And as I said earlier, you can always define a module at runtime, so there is no way to have static typing for that.
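For the curious, defining a module at runtime is genuinely easy; a minimal sketch (the `RuntimeGreeter` name is made up), showing a module that no compile-time check could have known about:

```elixir
# Build the module body as a quoted expression and create the module now,
# at runtime, rather than at compile time.
contents =
  quote do
    def hi, do: "hi"
  end

Module.create(RuntimeGreeter, contents, Macro.Env.location(__ENV__))

IO.inspect(RuntimeGreeter.hi())  # => "hi"
```

`Module.create/3` is essentially what `defmodule` expands to, minus the compile-time convenience.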
I have posted this on another thread but, for completeness, if you don’t want your code to compile in those cases, you can enable warnings as errors:
[elixirc_options: [warnings_as_errors: true]]
Or if you want to enable it only for xref:
[aliases: ["compile.xref": "compile.xref --warnings-as-errors"]]
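For context, both options go into the `project/0` function of your `mix.exs`; a minimal sketch (the app name and version are placeholders):

```elixir
defmodule MyApp.MixProject do
  use Mix.Project

  def project do
    [
      app: :my_app,
      version: "0.1.0",
      # Turn all compiler warnings into hard errors:
      elixirc_options: [warnings_as_errors: true],
      # Or, alternatively, enforce it only for xref checks:
      aliases: ["compile.xref": "compile.xref --warnings-as-errors"]
    ]
  end
end
```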