How to make Dialyzer more strict?

If we have a function with a typespec that always returns a value of the wrong type

  @spec hello(integer()) :: map()
  def hello(int) do
    case int do
      1 -> :foo
      _ -> :wtf
    end
  end

then Dialyzer generates a warning. Good, that is as expected:

done in 0m1.08s
lib/hello.ex:16: Invalid type specification for function 'Elixir.Hello':hello/1. The success typing is (_) -> 'foo' | 'wtf'
done (warnings were emitted)

But if

  @spec hello(integer()) :: map()
  def hello(int) do
    case int do
      1 -> %{hello: "world"}
      _ -> :wtf
    end
  end

then

done in 0m1.07s
done (passed successfully)

Why doesn't it generate any warnings when there is a clause that returns a value of the wrong type?
And if this is a feature, not a bug, how can the behaviour be made more strict?
It would be perfect if there were a flag to generate warnings when at least one possible clause has an incorrect return type.

7 Likes

Dialyzer uses optimistic type checking.

So if it can find one route through the code which type checks, then it is happy.

Crazy, huh?

Function guards enforce input strictness, but AFAIK, there’s nothing straightforward that enforces output strictness.
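To illustrate with the hello/1 example from above (a hypothetical sketch, not from the original post): a guard rejects bad input at runtime, but nothing comparable checks the return value against the spec.

  @spec hello(integer()) :: map()
  def hello(int) when is_integer(int) do
    # the guard enforces the input type at runtime...
    case int do
      1 -> %{hello: "world"}
      _ -> :wtf   # ...but nothing enforces that the result is a map()
    end
  end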

IMO, cases / ifs which output different types are probably a code smell; perhaps someday the Elixir compiler can show a warning for this…

Dialyzer is a terrible tool, compared to what you find in ML languages.

4 Likes

The reason for this is that dialyzer was added afterwards to a very dynamically typed language, first Erlang and now Elixir, with the requirement that it was not allowed to change the language. At all! So it does what it can. You will find that it doesn’t just type a function but type checks a function and the calls made to it together, to see if they agree.

Also, in some ways dialyzer basically ignores a function spec when type checking, so you don’t really need one to get the type checking. If you give one, it will check that it is consistent with what it can work out about the function, and it will use it when reporting type errors, which can be very useful. The same goes for type defs. Both are extremely useful for documentation, though.
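A rough illustration of that (hypothetical code, only a sketch): even without a spec, Dialyzer infers a success typing for add/2 and will flag a caller that can never succeed.

  # No spec at all; Dialyzer still infers the success typing
  # (number(), number()) -> number() from the use of +/2.
  def add(a, b), do: a + b

  # Dialyzer flags this call: it breaks the contract inferred for add/2
  # and can never succeed, spec or no spec.
  def broken, do: add(:one, :two)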

The compiler always ignores type specs except for some basic syntax checks.

Depressing no? :smile:

12 Likes

The question is: can this be improved in the future? TypeScript is doing a similar thing for JavaScript. On the bright side, the coverage of different use cases is increasing while compilation stays fast. The bad thing is that, IMO, the type system is kind of exploding with complexity.

Seems like Elixir could add opt-in strict type checking.

5 Likes

Maybe Elixir could just be what it is and other languages on the BEAM can build a foundation much more suited to this kind of thing. I would rather have something designed from the get-go to be strict than trying to bolt it onto Elixir/Erlang.

What I mean is that I think you’ll see better results trying to design for something like this much earlier, and languages like Alpaca are probably a better bet in this regard. If the people who really want a strongly, statically typed language on the BEAM (myself included) simply tried to help with Alpaca, maybe we could end up with something that could even become our BEAM language of choice.

5 Likes

I’d rather have this in Elixir 2.x. The downside of a new language is the fragmentation of the community. What’s the point of a new language if you can’t get hired to work in it? It’s still tough to find a job in Elixir.

4 Likes

I work with both Erlang and Elixir, i.e. I work with the BEAM. Would I have issues trying to convince project leaders to let me use LFE? Yeah, but the value proposition of something like LFE isn’t as obvious as a statically typed language that could allow for far less cognitive load. The value of a language like that would be self-evident at some point and it’s a much easier comparison than external statically typed languages vs. Elixir as well.

Not to derail this too much, but I wouldn’t want to work with someone who seemed to have issues with the Erlang/Elixir divide, because all the important things in Elixir are the important things for the BEAM in general. The same would apply to any other language on the BEAM, really.

Would Elixir itself really be changed to bolt on more strict type checking? I mean, from my naive standpoint the constructs for using types are there; it’s just how Dialyzer uses them that makes people wish for a different solution.

1 Like

Not for more strict type checking only, perhaps, but certainly for a better type system. If you want a decent language in terms of type system you should implement it in compiler passes and probably just output BEAM bytecode.

Probably my mistake, I assume in these kinds of conversations that we’re talking about going the whole way and supporting actual competent type systems, though in this case that’s probably false.

I’m not an expert on types, but we already have type declaration syntax that the community is actively using in Elixir. The problem is how Dialyzer interprets them (too loosely). Maybe a stupid question, but can’t we just build a source-code-level (Elixir AST) analyzer as a hex package (archive?) that reads source code, say via Code.string_to_quoted |> Macro.expand, and uses the type spec ASTs to match them against the source code AST?
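A minimal sketch of that idea (hypothetical module, not an existing package; it only collects the @spec and def ASTs from a file and leaves the actual checking out):

  defmodule SpecCollector do
    # Parse a source file and collect the @spec ASTs alongside the
    # name/arity of the functions defined in it.
    def collect(path) do
      {:ok, ast} = path |> File.read!() |> Code.string_to_quoted()

      {_ast, acc} =
        Macro.prewalk(ast, %{specs: [], defs: []}, fn
          {:@, _, [{:spec, _, [spec]}]} = node, acc ->
            {node, update_in(acc.specs, &[spec | &1])}

          {:def, _, [{name, _, args} | _]} = node, acc when is_list(args) ->
            {node, update_in(acc.defs, &[{name, length(args)} | &1])}

          node, acc ->
            {node, acc}
        end)

      acc
    end
  end

  # SpecCollector.collect("lib/hello.ex")
  # #=> %{specs: [...quoted specs...], defs: [hello: 1]}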

We are very interested in stricter typing at our company and are willing to put a sizeable bounty on this project, if it is a viable option of course :slight_smile:

5 Likes

Yes, in fact it could even be done via a library; this is what my TypedElixir and MLElixir experiments do (in different ways). :slight_smile:

(Wish I could get time to work on it some more, it’s so fun making such things!!)

Alpaca has promise, but it is more ‘pure’ in its typing, like having OCaml compile to JavaScript. It doesn’t have great integration with Elixir, and macros just don’t work at all. In addition, it can’t use the types that the rest of the BEAM focuses on, and it doesn’t black-box PIDs but instead tries to ‘type’ them (which seems inherently faulty on a BEAM language to me), etc… It is good as a more pure language, but not as something to interact with Elixir libraries.

Elixir is expressive enough (though the syntax definitely has some warts, like a couple of random n-arity ‘functions’, as one example…) to do it as a library, though. ^.^
Then you can mix-and-match as you want! :slight_smile:

It actually would be possible to make an LFE library that can make and consume Elixir macros though, unlike Alpaca. :slight_smile:

You could type LFE as well via a library for it.

Except macros. Parse transforms are indeed as powerful in Erlang, but just not as easy to use. Macros are where Elixir gets its power (a proper macro system in Erlang would make choosing Elixir over Erlang far, far harder, with only the library ecosystem (because of Elixir macros) being the deciding factor; I wish Elixir were more Erlang/BEAM compatible instead of somehow making custom/new formats…).

Precisely this. A good type system not only catches a lot of bugs (making entire classes of bugs just outright impossible), but it can also generate better code, make smaller and more expressive code constructs, and more! :smiley:

As stated above, just strengthening it alone will make the checks better, which is always good of course, but you don’t get the gain in code generation efficiency nor the better code constructs. That is why both of my TypedElixir and MLElixir experiments try to ‘use’ dialyzer specs as much as possible, but also expand above and beyond them. :slight_smile:

Oh I wish I had the time! It would be so fun to do!

4 Likes

Yes, I think the difficult problem is not to design and implement a decent strictly typed language on top of the BEAM/Erlang/OTP but to interface it with the outside world. Outside in the sense of everything else running in the same Erlang node.

There is no safe way to really protect yourself from bad calls and messages being sent to you. You could handle it the Erlang way and let it crash but that assumes you have an infrastructure around which knows what to do when you crash. And generally you only crash small bits. You would then have to be able to structure your language/system in such way so that if (when) something goes down you don’t take it all down. I don’t know how this could be done in such languages.

One serious issue with doing an Elixir 2.0 which is radically changed is backwards compatibility. People tend to get really worked up when their old code doesn’t work. :wink: Look at the problems with migrating from Python 2 to 3.

LFE can consume Erlang macros and record definitions and transpose those to LFE macros and use them. It works on most macros but some macros just aren’t transformable to lisp macros. It would probably be possible with Elixir macros as well but transforming the Elixir code to LFE is probably more difficult.

One problem I have with the Elixir compiler is that it doesn’t feel very open; for example, I don’t know of any well-defined way to get the resultant Elixir code after all macros have been expanded. Yes, there are ways of doing it, but not in the same easy way as with the Erlang compiler, where you just give it an option (‘P’) and out comes a file with the expanded code. This would help a lot when writing macros and when trying to understand other macros.
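For what it's worth, a rough approximation in Elixir (only a sketch; it is nowhere near equivalent to Erlang's 'P' option, since Macro.expand only sees what is available in the given environment):

  # Read a file, expand macros node-by-node, and print the result.
  # Macros that depend on module context (use, require, etc.) won't
  # expand correctly here - this is just a debugging aid.
  "lib/hello.ex"
  |> File.read!()
  |> Code.string_to_quoted!()
  |> Macro.prewalk(&Macro.expand(&1, __ENV__))
  |> Macro.to_string()
  |> IO.puts()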

LFE is already typed in the same way as Erlang and Elixir, so you can run dialyzer on it and get the same results as running dialyzer on Erlang and Elixir. :smile:

Not surprising, as that is why Elixir is implemented on top of BEAM/Erlang/OTP: to get access to all those features more or less for free. :smile:

While doing macros on top of Erlang is not really that difficult, you have a probably insurmountable problem, and that is the syntax. Basically, the macros transform function calls into other syntactic constructions. The problem is that function calls in Erlang really look like function calls, so there is no real way to define new syntax. And parse transforms are about as powerful as you can get, seeing as you work on the whole module, but you can’t do anything about the syntax.

It is easier in Elixir because even though you are still transforming function calls the syntax is more lax and allows you to write things which are function calls syntactically but don’t look like it. The trivial example is of course def so

def foo(n) do something(n) end

is a call to the macro def with 2 arguments, but it doesn’t look that way.
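Written without the sugar (same code, just in its plain call form):

  # the do-block form above is just an ordinary 2-argument macro call:
  def(foo(n), do: something(n))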

2 Likes

I mean in a strong and static way. ^.^

Well, Elixir compiles to Erlang, so that step would not be ‘too’ hard; the issue is that you have to translate the Erlang AST to Elixir’s AST, which is a lot of busy work, but I don’t see it being that hard to be honest — Erlang is a very simple language. :slight_smile:

It all makes much more sense when you fully expand the macros; it goes down to base-level calls. :slight_smile:

What is @josevalim's opinion about a statically typed Elixir? Is it possible that one day we will see an optional type checking mechanism introduced into the Elixir compiler?

2 Likes

After reading the discussion here, I have roughly the same question as @tx8 – are there any actionable low-hanging fruits that are foreseeably doable without the doers hating their lives?

I’d love a statically typed Elixir but I understand there are a lot of practical hurdles and it’s most likely not on the roadmap. In the meantime, if us the Elixir devs want to level up our game, what options do we have?

1 Like

In my very uninformed opinion, no. There are no low-hanging fruits here. You either go full static typing from the beginning or you’ll never get decent type inference for your program. Type inference is an all-or-nothing affair: you either have full decidability (e.g. Haskell or OCaml) or you have something like dialyzer. You can’t go from dialyzer to Haskell by incrementally adding more and more rules.

First, you need actual parametrized types, which I think dialyzer doesn’t support. You can’t have type signatures like the one from the List.delete/2 function, delete(list(), any()) :: list(), which expands into delete([any()], any()) :: [any()]. In this type signature, there is no relationship between the three types that appear. You’d need something like (pseudocode) delete(['a], 'a) :: ['a], which guarantees that the input and output lists have the same element type.

At the same time, there is a big problem with macros. Macros operate on the AST. If Elixir were to be strongly typed, the AST would have to be strongly typed as well. The most widely used decidable type system is probably the Hindley-Milner (HM) type system. Elixir’s AST can’t be represented in its current form in that type system, because it doesn’t support heterogeneous collections. You’d need to define an AST type with several constructors for the different node types, but in such a system writing macros is a completely different experience. It wouldn’t feel like writing Elixir as it is today. Again, such a change might be worth it, but it won’t look like Elixir.
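To make the heterogeneity concrete: even a trivial quoted call mixes an integer, a float, and a binary in one argument list, which an HM-style homogeneous list type can't express directly.

  iex> quote do: sum(1, 2.0, "three")
  {:sum, [], [1, 2.0, "three"]}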

Write another language that compiles to the BEAM with easy (typed) FFI for Elixir and Erlang, uses mix/rebar as a build system and writes the function docs to the new documentation BEAM chunks. For now forget about reusing Elixir’s macros. If you need a macro system, write your own.

Alternatively, there is some new research on type checking algorithms for types that include untagged unions and intersections, which might allow you to have static types without the need for named constructors. The main problem is that type inference is much harder within this system.

1 Like

This is delete(list(a), a) :: list(a) when a: var in Elixir specs.
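In context, with a hypothetical wrapper just to show the shape:

  # 'a: var' declares a type variable shared by input and output
  @spec delete(list(a), a) :: list(a) when a: var
  def delete(list, item), do: List.delete(list, item)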

2 Likes

Sorry, I should have reread the typespec docs before posting. In that case, I think some typespecs should be updated to use this.

Not that it would make much difference in practice.

I love types, but here

Jose says the future might be Clojure Spec

https://clojure.org/about/spec

which seems sensible to me.

Yet again, Elixir and Clojure show their likeness…

1 Like

Did you try the overspecs/underspecs flags? Erlang -- dialyzer
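For reference, this is roughly how those flags are passed when running Dialyzer through the dialyxir Mix task (assuming dialyxir; with plain Dialyzer they are the -Wunderspecs / -Woverspecs command-line switches):

  # mix.exs (sketch)
  def project do
    [
      app: :hello,
      version: "0.1.0",
      deps: deps(),
      # warn when a spec is more allowing (underspecs) or more
      # restrictive (overspecs) than the inferred success typing
      dialyzer: [flags: [:underspecs, :overspecs]]
    ]
  end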

Dialyzer is fine for the trade-offs it chose to make. Having something more strict most likely means rejecting code that is completely valid today. Or in the best case scenario, forcing types to be explicit and not relying on inference.

@tmbb is correct here. There aren’t low hanging fruits when it comes to types. Adding a type system involves many compromises and the compromises you are willing to make change a lot when the language already exists.

If you want to have type inference, then many Elixir idioms will no longer be supported. Elixir code is based on unions (it returns {:ok, _} | :error) and intersections (different clauses of a function) and those are very expensive for an inference engine.
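The idiom in question, for concreteness (a hypothetical example):

  # union in the return type: {:ok, term()} | :error
  @spec fetch(map(), term()) :: {:ok, term()} | :error
  # intersection across clauses: the function's overall type combines
  # what each clause accepts and returns
  def fetch(map, key) when is_map_key(map, key), do: {:ok, map[key]}
  def fetch(_map, _key), do: :error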

We could add requirements, such as obligatory typing of inputs, but that doesn’t solve the problem that we have both typed and non-typed code and we should likely check the values at the boundaries, adding runtime cost.

If you want to keep the language the same and support all of the existing idioms and also not worry about conversion costs: that’s Dialyzer.

Long story short: we could have other type systems but you need to start from scratch and you will have to compromise in many ways.

Apologies, but I did not mean that Clojure Spec is the future. It is unlikely we will have something like Clojure Spec in Elixir per se, especially because we have typespecs and guards, and adding a third option is going to bring only marginal gains. If somebody wants to explore something like Clojure Spec as a package, then please do!

7 Likes