Full static type inference of set-theoretic types

That’s an excellent question, and I ask myself the same thing. I believe we do want to provide mechanisms for developers to know and/or enforce whether arrows are strong, but it is yet unclear what those mechanisms should be.

12 Likes

That is correct, the paper describes the `_` return type as the best approach for gradualized types. It is better to let the compiler’s type system infer the types and strong arrows using guards, as there are a number of cases where you can miss out on the benefit of type materialization if you specify the type as `a()` or `b()` and eliminate the flow of dynamic, e.g. `integer() and dynamic()`. This relates to the part in José’s talk about his t-shirt being allowed to be grey and something else.
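To illustrate the trade-off (a made-up `increment` function; the `$` signature syntax is from the talk and not yet available):

```elixir
defmodule Example do
  # A guard alone lets the checker infer the domain while still letting
  # dynamic() flow through, preserving the gradual behaviour described above.
  def increment(x) when is_integer(x), do: x + 1

  # A hypothetical explicit signature would pin the type to integer()
  # and cut off the flow of dynamic():
  #
  #   $ integer() -> integer()
  #   def increment(x), do: x + 1
end
```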

Just watched the talk, the strong arrow idea is great.

I fear I have code with very few strong arrows, or maybe I don’t.
Would it be feasible to build some tooling that tells us how our code will behave in this respect?
Is the first point on the roadmap, “infer types from patterns and guards”, maybe that?

That is my understanding: no annotations in the first release, then, after that, type annotations on structs.

Not sure if you’ve listened to this, but since Roc was mentioned: there’s a podcast episode where the author of Roc hosts José for a discussion on the implementation side of the type system: Software Unscripted: Static Types in Elixir with José Valim on Apple Podcasts

Not sure if the details have stayed the same since then (~1 yr ago), but I thought there were some nice insights in there.

2 Likes

There is no code with strong arrows today, because a strong arrow must be statically typed (and nothing has a static type today). The idea, however, is that as we make Elixir itself statically typed (and the underlying Erlang functions), then we may find type checking violations in your untyped/dynamic code.

On your side, what you can do is write patterns and guards, which is idiomatic Elixir code anyway; this will help the VM optimize code and the future type system find bugs.
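For anyone wondering what that means concretely, it is just the usual multi-clause style (a made-up example):

```elixir
defmodule Shape do
  # The pattern constrains the shape of the input and the guard constrains
  # its domain, which is exactly the information a type checker can reuse.
  def area({:circle, r}) when is_number(r), do: :math.pi() * r * r
  def area({:rect, w, h}) when is_number(w) and is_number(h), do: w * h
end
```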

4 Likes

Sure, I understand that.

But when I have functions like these today:

```elixir
def foo(x) when is_integer(x)

def bar(x)
```

And I’d add:

```
$ int() -> int()
```

when types are there, I’d have a strong arrow for foo and a “weak” one for bar, right?
So that’s what I mean: I have no good intuition for how many of my functions would qualify for strong arrows.
It would be helpful to have something early on to get a better view of the codebase in this respect.

It depends. If bar calls other strong-arrow functions, then you may get the property by transitivity.
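In code, the transitivity point could look like this (illustrative sketch):

```elixir
defmodule Transitive do
  # foo's guard gives it a strong arrow over integers.
  def foo(x) when is_integer(x), do: x + 1

  # bar has no guard of its own, but its argument flows straight into foo,
  # so bar may inherit the strong-arrow property transitively.
  def bar(x), do: foo(x) * 2
end
```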

But yes, how to discover and declare if something is a strong arrow or not is something we will tackle/expose at some point. Too early to say precisely how though.

2 Likes

Excuse my ignorance, José, but I am really curious after watching your talk: the extent of type checking will likely be proportional to its impact on compile times. Clearly, this is an important concern. At the same time, there is little point in introducing a type system without exhausting its type-checking potential.

My question therefore is: Is it theoretically feasible to decouple compile-time type checking from compilation proper? Could we conceivably have v1.15-like-fast compilations, with the option to compile with type checking enabled, when and as often as desired?

4 Likes

Maybe that’s an option we can consider if the performance cost is too high, but I would prefer to avoid it as much as possible, because then the type system would only be partially enforced.

8 Likes

Just listened to the podcast; it is great, and it tackles the topic of this thread really well.

I am wondering: will you still be able to run applications that would yield type errors, or will it throw a compilation error? I know it doesn’t make much sense from a logical point of view, but I like how Roc allows you to run programs that have some kind of errors in them.

This means that each function and each variable will have a type assigned. And this type might be a complex expression, like a union consisting of N clauses. To check that the type inferred from a function body is a subtype of the type declared in the function’s typespec, the compiler must compare these two expressions, which is a computationally hard problem and can (and will) consume a lot of time.

Therefore compilers use different approaches, like speculative complexity reduction (for example, Dialyzer reduces the type expression `@type enum() :: :x1 | :x2 | :x3 | :x4 | :x5 | :x6` to `@type enum() :: atom()`).

And in my honest opinion, the precision the type system team is aiming for will create a lot of complex types, so type checking will either consume a lot of time or, with a complexity-reduction approach, force developers to handle situations that are never going to happen. This is a problem caused by the “paper first” approach, where theoretical ideas were developed before the actual implementation.

2 Likes

This optimization sacrifices a lot of precision. Moreover, I am not sure that is what actually happens, because if it were, this kind of syntax would be misleading and cause more harm than good.

I think the approach with the paper is correct. The aim of the paper is not to prove that this can be implemented efficiently; rather, it is a preliminary step that proves mathematically that the idea makes sense, on a topic that is both very complex and has little practical information in the wild.

This approach also works fully in favor of the developer: it starts from “we want this experience for our developers” instead of working backwards from a performant implementation. I think in the development world there are two opposing philosophies:

  1. Make the language highly optimized, by introducing a lot of syntactic complexity. People seem to praise languages like Rust a lot these days; however, in the field I work in, mainly development of applications for businesses and end users, that language offers trade-offs I am not interested in: development time and complexity matter much more to me than the performance gain.
  2. Tailor the language to be developer friendly, which seems to be the direction Elixir and Erlang took. A simple example is how constrained message passing between processes and mailboxes is: that makes it easy to reason about in more complex applications, but there is a clear performance penalty (see the sketch below).
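
For readers unfamiliar with that model, here is a minimal sketch of how constrained those primitives are: processes share nothing and interact only by sending messages to each other’s mailboxes.

```elixir
parent = self()

# The child can only react to messages that arrive in its mailbox.
child =
  spawn(fn ->
    receive do
      {:ping, from} -> send(from, :pong)
    end
  end)

send(child, {:ping, parent})

receive do
  :pong -> IO.puts("got pong")
after
  1_000 -> IO.puts("timed out")
end
```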
3 Likes

@josevalim In the talk you present options 1, 2, and 3, and then explain what strong arrows are and that Elixir will fall back to one of the options when a strong arrow cannot be used. But you don’t say which one. Is it still undecided?

2 Likes

This is unfortunately how Dialyzer currently works with union types, although the actual limit seems to be 14 and not 6.
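
If you want to observe the widening yourself, a throwaway module like this should do (hypothetical module and type names; the exact threshold and output depend on your OTP/Dialyzer version):

```elixir
defmodule UnionLimit do
  # A union above Dialyzer's internal limit is widened to atom() in the
  # types Dialyzer infers; below the limit it stays precise.
  @type wide ::
          :a01 | :a02 | :a03 | :a04 | :a05
          | :a06 | :a07 | :a08 | :a09 | :a10
          | :a11 | :a12 | :a13 | :a14 | :a15

  @spec pick(wide()) :: wide()
  def pick(x), do: x
end
```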

3 Likes

Seems like a memory-efficient Bloom filter would be a better approach to unions and intersections of sets than a complete generalization.

Type errors will initially be emitted as warnings (and potentially for quite some time).

There is an implementation of set-theoretic types in the CDuce programming language, which already employs many optimization techniques we can borrow (some of them discussed in this paper). Given we took a “paper first” approach, I recommend reading the existing literature if you want to speculate on the implementation details. The literature exists precisely so we don’t have to rely solely on opinions.

I also don’t rule out the possibility of implementing the data types of the type system in a native programming language, such as Rust. Again, it is not my first option, as I would prefer Elixir, but the type system is powered by different kinds of sets that define union/intersection/negation, which we could implement in a native language with performance as the ultimate goal. The traversal of the AST and the application of the typing rules would still be done in Elixir.
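To make that concrete, here is a toy sketch (not the real data structure) of the closed set algebra such a representation needs, using finite/cofinite sets of atoms so that negation over an infinite universe is well defined:

```elixir
defmodule ToyTypeAlgebra do
  # A type over the infinite universe of atoms is either a finite set of
  # atoms or the complement of one; all three operations stay closed.
  def negation({:finite, s}), do: {:cofinite, s}
  def negation({:cofinite, s}), do: {:finite, s}

  def union({:finite, a}, {:finite, b}), do: {:finite, MapSet.union(a, b)}
  def union({:finite, a}, {:cofinite, b}), do: {:cofinite, MapSet.difference(b, a)}
  def union({:cofinite, a}, {:finite, b}), do: {:cofinite, MapSet.difference(a, b)}
  def union({:cofinite, a}, {:cofinite, b}), do: {:cofinite, MapSet.intersection(a, b)}

  # De Morgan: a ∩ b = ¬(¬a ∪ ¬b)
  def intersection(a, b), do: negation(union(negation(a), negation(b)))
end
```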

In any case, either we go paper-first and later find out it is too complex to implement, or we go implementation-first and later find out the implementation is incorrect. If it fails, both end up in the same place: we don’t have a type system that meets our requirements. I already tried the implementation-first approach back in 2018 and I know exactly where it ends: with you wondering whether what you are doing is, by any chance, correct.

Excellent question, I was waiting for someone to ask it. :smiley: We will start with option 1: we will fall back to dynamic. However, I can see us also doing option 3 in the future, if we can measure places where introducing the runtime checks will not decrease, or may even improve, performance.

10 Likes

Thanks! I guess it has been suggested already, but it would be cool if the compiler could emit a notice in such cases and suggest guards to add.

2 Likes

My criticism is based on experience with existing implementations of type systems. Gradualizer, Dialyzer, and eqWAlizer didn’t suffer from an inexpressive type system (IMO they were expressive enough); they suffered from the complexity of type inference and subtype checks. Dialyzer and Gradualizer reduce complexity, and eqWAlizer is just slow (though it is written in Scala, not in Elixir). And the original paper on typing in Elixir says nothing about optimizations of type checking. The approaches suggested in the paper work only in certain cases.

Generally speaking, existing type checkers have suffered from problems caused by complexity explosion. For example, the code `Enum.random(for x <- 1..n, y <- 1..n, x != y, do: {x, y})` will have `n * n - n` values with naive type inference, and I don’t see how this can be expressed in a set-theoretic type system without enumerating all available values. Reducing the complexity to `{x :: 1..n, y :: 1..n}` will create a false positive in `1 / (x - y)`. Dialyzer took a “no false positives” approach, so this won’t be raised as a type error.
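
Spelled out as runnable code (note that comprehension filters in Elixir are plain expressions, not `when` guards):

```elixir
n = 4

# The filter excludes the diagonal, so `pairs` has n * n - n elements,
# none of them with x == y.
pairs = for x <- 1..n, y <- 1..n, x != y, do: {x, y}

{x, y} = Enum.random(pairs)

# Safe only because of the filter: widening the element type to
# {1..n, 1..n} would flag a division by zero that can never happen.
1 / (x - y)
```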

And this “no false-negative” approach is a problem leading to a community split, where some people will refuse to use working dynamic code because of its false positives, and some people won’t use statically typed code because of its unnecessary runtime checks.

If I were to take the paper-first approach, I would first address the problems related to the performance of the type checker and the complexity of its algorithms, and only once those were solved would I incrementally add features to the expressiveness of the type system. Especially since this type system takes the “no false-negative” approach.

Anyway, I would like to participate in the development of the type system, but I have absolutely no free time. I will definitely read the code of the type system development branch, though (if there is one).

Sure, and by mentioning CDuce and the paper, I hope they can now be used as additional examples to expand your experience. There are also many smaller trade-offs each of those implementations makes that directly impact performance (inference vs non-inference, local vs global analysis, etc.).

Which cases? Can you please elaborate? The paper is about set-theoretic types and the implementation of set-theoretic types, so I would be surprised if the suggestions listed there did not apply to the set-theoretic type system we are implementing.

Correct, but it is not yet decided that integers will be singleton values in our type system. Dialyzer chose singletons because it aligns with their trade-offs, but it is unclear whether this choice would be beneficial to us (for both performance and false-positive reasons). We may choose between a single integer(); a split into pos_integer(), zero(), and neg_integer(); or each integer as its own type.
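
For reference, today’s typespec syntax can already name all three granularities (there is no built-in zero() type; the literal 0 plays that role):

```elixir
defmodule IntGranularity do
  @type coarse :: integer()
  @type sign_split :: neg_integer() | 0 | pos_integer()
  @type singletons :: -1 | 0 | 1
end
```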

My point is: sure, the type system may still be slow in the end, but extrapolating from X, Y, and Z to draw conclusions is still just a guess. So let’s be aware of the pitfalls, do the work, and then make decisions.

Again, there is no point in trying to optimize something if we are not sure it is going to fully suit the language. I understand if you’d prefer that, but it is not my preference.

In any case, you are more than welcome to try a different direction! I’m looking forward to the insights you might have and the challenges that may arise when implementing the existing algorithms and optimizations on the Erlang VM. :slight_smile:

8 Likes