Types 'n' Testing

Gleam is statically typed.

It’s hard for me to extract an actual argument from your comments because they seem to be strongly held opinions and are in places non-factual.

I think you are describing a feature that compilers nowadays have: full type inference. The idea is not far from how Dialyzer’s success typing works, but it is more constrained. I think an example is in order to make it clearer:

Example 1:

a = 30
b = add_three(a)

# Should be a compile-time error: `b` is inferred to be a number,
# but List.first/1 expects a list
List.first(b)

# Should also be a compile-time error: from `number + 3` the compiler
# can infer that the parameter must be a number
add_three("sad string")

def add_three(number) do
  number + 3
end
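For comparison, here is roughly the same example in a statically typed language with inference, such as TypeScript. This is my own illustrative sketch, not code from the thread; note that unlike the fully inferred hypothetical above, TypeScript requires the parameter annotation rather than inferring it from `n + 3`.

```typescript
// Parameter type is declared; return type and locals are inferred.
function addThree(n: number) {
  return n + 3; // return type inferred as number
}

const a = 30;          // inferred as number
const b = addThree(a); // inferred as number

// Both of these are rejected at compile time:
// b.length;         // error: property 'length' does not exist on type 'number'
// addThree("sad");  // error: string is not assignable to parameter of type number

console.log(b); // 33
```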

You asked for an example and I provided the closest example that exists for BEAM languages where type inferencing is strictly better than static type declarations.

@dimitarvp You’re arguing on an unfair basis and purposely missing the point.

Gleam uses type inference to obtain strong typing, which is the point I am trying to make: something similar to, but not identical to, what I am describing (as I also want value-constraint checking).

In Gleam, type annotations or declarations are not required; the types are not static in the sense that they don’t have to be statically declared by the developer, and even when present they don’t influence the type inference whatsoever: the compiler knows the types regardless of what decorations you use. The code is therefore amenable to a change in types without refactoring, because it’s not an echo chamber of repeated static type declarations on everything.

It is telling that you have said numerous times that you live in a land of refactoring hell, and that your reaction was one of incredulity when refactoring turned out to be of no consequence to me in my development.

Understand that brittleness actually emerges from static type declarations and it begets more code and a lot more code churn. Add throws onto things, async and all the type ceremony that inferior languages do and sure, life is a refactoring nightmare. I avoid these tarpits and I don’t want Elixir sliding down this slippery slope into the tar either.

I don’t think I have said it enough, as clearly it did not get through:

I do not want to be reminding the compiler what the type or value constraint is all over my code, I want strong typing and value constraint inferencing without the developer overhead or effort of manually maintained static types.

You disagree with the above it seems.

I simply want more checking than you, for much less effort, and I do not want the Elixir language ruined by well-intentioned but brittle static type declarations, which you seem to think are the only way.

Elixir typespecs and specs are already an eyesore and overhead that I cannot wait to be rid of with type inferencing.

I guess it’s a problem of differing terminology but I don’t see what Gleam has to do with strong typing as it compiles to Erlang (strong typing) and JavaScript (weak typing), at least in the traditional meaning of the terms. But the type inference in Gleam is not complete and the compiler asks you to add types if it can’t deduce what you are doing.

If we look at some real Gleam code I wrote: src/glemplate/renderer.gleam · 5fae861c8649506405e88c5b79214979a20c8b85 · Mikko Ahlroth / Glemplate · GitLab

We can see that the types can mostly be left out in two cases: variable assignments and return values. These can usually be inferred by the compiler, but it’s still very much static typing, since static typing is not an alternative to type inference (Gleam has both).

If you want type inference that can guarantee that the compiler knows what types your variables / arguments / return values are, then it sounds to me that you want static typing with type inference, like Gleam or TypeScript do. I’m not sure type inference can meaningfully be done without static typing – we have surely seen with Erlang and Elixir that Dialyzer is not enough, and misses very obvious error cases (as it just doesn’t have enough information).
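To illustrate that combination, here is a small TypeScript sketch of my own (not from the linked Gleam file): the checking is fully static, yet nothing is annotated except the parameter, and the compiler can infer even a non-obvious return type from the function body.

```typescript
// Return type is inferred as `string | number` without any annotation:
// the compiler tracks both branches of the conditional.
function parsePort(raw: string) {
  const n = Number(raw);            // inferred: number
  return Number.isNaN(n) ? raw : n; // inferred: string | number
}

const port = parsePort("8080"); // inferred: string | number
console.log(port); // 8080
```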


I have never said that I “live in refactoring hell”. Why do you feel the need to invent stories to make a point?

This is the case in Rust and OCaml at least and other newer languages are going the same route.

You seem to be saying “I don’t want languages without type inference” which is OK. Who is fighting you on this and why do you feel the need to emphasize it as much? :thinking:

Here we will have to agree to disagree. You have (1) repeatedly ignored an important discussion point that I made – namely that you are putting two techniques in an arena and looking at which one is better, and I asked why not use both? – then (2) you go on lengthy diatribes claiming your opinions as fact, and finally (3) you say I argue in bad faith.

I mean OK, if it makes you feel better. ¯\_(ツ)_/¯

I will disengage because this is not productive at all. You don’t seem to be responding to me, you are responding to things that bothered you long before you engaged in this thread. And you fixated on me as if I am everything wrong with programming these days. :sweat_smile:

Yep I know what type inferencing is, and I agree with @Nicd that it’s difficult or impossible to achieve it without actual static typing.

The thing that it compiles to doesn’t matter; the actual enforcement and type inference happen inside the Gleam → Erlang compiler, so the only question is how constrained the language is.

It’s impossible; the best you can get is what Dialyzer currently offers. I think the argument shifted at some point from arguing about the usefulness of static typing to the boilerplate that the classical static-typing implementation creates in function definitions.

On further thinking you are of course correct here. :slight_smile:


That boilerplate is very overrated at least in Rust and OCaml (can’t remember about Haskell). Gleam seems quite concise as well.

To call adding a single word to a function argument “boilerplate” is a bit unfair.

There is no need to employ personal attacks here. It’s frustrating when people don’t get your point, but please remain respectful.

If current compiler technology is capable of doing this without you having to write signatures for every function you write, then I would always go for inference over defining the type in the signature. You don’t lose any benefits of static typing, and at the same time you don’t have the verbosity of redeclaring types that people hate.


I am referring to your post arguing for the virtues of declaring static types on everything to benefit refactoring:

And this reaction when I said refactoring really isn’t an issue and very manageable due to sound structuring:

Hence your world vs mine.

I don’t live my life in my editor refactoring type declarations and I don’t want to.

I think of functional programming as algorithms (mostly reducers over a state) that really don’t care what the type is, as long as there is a matching clause on the function or operator (which is still a function pattern match). It’s very much about value semantics and constraints, not static types.

The last thing I want to happen in our ecosystem is getting caught up on manually managing static type hierarchies, type unification, and all the “fun” that brings. Sometimes I may have to pass a Dog, sometimes a Poodle, and sometimes a Gorilla. Do I need to list each of these types everywhere or do I create a “generic” Animal type? Is an Amoeba an Animal too? What about Plants?
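For what it’s worth, structurally typed languages with inference sidestep some of that hierarchy question. Here is a hedged TypeScript sketch of my own (the animal names are just borrowed from the paragraph above): a function accepts anything with the right shape, with no declared `Animal` hierarchy and no listing of concrete types anywhere.

```typescript
// No nominal Animal hierarchy: any value with `name` and `eat` fits.
function feed(creature: { name: string; eat: (food: string) => string }) {
  return creature.eat("snack") + " for " + creature.name;
}

const dog = { name: "Rex", eat: (food: string) => "chomped " + food };
// Extra fields are fine structurally; `gorilla` never declares any type.
const gorilla = { name: "Koko", eat: (food: string) => "peeled " + food, strength: 9 };

console.log(feed(dog));
console.log(feed(gorilla));
```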

I say N-O to where this ultimately leads, and yes, I am resisting static type declarations in favor of type inference, in the hope that others can see value in such an approach too rather than thinking classic static types are the only way.

I am saying there is another way through inferencing which can avoid many of the problems of declaring static types.

That is the point, I don’t want manual static typing which is what you seem to be arguing for, and which leads to the maintenance problems I have articulated.

It is quite ok.

I accept that we both have different viewpoints and a significant difference of opinion even though we both desire better compile time checking. That much we can agree on I think.

Yes, and there are languages doing it. Best of both worlds, right?


I have never written in a language that has powerful type inference, but people seem to praise languages like Elm highly. I have a feeling that this magic must have some drawbacks in practice, especially once we add things like polymorphic functions into the equation.

Rust and OCaml. As far as I am aware – with three-plus years of first-hand Rust experience – both are pretty powerful.

Though in OCaml’s case I’d strongly recommend not overdoing the leave-out-the-types approach, because with parametric polymorphism it can get confusing and cryptic really fast.

I don’t think Rust can be classified as one of those languages with powerful type inference; it is better than languages like Kotlin, C#, Java, and C++, but it is still very verbose.

I think this is one of the pitfalls of having powerful inference: sometimes you get errors that you can’t possibly understand, which is why tooling that guides you through them is also important.

I wouldn’t mind having this full inference in Elixir, but not the way it is done in Rust; I would rather have it as an optional feature. Basically, I want to write Elixir code the way I did before and have this magical type inference at compile time.


The only fundamental driver for manual type declarations is where the compiler does not actually have the information to infer the type (or value) constraint. In such cases it is not optional and is actually required for soundness.

This is however quite a distinct use case from exuberant festooning of the code base with static type adornments.
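A small TypeScript sketch of that distinction (my own example): inference has nothing to go on for an empty collection or for data arriving from outside the program, so an annotation is genuinely required at those points, while everywhere else it can be omitted.

```typescript
// Nothing to infer from: an empty array needs an annotation to be useful.
const scores: number[] = [];
scores.push(42);

// External data is another such boundary: JSON.parse returns `any`,
// so we assert the shape we expect once, at the edge.
const config = JSON.parse('{"port": 8080}') as { port: number };

// Beyond those boundaries, inference takes over again.
const doubled = scores.map((s) => s * 2); // inferred: number[]
console.log(doubled[0], config.port);
```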

So much for my not wanting this to turn into a static vs dynamic typing thread :sweat_smile: I knew it was likely inevitable and I’m still learning some stuff! Also, thanks to @adw632 and @dimitarvp for pointing out NimbleOptions earlier, I was unaware of it.


Yes, and that pathological case is quite rare in my commercial Rust experience.

The Elixir type work seems to be heading in the right direction: the initial implementation will not support any type annotations and will infer what it can using just function guards. It will be a long road.

Whilst they do introduce a type annotation syntax in the paper, it is used for expressing set-theoretic type concepts, not as a language syntax for the implementation.

Paper here https://arxiv.org/pdf/2306.06391