Strong Typing vs Unityping

I know but it just makes it even worse. It is a conversation for another place maybe anyway.

What makes what worse?

Supposedly it makes you complacent with your code. I’ve heard that argument.

Uhh, does quite the opposite for me. With strong types I often think of how to define the types first, define the inputs and outputs of functions, all before I ever write a line of executable code that has to absolutely fulfill those contracts that I’ve set out.

Not with your code itself, but with how you use it. Because "Yeah, if it compiles, it works."


I know nobody with that kind of mindset


I took the liberty to move this set of posts to a separate thread because we were starting to get more and more off-topic. So let’s continue the discussion here!

@DianaOlympos Your argument can be used just as strongly against testing: “Yeah, if the tests are green, it works”. Which is just as much hubris, of course.

So, let’s take a step back:

As far as I know, strong typing has two advantages over no typing (also known as 'unityping': everything shares the same type and can be passed anywhere):

  1. There is more information for the compiler or (bytecode) interpreter to perform optimizations.
  2. It allows developers to specify more clearly how their code can be used by other pieces of code.

Yes, we live in a world that is impossible to fully prove mathematically without an infinite number of axioms. Still, strong and dependent typing definitely let you reason more deeply about your code, and you can prove that a certain subset of your code is correct. That does not mean it will do what you want it to do, of course. The logical mistakes that happen when translating a problem from the human domain to the logical domain cannot be solved by a computer.

However, strong typing will definitely let you get rid of implementation-detail mistakes like segfaults, memory leaks, data races, NullPointerExceptions, 'undefined is not a function', etc. Also, many of the logical mistakes will result in something that the typechecker might catch: if we forget some computational step and, for instance, return a list of floats instead of ints, then the typechecker will notify us about that.
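A minimal sketch of that float-vs-int catch, here in Rust for concreteness (not a language from this thread, just a strongly, statically typed one at hand; all names are invented):

```rust
// Sketch: we declare that this step returns a Vec<i64>. If a forgotten
// computational step left us with floats instead, e.g.
//     xs.iter().map(|x| *x as f64 / 2.0).collect()
// the compiler would reject the body against the declared return type.
fn halve_all(xs: &[i64]) -> Vec<i64> {
    xs.iter().map(|x| x / 2).collect()
}

fn main() {
    assert_eq!(halve_all(&[4, 6, 8]), vec![2, 3, 4]);
    println!("ok");
}
```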

If you go to the far end of the spectrum, dependent typing and formal verification are enough to ensure that your system is logically correct. This has been used, for instance, to create a microkernel for autonomous drones. The microkernel is proven to be hack-safe. Of course, mistakes by the verification team could still have happened, and there might also be hardware flaws that allow someone to circumvent the system, but this is as close as we can get to 'bug-free'.

It is not perfect, but it definitely is stronger than testing. (Do note that the domains of type-checking and testing do not completely overlap; having tests is still important, but in a strongly typed language, many trivial tests are already handled by the typechecking.)

But even without all that:
The two reasons above remain great arguments for using strongly typed systems.


Erlang and Elixir wouldn't be the same with strong types. (According to the Learn You Some Erlang book, there were attempts to create one, without success.) In fact, live upgrades wouldn't be possible, for example when a GenServer's loop data format changes (say you want to upgrade the state from a tuple to a longer tuple or to a struct, which is a problem if your variables can hold only a single type of value).

For those who want at least the feeling of strong types, there’s dialyzer and dialyxir.

To clarify: Elixir is a strongly, dynamically typed language. Dynamic/static typing and strong/weak typing are different things. On the other side of the spectrum, there's C, which is mostly considered to be a statically but weakly typed language.

This SO answer explains this well:


Exactly @michalmuskala

The problem here is not about typing itself. It is about using typing (especially strong static typing) as a way to prove your program. And the thing is, it does make it more "bug-free" in terms of implementation bugs. And I agree with you @qqwy, it is really close to testing in that idea.

The problem with doing that is that safety is not really affected by implementation bugs. Which is exactly why Elixir/Erlang are so good at producing fault-tolerant applications. In case of bugs or security issues, we know what to do: crash the affected part, reduce the blast radius of the problem, and have a coping strategy.

But that is not the case for design issues and specification issues. And the research data strongly shows that safety and security problems come from these types of issues, far more than from implementation errors, especially in a good fault-tolerant architecture.

On the other hand, type systems (used as a trojan horse for formal proof), TDD, and formal proofs do build a false confidence that the system works well. Which makes you forget that, in the grand scale of things, this is not the hard and dangerous part.

Hence why I love type systems, even as a trojan horse for formal proof, but I advocate not focusing on them too much. They are the tree hiding the forest.


I think it is entirely possible; it is just foolhardy to do something like type a PID: messages should be black-boxed and have to be matched on to extract data.

As for live upgrades, that is also entirely possible. You could do what the BEAM does not, and let each process keep running with its own code; or you could let them code_upgrade to translate the data from an old format to a new one (though again, that black-box parsing would be perfect and entirely generic); among a variety of other ways, all of which are used now with code upgrades.

That is Success Typing, not Static typing; those are two very very different things.

As a very simple example, with strong typing I can make a sum type that has only, say, two options, which we'll call Red and Green; the sum type is named BasicColors, and I can case-match on a BasicColors somewhere, matching out the Red and Green, and that all seems normal. If, however, I add a new constructor to the BasicColors sum type called Blue, then I rightly get either warnings or errors at every place where I'm not checking for it; the compiler makes sure that I'm not missing anything. Dialyzer can check some of those instances, but it cannot check all of them.
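The Red/Green/Blue scenario can be sketched in Rust, whose enums behave like the sum types described here (names taken from the post, otherwise illustrative):

```rust
// The BasicColors sum type as a Rust enum. If a Blue variant is added
// later, every `match` that omits it becomes a compile error
// (non-exhaustive match), so no call site can be silently missed.
enum BasicColors {
    Red,
    Green,
}

fn to_hex(c: &BasicColors) -> &'static str {
    match c {
        BasicColors::Red => "#ff0000",
        BasicColors::Green => "#00ff00",
    }
}

fn main() {
    assert_eq!(to_hex(&BasicColors::Red), "#ff0000");
    assert_eq!(to_hex(&BasicColors::Green), "#00ff00");
    println!("ok");
}
```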

Yep, I think of it like this (we still need a table plugin for this forum…):

  • Strong Static: A binding's type is known at compile-time and it cannot be 'cast' to another; a conversion has to be done. Examples: OCaml, Haskell
  • Strong Dynamic: A binding's type is not known at compile-time; however, it is 'defined' at run-time and thus cannot be cast to another type; a conversion has to be done. Examples: Python, Lua
  • Weak Static: A binding's type is known at compile-time; however, it can be auto-cast to something else or have its bits re-interpreted. Examples: C, C++, Pascal
  • Weak Dynamic: A binding's type is not known at compile-time, or even at run-time in some cases; you can cast a string to an integer and back again with ease. Examples: VisualBasic, JavaScript
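A one-liner illustrating the 'Strong Static' row, sketched in Rust (purely illustrative; the function name is made up):

```rust
// Strong static typing in one line: `n + 2.5` where n is an integer
// does not compile in Rust; the integer must be converted explicitly
// before mixing it with a float. No silent auto-cast happens.
fn to_float_total(n: i32) -> f64 {
    n as f64 + 2.5 // explicit conversion; `n + 2.5` would be rejected
}

fn main() {
    assert_eq!(to_float_total(1), 3.5);
    println!("ok");
}
```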

When I speak of static-strong typing I am, 95% of the time, talking about Hindley-Milner-style typing or even stronger (like refined or dependent typing).

This, I think, is one of the strongest arguments for HM typing, because you have to think about your inputs and outputs before you write code of any decent complexity; you have to think about the API, about how and what information will be held.


Is there a term for typing as used in statically typed functional languages which pithily sums the advantages up?

If not, can we come up with one, that conveys the advantages?

I.e. 100% of the code in a language can be typed (Elm, Haskell, etc.) versus cases where far less can be usefully typed (any OO language where void is implied, or where side effects abound and are hidden from the type definitions).

I feel that’s why types are so useful in functional languages (and often come for free with inference) while they were looked down upon in Ruby etc.

Here’s some suggestions:

‘functional typing’
‘precise typing’
‘high coverage typing’
‘full disclosure typing’
‘inferred typing’
‘pure typing’
‘theorem typing’ (ala Curry Howard)

I think the most traditional name for it is Hindley-Milner typing, as it is one of the simplest yet most robust type systems out there (and the one that Haskell, OCaml, and most others use). Many statically strong type systems are just further refinements and additions on top of it (since HM typing uses inference where most other type systems do not).

There are of course others, but that is the most common one.
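A small illustration of what inference buys you, sketched in Rust (whose inference is local rather than full HM, but the flavor is similar; all names are invented):

```rust
// Inside the function body, nothing is annotated: the closure's
// parameter type, the iterator's element type, and the collected
// container type are all inferred from use, much as an HM checker
// would infer them. (Rust does require types on fn signatures.)
fn doubled(xs: Vec<i32>) -> Vec<i32> {
    let double = |x| x * 2;              // closure type inferred
    xs.into_iter().map(double).collect() // element/result types inferred
}

fn main() {
    assert_eq!(doubled(vec![1, 2, 3]), vec![2, 4, 6]);
    println!("ok");
}
```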


not v catchy, but spot on, of course : )

My personal choice would probably be something like ‘accurate typing’.

Types /accurately/ describe everything.

Actually how does OCaml’s type system feel versus Elm for example? I suppose you just bias against too many side effects / mutable state etc. and keep everything hunky dory?

They are both HM Typing with extensions.

OCaml's extensions are primarily its module system (the 'M' in its name), which allows for fun, entirely type-safe dynamic code generation. OCaml has others as well (row typing for record objects and such), but that is the primary big one that is unique among HM languages (its module system is significantly more powerful than SML's, for example).
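Rust has no direct equivalent of OCaml functors, but a rough (and only loose) analogue of the idea of type-safe code parameterized by an interface can be sketched with generics over a trait (illustrative names only):

```rust
// Loose analogue only: an OCaml functor takes a whole module as a
// parameter and produces a new module; a Rust generic function over a
// trait captures a similar idea of code generated, fully type-checked,
// from an interface.
trait Ordered {
    fn cmp_to(&self, other: &Self) -> std::cmp::Ordering;
}

impl Ordered for i32 {
    fn cmp_to(&self, other: &Self) -> std::cmp::Ordering {
        self.cmp(other)
    }
}

// "MakeMax" in functor terms: works for any type satisfying Ordered.
fn max_of<T: Ordered>(a: T, b: T) -> T {
    if a.cmp_to(&b) == std::cmp::Ordering::Less { b } else { a }
}

fn main() {
    assert_eq!(max_of(3, 7), 7);
    println!("ok");
}
```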

Elm's main extension (really its only one, I think) is some kind of 'fake types' that pretend to be typeclasses, like number (which matches both integers and floats) and comparable (which matches a lot of stuff). However, Elm does not do this in a type-safe way, and its extension to HM is entirely unsound. For example, it allows you to basically 'cast' anything to anything (unsafely, in a JavaScript context) by using them:

import Html

intId : Int -> Int
intId x = x

toInt x =
    if x > 0
    then intId x
    else intId x

Then you can use ‘toInt’ to cast to anything:

banana : Int
banana = toInt "banana"

So yeah, now you have an Int that is "banana". There is also this bug:

bug : comparable -> Bool
bug n =
    n == 1

Which breaks like:

import Bug exposing (bug)

bug 1
> True

bug 'a'
> False

bug ""
> False

bug "asdf"
> False

bug []
> False

bug (0)
> True

bug (1)
> False

Among many many other typing bugs all over their issue trackers.

Elm’s type system is fairly sound except for its fake typeclasses that should not exist.
In addition to the type unsoundness, the Elm compiler has a lot of bugs in general, ones that hit people routinely, from dozens of bugs where it does not compile to sensible JavaScript to far more.

OCaml, on the other hand, has been fairly rock solid for decades now, and any language feature goes through a bit of hell to get added: features are only added if they cannot be emulated in some other readable way, are entirely sound, and so forth. (Implicit modules will most likely be the next big language addition, and it has been years in the making.)


Except it is not the source of the problem at all.

The problem is that even if you think about it, you have no control over who is going to use your API. You can make it the best ever; that will change nothing.

Plus, you cannot escape the errors created by hardware or the layers below you.

You are even a good example here. You think a great argument for that type of type system is that it makes you think about your inputs and outputs, when this is far from being the root of the problem.


Maybe it is correct to say that strong typing allows you to be more explicit in your APIs than weak typing does. That is, for me personally, the main reason I like working with strongly typed languages.

I completely agree with you, @DianaOlympos, that a typed system is not an adequate replacement for the fault-tolerant resilience that supervision tree-style concern segregation allows you to have.


What problem?

Change what though? /me is confused about what is being discussed…

One big thing it would change over unityping is the contract that 'my' library gives to the user: I can at least say that I will fulfill such-and-such contract, and the compiler makes certain that I do not do stupid crap like returning a value when I meant to return a {:ok, value}; it is meant to save me from my own mistakes. Sure, you could type everything to take and return Dynamic types, but then I have to add typing tests to my tests in addition to testing the logic, which balloons them and may still not catch all the cases that the compiler could catch for me with ease.
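A sketch of that {:ok, value}-style contract, transposed to Rust's Result type for illustration (the function name is made up):

```rust
// Sketch: the {:ok, value} | {:error, reason} convention as a
// compiler-checked contract. With Result<i64, String> declared,
// accidentally returning a bare i64 is a compile error, not a bug
// that the caller discovers at runtime.
fn safe_div(a: i64, b: i64) -> Result<i64, String> {
    if b == 0 {
        Err("division by zero".to_string())
    } else {
        Ok(a / b) // returning just `a / b` here would not compile
    }
}

fn main() {
    assert_eq!(safe_div(10, 2), Ok(5));
    assert!(safe_div(1, 0).is_err());
    println!("ok");
}
```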

Uh, I’m unsure what that has to do with a typing system? Static/Strong Typing vs Unityping should not depend on the hardware or anything else that exists beyond compile-time at all?

Root of what problem? The problem I've been speaking of is contractual, and thinking about your inputs and outputs is the very essence of ensuring contracts at API bounds. Not necessarily to verify that the 'user' uses them as you state, but that you yourself fulfill the contract you state you are going to fulfill in a data-passing sense. Logical contracts are an entirely different issue (though typing can help with those as well; it can prevent you from doing something stupid like multiplying a kilometer and a mile).
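The kilometer/mile point can be sketched with wrapper types, here in Rust (all names illustrative):

```rust
// Sketch: single-field wrapper types make a kilometer and a mile
// different types, so mixing them up (e.g. passing a Mile where a Km
// is expected) is a compile error rather than a silent logic bug.
#[derive(Debug, PartialEq, Clone, Copy)]
struct Km(f64);

#[derive(Debug, PartialEq, Clone, Copy)]
struct Mile(f64);

fn add_km(a: Km, b: Km) -> Km {
    Km(a.0 + b.0)
}

fn main() {
    assert_eq!(add_km(Km(1.5), Km(2.5)), Km(4.0));
    // add_km(Km(1.5), Mile(2.5)) would be rejected by the compiler.
    println!("ok");
}
```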

Very much this: the typing specifies a contract that I am promising my users I will fulfill (if I do not, it is a bug). How it is implemented internally is a separate matter; whether it is OOP, functional, stack calls, OpenCL, the actor model, whatever, none of that has anything to do with typing.

I’m still confused about what is being talked about in regards to supervision tree concern segregation and what that has to do with static/strong typing vs unityping. ^.^;

How would you even apply such a thing to a static type system anyways? I think I need an example of that to understand what is being talked of… ^.^;