Strong Typing vs Unityping

And they are awful for safety exactly because of that, but this is a completely different problem :stuck_out_tongue:

Great for safety; just, any time types may change between versions, be sure to encode the data at those boundaries into a format that stays consistent (JSON is popular nowadays, for example).

What I mean is that they give you reliable software, but they give you a false impression of safety (i.e. that your software is never going to cause a loss accident, or that you are never going to have problems, because it is proved by the compiler/type system).

It pushes you towards a feeling of false confidence.

And I say that from the POV of someone who loves type systems and thinks they can bring a lot, especially around tooling. But we need to stop pushing them as the way to avoid “issues” and “bugs”. They are not.

Depends on how detailed the type system gets; Idris, for example, goes way further, with dependent typing and all. ^.^


I know but it just makes it even worse. It is a conversation for another place maybe anyway.

What makes what worse?

Supposedly it makes you complacent with your code. I’ve heard that argument.

Uhh, does quite the opposite for me. With strong types I often think of how to define the types first, define the inputs and outputs of functions, all before I ever write a line of executable code that has to absolutely fulfill those contracts that I’ve set out.

Not with your code itself, but with how you use it. Because “Yeah if it compiles, it works”


I know nobody with that kind of mindset


I took the liberty to move this set of posts to a separate thread because we were starting to get more and more off-topic. So let’s continue the discussion here!


@DianaOlympos Your argument can be used just as strongly against testing: “Yeah, if the tests are green, it works”. Which is just as much hubris, of course.

So, let’s take a step back:

As far as I know, strong typing has two advantages over no typing (also known as ‘unityping’, i.e. everything shares the same type and can be passed anywhere):

  1. There is more information for the compiler or (bytecode) interpreter to allow optimizations.
  2. It allows developers to specify more clearly how their code can be used by other pieces of code.

Yes, we live in a world that is impossible to fully prove mathematically without an infinite number of axioms. Still, strong and dependent typing definitely let you reason more deeply about your code, and you can prove that a certain subset of your code is correct. That does not mean it will do what you want it to do, of course. The logical mistakes that happen when translating a problem from the human domain to the logical domain cannot be solved by a computer.

However, strong typing will definitely let you get rid of implementation-detail mistakes like segfaults, memory leaks, data races, NullPointerExceptions, ‘undefined is not a function’, etc. Also, many of the logical mistakes will turn into something that the typechecker might catch: if we forget some computational step and, for instance, return a list of floats instead of ints, then the typechecker will notify us about that.
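To make that concrete, here is a minimal sketch (in Rust rather than the languages discussed, purely for illustration; the function name is invented) of a signature catching exactly that float-vs-int slip at compile time:

```rust
// The signature promises a list of ints. If the body accidentally
// produced floats, the compiler would reject the program instead of
// letting the mistake reach run time.
fn halved(values: &[i64]) -> Vec<i64> {
    values.iter().map(|v| v / 2).collect()
    // `values.iter().map(|v| *v as f64 / 2.0).collect()` would be a
    // compile-time type error here: expected `Vec<i64>`, found `Vec<f64>`.
}

fn main() {
    assert_eq!(halved(&[4, 10]), vec![2, 5]);
    println!("ok");
}
```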

If you go to the far end of the spectrum, dependent typing and formal verification are enough to ensure that your system is logically correct. This has been used, for instance, to create a microkernel for autonomous drones. The microkernel is proven to be hack-safe. Of course, mistakes by the verification team could still have happened, and there might also be hardware flaws that allow someone to circumvent the system, but this is as close as we can get to ‘bug-free’.

It is not perfect, but it definitely is stronger than testing. (Do note that the domains of type-checking and testing do not completely overlap; having tests is still important, but in a strongly typed language, there are many trivial tests that are already handled by the typechecking.)

But even without all that:
The two reasons above remain great arguments for using strongly typed systems.


Erlang and Elixir wouldn’t be the same thing with strong types. (According to the Learn You Some Erlang book, there were attempts to create one, without success.) In fact, live upgrades wouldn’t be possible, for example in cases where a GenServer’s loop data format changes (say you want to upgrade the state from a tuple to a longer tuple or to a struct, which is a problem if your variables can hold only a single type of values).

For those who want at least the feeling of strong types, there’s dialyzer and dialyxir.

To clarify: Elixir is a strongly, dynamically typed language. Dynamic/static typing and strong/weak typing are different things. On the other side of the spectrum there’s C, which is mostly considered to be a statically but weakly typed language.

This SO answer explains this well: https://stackoverflow.com/a/2351203/2919320


Exactly @michalmuskala

The problem here is not about typing itself. It is about using typing (especially strong static typing) as a way to prove your program. And the thing is, it does make it more “bug free”, in terms of implementation bugs. And I agree with you @qqwy, it is really close to testing in that idea.

The problem with doing that is that safety is not really affected by implementation bugs. Which is exactly why Elixir/Erlang are so good at producing fault tolerant applications. In case of bugs or security issues, we know what to do. Crash the infected part, reduce the blast area of the problems, and have a coping strategy.

But that is not the case for design issues. Specification issues. And the research data strongly shows that safety and security problems come from these types of issues, far more than from implementation errors, especially in a good fault-tolerant architecture.

On the other hand, type systems (used as a formal-proof trojan horse), TDD, and formal proofs do build a false confidence that the system works well. Which makes you forget that, in the grand scale of things, this is not the hard and dangerous part.

Hence why I love type systems, even as a trojan horse for formal proof, but I advocate not focusing on them too much. They are the tree hiding the forest.


I think it is entirely possible; it is just foolhardy to do something like type a PID. Messages should be black-boxed and have to be matched on to extract data.

As for live upgrades, that is also entirely possible. You could do what the BEAM does not and let each process keep running with its own code, or you could let them code_upgrade to translate the data from an old format to a new one (though again, that black-box parsing would be perfect and entirely generic), or a variety of other ways, all of which are used now with code upgrades.
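That translate-old-state-to-new step can be sketched outside the BEAM too. Here is a hypothetical Rust analogue (all names invented) of what an OTP `code_change` callback does: convert the old state layout into the new one at upgrade time, giving new fields a default:

```rust
// Old state layout: a bare pair, like a GenServer holding a tuple.
type OldState = (String, u32);

// New state layout: a named struct with an extra field.
#[derive(Debug, PartialEq)]
struct NewState {
    name: String,
    count: u32,
    retries: u32, // new field, given a default during the upgrade
}

// The upgrade hook: translate old-format data into the new format,
// analogous to what a code_change callback does on the BEAM.
fn upgrade(old: OldState) -> NewState {
    NewState { name: old.0, count: old.1, retries: 0 }
}

fn main() {
    let new = upgrade(("worker-1".to_string(), 7));
    assert_eq!(new.count, 7);
    assert_eq!(new.retries, 0);
    println!("ok");
}
```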

That is Success Typing, not Static typing; those are two very very different things.

As a very simple example, with strong typing I can make a sum type that has only, say, 2 options, which let’s call Red and Green, and the sum type is named BasicColors, and I can case match on a BasicColors somewhere, matching out the Red and Green, and that all seems normal. If however I add a new constructor to the BasicColors sum type called Blue, then I can rightly get either warnings or errors at every place that I’m not checking for it; the compiler lets me know that I’m not missing anything. Dialyzer can check some of those instances, but it cannot check all.
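Here is a runnable sketch of that BasicColors example (in Rust rather than an HM language, since Rust enums give the same exhaustiveness check):

```rust
enum BasicColors {
    Red,
    Green,
    Blue, // the newly added constructor
}

// Every `match` on BasicColors must cover all variants. If this match
// only had the Red and Green arms, adding Blue to the enum would turn
// it into a compile-time non-exhaustive-patterns error at every use
// site, which is exactly the "compiler lets me know" behaviour above.
fn describe(color: &BasicColors) -> &'static str {
    match color {
        BasicColors::Red => "red",
        BasicColors::Green => "green",
        BasicColors::Blue => "blue",
    }
}

fn main() {
    assert_eq!(describe(&BasicColors::Green), "green");
    println!("ok");
}
```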

Yep, I think of it like this (we still need a table plugin for this forum):

  • Strong Static: A binding’s type is known at compile-time and it cannot be ‘cast’ to another; a conversion has to be done. Examples: OCaml, Haskell
  • Strong Dynamic: A binding’s type is not known at compile-time; however, it is ‘defined’ at run-time and thus cannot be cast to another type; a conversion has to be done. Examples: Python, Lua
  • Weak Static: A binding’s type is known at compile-time; however, it can be auto-cast to something else or have its bits re-interpreted. Examples: C, C++, Pascal
  • Weak Dynamic: A binding’s type is not known at compile-time, or even at run-time in some cases; you can cast a string to an integer and back again with ease. Examples: VisualBasic, JavaScript
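The ‘Strong Static’ quadrant can be demonstrated with a small Rust sketch (function names invented), since Rust refuses even numeric auto-casts:

```rust
// Strong static typing: no implicit cast between numeric types;
// `small` on its own would be a "mismatched types" compile error,
// so the conversion must be written out explicitly.
fn widen(small: i32) -> i64 {
    i64::from(small)
}

// Likewise a string never silently becomes a number: parsing is an
// explicit, fallible conversion.
fn parse_count(s: &str) -> Option<u32> {
    s.parse().ok()
}

fn main() {
    assert_eq!(widen(300), 300i64);
    assert_eq!(parse_count("42"), Some(42));
    assert_eq!(parse_count("forty-two"), None);
    println!("ok");
}
```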

When I speak of Static-Strong typing I am 95% of the time talking about Hindley-Milner-style typing or even stronger (like Refined or Dependent typing).

This, I think, is one of the strongest arguments ‘for’ HM typing, because you have to think about your inputs and outputs ‘before’ you write code of any decent complexity; you have to think about the API, and about how and what information will be held.
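That types-first workflow looks roughly like this (a hedged Rust sketch; the domain types and names are invented for illustration):

```rust
// Step 1: pin down the inputs and outputs as types, before any logic.
#[allow(dead_code)] // `item` is illustrative and never read here
struct Order {
    item: String,
    quantity: u32,
}

#[derive(Debug, PartialEq)]
enum OrderResult {
    Accepted { total_cents: u64 },
    Rejected(&'static str),
}

// Step 2: write the signature, the contract the body must fulfill.
fn price_order(order: &Order, unit_price_cents: u64) -> OrderResult {
    // Step 3: only now fill in the executable code, which must
    // satisfy the contract set out above or fail to compile.
    if order.quantity == 0 {
        return OrderResult::Rejected("empty order");
    }
    OrderResult::Accepted {
        total_cents: unit_price_cents * order.quantity as u64,
    }
}

fn main() {
    let order = Order { item: "widget".to_string(), quantity: 3 };
    assert_eq!(
        price_order(&order, 250),
        OrderResult::Accepted { total_cents: 750 }
    );
    println!("ok");
}
```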


Is there a term for typing as used in statically typed functional languages which pithily sums the advantages up?

If not, can we come up with one, that conveys the advantages?

I.e. 100% of the code in a language can be typed (Elm, Haskell, etc.) versus cases where far less can be usefully typed (any OO languages where void is implied, or where side effects abound and are hidden from the type definitions).

I feel that’s why types are so useful in functional languages (and often come for free with inference) while they were looked down upon in Ruby etc.

Here’s some suggestions:

‘functional typing’
‘precise typing’
‘high coverage typing’
‘full disclosure typing’
‘inferred typing’
‘pure typing’
‘theorem typing’ (ala Curry Howard)

I think the most traditional name for it is Hindley-Milner Typing, as it is one of the simplest yet most robust type systems out there (and the one that Haskell, OCaml, and most others use). Many statically strong type systems are just further refinements and additions on top of it (since HM typing uses inference where most other type systems do not).

There are of course others, but that is the most common one.


not v catchy, but spot on, of course : )

My personal choice would probably be something like ‘accurate typing’.

Types /accurately/ describe everything.

Actually how does OCaml’s type system feel versus Elm for example? I suppose you just bias against too many side effects / mutable state etc. and keep everything hunky dory?