Haskell

This would be really cool here!

Many people do write mli’s first, and I’m unsure what editor support is being referenced; every editor I’ve used relies on OCaml’s Merlin, which does indeed catch mismatches just fine?

Also, mli files are entirely optional; their primary purpose is just to let you ‘refine’ the public interface of a module. ^.^

Yes it is an old holdover as well, one chosen for speed reasons. :slight_smile:

Yeah I’m amazed at how many it has… >.>

Want to talk a bit ? :stuck_out_tongue: because i think there is something more interesting here than static typing stuff and that could work on the BEAM too…

And yes i know Joe would get crazy happy :stuck_out_tongue:

Heh, it is not one of the languages that I know well, but I know it by reputation, and it had a lot of fascinating design decisions; its contract design especially was fascinating (and trying to kill off null pointers, whoo)! To be able to start creating something about it I’d need to learn just how it does things first (not necessarily the language itself, but rather the design decisions, why they were chosen, and their shortcomings and such).

1 Like

Like?

~~

Where do you see the differences between Haskell and F-Sharp?

1 Like

F# is not pure, while Haskell is. This is a major difference, which dictates the way you (can and should) write code.
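A minimal sketch of what that difference means day to day (`greet` and `shout` are made-up names for illustration): in Haskell, a function that performs side effects must say so in its type, and the compiler refuses to let pure code call it.

```haskell
import Data.Char (toUpper)

-- An effectful function carries IO in its type; there is no hiding it.
greet :: String -> IO ()
greet name = putStrLn ("Hello, " ++ name)

-- A pure function cannot perform I/O: calling greet from inside shout
-- would be a type error, not just bad style.
shout :: String -> String
shout = map toUpper
```

In F# (or Scala), nothing in a function’s signature stops you from printing or mutating inside it; in Haskell that separation is enforced by the type system.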

Haskell suffers from the following:

  1. The Prelude (the part of the standard library that is imported everywhere by default) contains:
  • Some partial functions, like head, which will crash your app when given a case they do not handle. Partial functions should be avoided everywhere in your serious applications!
  • The default data type for strings is a linked list of characters, which, while easy to introspect for newcomers, is a lot slower than dedicated data types like Text or ByteString.
  • Many functions work only with lists, whereas they could instead work with any Functor (or Applicative or Monad, depending on the function). For instance, there are both map and fmap.
  2. The Applicative typeclass was only discovered after Functor and Monad already existed, resulting in a lot of semi-duplicate functions, and in old functions that require a Monad while they actually only need an Applicative, etc.
  3. Historically, many people imported functions from other modules directly into their current namespace. Nowadays, the preferred style is to (almost) always use a qualified import (they are like alias in Elixir), which is a lot more readable. Unfortunately, much code in the community still follows the older approach and is therefore harder to read.
  4. Hackage contains two kinds of libraries:
  • Stable libraries that are absolutely wonderful and well-documented.
  • A lot of abandoned proofs of concept that were never maintained afterwards.
    Finding the former among the latter is hard.
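A small illustration of the Prelude points above, using only the base library (`safeHead` is my own name for the total variant):

```haskell
import Data.Maybe (listToMaybe)

-- head is partial: `head []` crashes at runtime.
-- A total alternative makes the empty case explicit in its type:
safeHead :: [a] -> Maybe a
safeHead = listToMaybe

-- map works only on lists, while fmap works on any Functor:
doubledList :: [Int]
doubledList = map (* 2) [1, 2, 3]    -- [2,4,6]

doubledMaybe :: Maybe Int
doubledMaybe = fmap (* 2) (Just 21)  -- Just 42
```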

That said: You should definitely learn Haskell and play with it. Every language has its warts, and that is fine. Haskell is definitely a marvelous language and excels in versatility, Domain-Specific-Language building, and Software-Transactional-Memory-based programming.

5 Likes
  • Haskell is a pure functional language built with no compromises (pure doesn’t mean it’s always practical).
  • F# is a functional language built on top of the .NET VM -> it can use libraries from .NET, but it also inherits limitations from .NET (as far as I know, F# does not have higher-kinded types -> type constructors that abstract over other type constructors).
  • Likewise, Scala is a functional language built on top of the JVM -> it can use libraries from Java, but it also inherits limitations from the JVM (I don’t like Scala’s syntax).
  • There is also http://eta-lang.org/, Haskell on the JVM.
4 Likes

String types! Definitely string types. :slight_smile: When your app depends on libraries that use different types to represent strings, you’re bound to encounter loads of conversion functions (not to mention fromStrict and toStrict!).
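For the curious, a sketch of that conversion zoo, assuming the `text` package (the functions used are the real ones from Data.Text and Data.Text.Lazy; the binding names are mine):

```haskell
import qualified Data.Text      as T   -- strict Text, from the `text` package
import qualified Data.Text.Lazy as TL  -- lazy Text, same package

-- String <-> Text
greetingText :: T.Text
greetingText = T.pack "hello"           -- String -> Text

greetingString :: String
greetingString = T.unpack greetingText  -- Text -> String

-- strict <-> lazy Text: the fromStrict/toStrict mentioned above
greetingLazy :: TL.Text
greetingLazy = TL.fromStrict greetingText
```

And that’s before ByteString enters the picture with its own pack/unpack and encoding concerns.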

3 Likes

A friend of mine challenged me to implement a simple in-place sorting algorithm which does the following: “In each pass, it will find the smallest and the greatest item in an array of items.”

This seems relatively easy to me. Well, the Haskell guys in #haskell seem to struggle with it a bit, since they are not used to such a technique, which is quite surprising to me.

Is it really that difficult to do such a simple task in functional languages?

I am aware that in-place sorting is by its specification an imperative technique; he means this implementation avoids memory overhead and is faster.

And even with an “out-of-place” sorting algorithm, the Haskell community seems overchallenged.

In-place sorting requires mutability, and mutability doesn’t fit into FP.

In the BEAM languages you can’t even circumvent immutability; it’s strictly enforced by the runtime.

Haskell does have some mutable data structures, though I never worked with them. I did work with immutable arrays once; they were clumsy to use and felt foreign.
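For completeness: mutation does exist in Haskell, but it has to be fenced off inside the ST monad. Here is a sketch of an in-place selection sort on a mutable unboxed array (using the `array` package that ships with GHC; `sortInPlace` is my own name):

```haskell
import Control.Monad (foldM, forM_)
import Data.Array.ST (STUArray, newListArray, readArray, runSTUArray, writeArray)
import Data.Array.Unboxed (elems)

-- Selection sort on a mutable array. The mutation is invisible from outside:
-- runSTUArray guarantees the array cannot escape while it is still mutable.
sortInPlace :: [Int] -> [Int]
sortInPlace xs = elems $ runSTUArray $ do
  let n = length xs
  arr <- newListArray (0, n - 1) xs
  forM_ [0 .. n - 2] $ \i -> do
    -- find the index of the minimum element in arr[i ..]
    minIdx <- foldM
      (\m j -> do
        vj <- readArray arr j
        vm <- readArray arr m
        pure (if vj < vm then j else m))
      i
      [i + 1 .. n - 1]
    -- swap arr[i] and arr[minIdx]
    vi <- readArray arr i
    vm <- readArray arr minIdx
    writeArray arr i vm
    writeArray arr minIdx vi
  pure arr
```

So the BEAM forbids mutation outright, while Haskell permits it locally and uses the type system to keep it from leaking out.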

In F# there is the mutable keyword, but whenever I saw it in code I refactored it into a recursive function on sight. It’s a danger sign for me…

In Haskell, out-of-place sorting is usually demonstrated using qsort:

import Data.List (partition)

qsort [] = []
qsort (x:xs) =
  let (lesser, greater) = partition (< x) xs
  in qsort lesser ++ [x] ++ qsort greater

That’s not optimized, not even tested, but I do remember something like this from the introductory slides of my FP course 4 years ago…

2 Likes

That’s also not ‘technically’ a quicksort, more of a divide-and-conquer sort. ^.^
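In practice nobody hand-rolls either version anyway: Data.List ships a sort (a merge sort, which suits linked lists well). A quick sketch, with `ascending`/`descending` as illustrative names:

```haskell
import Data.List (sort, sortBy)
import Data.Ord (Down (..), comparing)

ascending :: [Int]
ascending = sort [3, 1, 2]                      -- [1,2,3]

-- Down reverses the ordering, giving a descending sort:
descending :: [Int]
descending = sortBy (comparing Down) [3, 1, 2]  -- [3,2,1]
```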

1 Like

Hey, thanks a lot ^-^

And is it true that in-place sorting avoids memory overhead and is faster?

So, how does this look in practice? Is functional programming unsuitable for sorting?
Is imperative programming always more performant here?

And does your code “find the smallest and the greatest item in an array of items”?

:slight_smile:

And is it true that in-place sorting avoids memory overhead and is faster?

Bogosort is always slow… Regardless of in-place or out-of-place sorting.

In-place sorting can save you a lot of memory allocations and is therefore better suited for systems with limited memory. It has the cost that it destroys your original set of data, and is therefore not thread-safe.

So, how does this look in practice? Is functional programming unsuitable for sorting?

No… I’ve shown you a sorting algorithm. Also, there are other algorithms that work very well.

Is imperative programming always more performant here?

Kind of, probably. But I think this is because of the way current computers are built. If we had a CPU that were functional instead of imperative, everything could look different…

And does your code “find the smallest and the greatest item in an array of items”?

It’s not for an array; it sorts a linked list. And no sorting algorithm finds the lowest or highest item; it sorts them. So the lowest ends up first in the result while the highest ends up last.

3 Likes

Is that also the case in Java and other languages that run on a VM? I think those organize the concurrency, don’t they?

How big is the difference between in-place and out-of-place sorting, in terms of memory, in practice?

So there are functional sorting algorithms which are less memory-economical yet still very useful?

How so? Is the difference that small? Is the benefit of thread-safe sorting worth that much?

Oh, that is very interesting. Can you show me some recommended links which explain that in a bit more depth? How is a current CPU built for imperative computing? Is there something like a functional CPU? Can I use a RISC processor in such a way?

To me that sounds like we cannot really rely on them.

So, does it make sense to use a sorting algorithm in order to find the smallest and the greatest item in an array of items?

Is that also the case in Java and other languages that run on a VM? I think those organize the concurrency, don’t they?

But if A is currently sorting the array in place and B has a reference to it and reads from it, B may read garbage. It may rely on an ordering of items that existed cycles ago.

Even worse, B might write into the data while A is sorting…

How big is the difference between in-place and out-of-place sorting, in terms of memory, in practice?

Depends on the algorithm used.

How so? Is the difference that small? Is the benefit of thread-safe sorting worth that much?

You haven’t gone through those problems yet, have you? If you had, and had been forced to debug them, you wouldn’t be asking…

Oh, that is very interesting. Can you show me some recommended links which explain that in a bit more depth? How is a current CPU built for imperative computing? Is there something like a functional CPU? Can I use a RISC processor in such a way?

It’s a pity, but I do not have such links. It’s 100% theorycrafting between an ex-co-worker, his girlfriend, and mine, from about a year ago when we commuted together…

So, does it make sense to use a sorting algorithm in order to find the smallest and the greatest item in an array of items?

Ideal sorting has a runtime of O(n log n), but you can find min and max combined in a single linear scan with two return values, so it’s O(n). Why should I sort, then? It is less efficient…

1 Like

Ah, I see.

He seems to assume it’s always high.

Nope, I have not. This friend also seems to assume that these differences are not that high.

It’s simply a challenge. He means this sort of thing is obviously easier to implement in C++/Java than in Haskell.

It’s simply a challenge

But it’s foreign to reality. Sorting in place just to check for min and max…

Its like swapping arms and legs of a human just to check his teeth…

Irreversibly changing the contents of memory, in a time-wasting manner, to find something that can be found by simply looking at each element once.

Just finding min and max of a list in Haskell:

minMaxSingle :: Ord a => [a] -> (a, a)
minMaxSingle (x:xs) = foldl go (x, x) xs
  where
    go (lo, hi) e | e < lo    = (e, hi)
                  | e > hi    = (lo, e)
                  | otherwise = (lo, hi)

minMaxDouble :: Ord a => [a] -> (a, a)
minMaxDouble xs = (minimum xs, maximum xs)

Sorting an ordered collection of objects in place needs mutability, and mutability is not wanted in functional programming because it essentially destroys most of the abstractions of FP…

Wrong assumption… I’ve gone through debugging legacy code where data was mutated by one thread while another consumed it, assuming it was the sole accessor to the data. This led to massive bugs in the software, where crashes were the least severe ones… It also produced massively corrupted data.

It took us about three man-weeks (~120 working hours) to find out why the crashes and data corruption happened. Identifying all involved “data processing threads” took another week, and repairing them all took about two more…

1 Like

Thanks a lot for all these clear answers.

Would you say a GPU is such a thing?

Libraries for Nvidia’s CUDA cores and AMD stream processors are typically written in C++ and Fortran, so draw your own conclusions. Functional abstractions are typically implemented on top of an imperative infrastructure/environment.

Reduceron was a research project into a “functional” computer.

1 Like

Well, just because AMD and Nvidia decided so doesn’t mean it has to be the sanest choice.
F# and Haskell are capable of programming GPUs…

Thanks for the link :smiley:

It avoids memory overhead. Is it faster? Traditionally it has been faster, but I recently read an article saying that old “discarded” sorting algorithms are making a comeback. Back in the day they were often ruled out because they used too much memory. In today’s world, however, with an abundance of memory, they are making a comeback and are in fact faster for a number of use cases. (This of course doesn’t take into account that a modern computer with 16GB of RAM hardly manages to run an IRC-client clone without swapping and running the fans hot :D)

The only problem is my memory is not working and I can’t find the article, which was pretty interesting…

2 Likes