Haskell

I’d say that using functional programming to program GPUs is extremely logical: a GLSL fragment shader takes as input an (x, y) coordinate pair (as fractions between (0.0, 0.0) and (1.0, 1.0), which correspond to the screen’s top-left and bottom-right corners) and is expected to return a color.

A fragment shader has essentially no other input/output mechanism than that, so using a functional approach here makes a lot of sense.
Also, there are currently really nice OpenCL/CUDA bindings that, for example, let Haskell code compile down to kernels that run on your GPU or other specialized hardware.

And there also exist very exciting tools for going the other way around: automatically harnessing the power of a GPU already present in the computer to do high-performance array computations, using libraries like Accelerate.
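As a minimal sketch of what Accelerate code looks like, here is (a lightly adapted version of) the classic dot-product example from its documentation. The run used here is the reference interpreter backend; for actual GPU execution you would swap in a backend such as accelerate-llvm-ptx:

import Data.Array.Accelerate as A
import Data.Array.Accelerate.Interpreter (run)

-- The computation is described as a whole-array expression,
-- which Accelerate then compiles for the chosen backend.
dotp :: Acc (Vector Float) -> Acc (Vector Float) -> Acc (Scalar Float)
dotp xs ys = A.fold (+) 0 (A.zipWith (*) xs ys)

main :: IO ()
main = do
  let xs = fromList (Z :. 5) [1 .. 5]  :: Vector Float
      ys = fromList (Z :. 5) [6 .. 10] :: Vector Float
  print $ run (dotp (use xs) (use ys))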


As for mutability/immutability in Haskell: Haskell does contain mutable data types (the most commonly used being the mutable array types), whose mutability you use by exposing it through a monad. This makes the order in which mutations happen explicit, which is what allows you to work with the mutable data type safely. By ‘packing’ the mutable data type into an immutable data structure at the end of the complex calculation, the rest of the program that calls into this function does not need to know (nor can it tell) that we were internally working on a mutable array.
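A minimal sketch of this pattern (using an STRef instead of an array, to keep it short; sumST is a hypothetical name): the mutation happens inside runST, whose type guarantees that nothing mutable escapes, so callers see an ordinary pure function.

import Control.Monad.ST
import Data.STRef

-- A pure function whose implementation mutates a reference internally.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0                        -- allocate a mutable cell
  mapM_ (\x -> modifySTRef' acc (+ x)) xs  -- mutate it, in an explicit order
  readSTRef acc                            -- ‘pack’ the result into a pure value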

Implementing an in-place quicksort this way is possible (a sketch follows directly below). Another example, which I personally wrote for a course at university and which follows after that, is an implementation of the Matrix Chain algorithm: finding the order in which to chain a number of matrix multiplications together that results in the least amount of work.
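Here is a hypothetical sketch of such an in-place quicksort (Lomuto partition scheme on a mutable unboxed array; runSTUArray freezes the array into an immutable one at the end):

import Control.Monad (when)
import Data.Array.ST
import Data.Array.Unboxed (elems)

-- In-place quicksort on a mutable unboxed array, exposed as a pure function.
qsort :: [Int] -> [Int]
qsort xs = elems $ runSTUArray $ do
  let n = length xs
  arr <- newListArray (0, n - 1) xs
  let swap a b = do
        va <- readArray arr a
        vb <- readArray arr b
        writeArray arr a vb
        writeArray arr b va
      go lo hi = when (lo < hi) $ do
        pivot <- readArray arr hi
        -- Lomuto partition: move everything smaller than the pivot to the front.
        let partition i j
              | j >= hi   = pure i
              | otherwise = do
                  aj <- readArray arr j
                  if aj < pivot
                    then swap i j >> partition (i + 1) (j + 1)
                    else partition i (j + 1)
        p <- partition lo lo
        swap p hi        -- put the pivot in its final position
        go lo (p - 1)
        go (p + 1) hi
  go 0 (n - 1)
  return arr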

{-# OPTIONS_GHC -O2 -optc-O3 #-}

{-

Author: Wiebe-Marten Wijnja
Date: 2017-03-11
AADS, exercise: STAR

This problem is isomorphic to the Matrix Chain problem,
except that the conditions under which lower cases can be combined in the recursive case
are a little bit simpler.

This algorithm is the (not so) straightforward Haskell translation of the Matrix-Chain-Order algorithm
explained on page 375 of the Introduction to Algorithms book.
  (For the exercise we only need to return the _count_, and not the actual parenthesization order.
  Therefore, the matrix `s` is not used.)

This is a dynamic programming algorithm that fills the main diagonal of a matrix with the base-case values
(zeroes in the classic matrix-multiplication chaining problem, where a single matrix requires no work;
here, the individual item costs), and then computes the best way to chain multiplications together
by taking the minimum over all possible ways of placing brackets at this level.
This is repeated until the whole upper triangle of the matrix is filled.

Then the value in the top-right corner of the matrix is returned, as this is the final answer to the whole problem.

This algorithm runs in O(n³) time and takes O(n²) space to store the memoization matrix.

This implementation uses the ST monad, specifically the STUArray (a strict State-Transformer Unboxed Array).
This means that the imperative-style pseudocode can be implemented in a near-1:1 fashion in Haskell.
However, since it is fully typed, we can be a lot more confident that our program is correct and does not crash at runtime. Also, the compiler is able to optimize this program rigorously, resulting in a running time comparable to that of C++.

-}

import qualified Data.Array.Unboxed as Array
import Data.Array.Unboxed ((!), UArray)
import qualified Data.ByteString as BS
import qualified Data.ByteString.Char8 as BS.Char8

import Data.Foldable
import Data.List
import Control.Monad
import Control.Monad.ST
import Data.Array.ST


main :: IO ()
main = do
  line <- BS.getLine
  let [n_problems, enlist_cost] = parseInts line
  items <- replicateM n_problems readSingleInt
  print (matrixChainCount enlist_cost items)

readSingleInt :: IO Int
readSingleInt = do
  line <- BS.getLine
  let [int] = parseInts line
  return int

parseInts :: BS.ByteString -> [Int]
parseInts bstring = map parseInt $ BS.Char8.words bstring
  where
    parseInt str = case BS.Char8.readInt str of
      Nothing -> error "Missing int!"
      Just (int, _) -> fromIntegral int

-- Extracts the final minimal scalar-multiplications count from the created memoization array,
-- given a list of dimensions of consecutive matrices in a chained multiplication.
matrixChainCount :: Int -> [Int] -> Int
matrixChainCount _ [] = 0
matrixChainCount enlist_cost itemslist = (matrixChainCountArray enlist_cost items n) ! (1,n)
  where
    n = length itemslist
    items = Array.listArray (1, n) (sort itemslist)


-- Builds the matrix chain count array, using an STUArray
-- to be able to work on bare, unboxed integers.
matrixChainCountArray :: Int -> UArray Int Int -> Int -> UArray (Int, Int) Int
matrixChainCountArray edge_cost items n = runSTUArray $ do
  memo <- newArray_ ((1,1),(n,n)) :: ST s (STUArray s (Int, Int) Int)
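  -- Base case: a chain consisting of a single item just costs that item's value.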
  for_ [1..n] $ \i -> do
    writeArray memo (i,i) (items ! i)
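  -- Recursive case: fill the upper triangle diagonal by diagonal; l is the current chain length.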
  for_ [2..n] $ \l -> do
    for_ [1..(n-l+1)] $ \i -> do
      let j = i + l - 1
      writeArray memo (i,j) (maxBound :: Int)
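      -- Try every split point k and keep the cheapest possibility.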
      for_ [i..j-1] $ \k -> do
        curval <- readArray memo (i,j)
        mik <- readArray memo (i,k)
        mkj <- readArray memo (k+1,j)
        let possibility = max mik mkj + edge_cost -- in the classic Matrix-Chain-Order this would be: mik + mkj + dims ! (i-1) * dims ! k * dims ! j
        when (possibility < curval)
          (writeArray memo (i,j) possibility)
  return memo

I would mostly like you to compare the matrixChainCountArray function, where the magic happens, to the pseudocode on Wikipedia’s page about this problem. It is a 1:1 translation :grin:.


As for the discussion about finding the minimum and maximum of an array:

  • Finding the minimum/maximum/median of an unsorted sequence of items takes O(n).
  • Sorting an unsorted sequence takes O(n log n).
  • Finding the minimum/maximum/median of a sorted sequence of items takes O(1).

Now then, it only makes sense to sort the sequence if we expect to frequently re-perform these operations on it (rather than storing the computed values for later). An example would be if we have an unsorted list that we sort first, then insert some more items at their proper locations to keep the sequence sorted (which can be done very quickly when using e.g. a binary search tree, whose construction is isomorphic to sorting), and finally (or possibly multiple times, in an online algorithm) ask for the minimum/maximum. A sketch of this pattern follows below.
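As a hypothetical sketch of that pattern, using Data.Set (a balanced binary search tree) as the sorted structure; note that its findMin/findMax are O(log n) rather than O(1), which is the “in between” cost of balanced trees mentioned in the reply below (minMaxOnline is a made-up name):

import qualified Data.Set as Set

-- Build the sorted structure once: O(n log n), the same as sorting.
-- Each later insert and each min/max query is then O(log n).
-- (Data.Set discards duplicates, which does not affect min/max.)
minMaxOnline :: [Int] -> [Int] -> (Int, Int)
minMaxOnline initial laterItems =
  let s = foldr Set.insert (Set.fromList initial) laterItems
  in (Set.findMin s, Set.findMax s)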


This is only true for “sequences” which provide O(1) random access! Some sequences only provide O(n) random access, and then, depending on the sort order, finding the min or max will also be O(n).

Also, you can think of sorted trees, which, when self-balanced, give you something in between (O(log n)).


How do you decide whether a programming language is worth using or not? By necessity, such decisions are usually based on assessments that can be made relatively quickly: the ease of using the language, how productive you feel in the first week, and so on. Unfortunately, this tells us very little about the costs involved in continuing to maintain a project past that initial phase. And in reality, the vast majority of time on most projects is spent in those later phases. I (Michael Snoyman) am going to claim, based on my own experience and analysis of language features, that functional programming in general, and Haskell in particular, are well suited for improving this long tail of projects. We need languages and programming techniques that allow broad codebase refactorings, significant requirements changes, improving performance in hotspots of the code, and reduced debug time. I believe Haskell checks these boxes.

EVENT: #FnConf18
SPEAKER: Michael Snoyman

When I joined NoRedInk in 2013, we had a typical Ruby on Rails web application. In 2015 we introduced Elm, a pure functional programming language for building Web UIs, and it spread like wildfire to become our primary tool for front-end programming. As of 2019, we have over 300,000 lines of Elm code powering the user interface that our millions of users rely on.

The positive experience we had with Elm led us to seek out a pure functional language to use on the back-end, and in 2017 we introduced Haskell to our stack. This talk discusses the reasons we tried these technologies, what we hoped to get out of them compared to what we got, what went well and what didn’t, and the strategies we used to adopt them incrementally inside a mission-critical code base.


Instantly spin up a GraphQL API server by pointing PostGraphile at your existing PostgreSQL database.