Elixir VS C#

You may be interested to revisit .NET via F#; it has agents…

check out

https://blogs.msdn.microsoft.com/dsyme/2010/02/15/async-and-parallel-design-patterns-in-f-agents/

and there are various things talking about how to easily get GenServer-type functionality using F#, and on top of that there is Orleans

1 Like

This is why I really want an OCaml back-end to Elixir, entirely possible, just needs to get done. :slight_smile:

For note, F# was based on OCaml, most code that works in F# works in OCaml almost unaltered (except for the accessing of .NET specific oddities). So if you like F#… :wink:

Also, can’t F#'s Agents crash everything by calling the wrong thing? That is a .NET limitation; it does not have the hard guarantees of something like the BEAM, regardless of the language or libraries used…

Two things. First, F# does not have modules like OCaml’s; that is one of the big things it is missing. It has most of the abilities, but not all, last I saw anyway. Second, based on Active Patterns - F# | Microsoft Learn, can’t you just do the same thing with a single function call between the match and the input? What does this gain other than magically coercing types (which OCaml abhors, as it harms readability)?
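
To illustrate what I mean by "a single function call between the match and input", here is a small OCaml sketch (my own example, names made up): classify a value with an ordinary function, then pattern-match on its variant result. This is the explicit alternative to an active pattern.

```ocaml
(* Explicit alternative to an active pattern: classify via a plain
   function, then pattern-match on its variant result. *)
type parity = Even | Odd

let parity n = if n mod 2 = 0 then Even else Odd

let describe n =
  (* the "single function call between the match and the input" *)
  match parity n with
  | Even -> "even"
  | Odd -> "odd"

let () = print_endline (describe 7)
```

Nothing is hidden here: the reader sees exactly which function produced the value being matched.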

But yes, I badly, badly want some form of typed Elixir, and an OCaml backend to Elixir would be wonderful!

2 Likes

One benefit of the Elixir/BEAM model over .NET is that each process has its own garbage collection and each process is isolated. If you want to get something similar in .NET, you need to put applications in containers.

There is also Akka.NET; see Orleans and Akka Actors: A Comparison.

Also, you can use http://reactivex.io/ on the .NET platform.

1 Like

Akka (and Akka.NET; I’ve messed with both) is fantastically designed; I would use it in a heartbeat if I had to use the JVM (or .NET) system(s), but both still entirely lack the guarantees that the BEAM has. And yes, the per-actor GC is absolutely wonderful! Most of the time the GC is never touched: the process is spawned, does some work, then everything is freed all at once, without needing to worry about the expensive GC sweeps that both the JVM and .NET absolutely have to incur just due to their OOP’ish internal design. ^.^

1 Like

First…Thanks for replying. Seriously. It’s been a long day. :sweat_smile:

Second, I quote these partial fragments specifically because my point in coming to Erlang (with Elixir as a surprising and wonderful added bonus) is that concurrency is not something “added on”. It is designed this way at its very core, with every facet oriented to it. From the GC and isolation that @mkunikow mentioned to immutability to supervisors/OTP and processes, it has evolved organically to handle nodes (whatever they may be) operating in parallel from its very inception.

And with ibGib, I needed a completely concurrent system. And unlike many here (and elsewhere in the FP world), I really :heart: OOP. I can still remember reading my first book on OOP about polymorphism, inheritance, data hiding, encapsulation and thinking “Wow, this is incredible” (I had only done some C, Basic, and assembly at the time.). Honestly I don’t get that feeling with FP alone (older now hah!). But Erlang/Elixir + OTP was exactly what I was looking for with ibgib. It was so natural to code the engine in it, and I feel blessed that I found this ecosystem (and community). But I digress…my point is that to me the BEAM isn’t about the “actor model” or FP or any other individual patterns or aspects built on top of something to be concurrent - it just is concurrent.

getting off soapbox :wink: Like I said, it’s been a long day!

F# certainly does have Modules. I am replying from my phone or I would look up links.

Active patterns seem like just a little more than syntactic sugar to me but I will confess my F# is rusty.

While I think being concurrent throughout is important, most environments are fine with concurrency being tacked on top. Sometimes it may force you to stick with one subset of the ecosystem (for example, different actor or concurrent-processing libraries have their own schedulers, which won’t play along), but that’s a minor detail.

What is really important to have throughout is fault-tolerance. If you have a single entity that does not play along, that is going to be the weakest link. Something similar happens with preemptiveness (is that a word?) and how it is used to achieve predictable latency: if a library you bring in isn’t preemptive, it may starve the rest of your code.

So what’s really exciting about the BEAM is that it was designed with those use cases in mind and I believe it is still the only runtime running widely in production that provides those guarantees from top to bottom.

I would also like to echo that there is a lot of interesting research and application work happening in .NET, on both C# and F#, and we have been exploring those for a while. Task.async/await naming came from .NET, Ecto borrows from LINQ, and there are still other features I envy (I can’t recall all of them now, but at least discriminated unions come to mind). There is a lot to learn from projects such as Orleans and Naiad.

Also, they have the best names. Whoever decided to call the active pattern syntax (|foo|) as “banana clips” is a genius.

4 Likes

Yes. That’s why I decided to not add active patterns to Elixir or something similar. It is more explicit to extract the value before and keep the idea of “purity” in matches/guards. Plus the common cases can be addressed with macros (and the upcoming defguard will make it a bit easier).

1 Like

Yes, I’ve been in that world for sure. Most recently, this was with trying to embed rxjs, bacon.js, and the like, throughout a javascript application (actually I started with Rx in C#). Also in C#, but the subset was our own homegrown frameworks across all layers: business layer with built-in, poor man’s BEAM-like distributed and asynchronous bus system, event sourcing, parallel async “microservices” (I used the term “autonomous services” because I started it before the term microservice became a buzzword) with multiple adapters starting with AppDomain (which is kind of like an Erlang process inside of an OS process), and more. I came to Erlang/Elixir after researching other paths such as Clojure, Go, and others.

Yes, and I see this as another side of the concurrency coin. When I say that “concurrency throughout” is important, I mean that in addition to “computational execution parallelism”, the concurrent code and execution of code marries with the rest of the architecture well. I’m probably just wording this poorly.

Perhaps one could argue that FP in general covers this largely with immutability, but I personally had a problem understanding how other FP languages handled state at the application level. I’m sure there are elegant ways to express it in other languages, but IMO OTP handles this beautifully with processes (and specifically GenServers) + Supervision trees. I would have to go into how ibGib is designed for why this is the case, but this is why I get so excited when talking about concurrency, parallelism, etc. in the BEAM! :laughing:

Absolutely! A wonderful way to put it. :thumbsup:

Yeah, I realized after posting that I didn’t espouse the benefits of .NET and F# enough in my reply - it was a long Monday with too much caffeine :coffee:. I apologize if I offended.

I certainly :heart: C#, F#, and the .NET community as a whole. I’ve been in that world a long time and I appreciate everything about it! Having started serious programming with Delphi 5, C# was an eye-opener for me. This is because Anders Hejlsberg was the lead architect of both (and now TypeScript), and going to C# was almost like going to a new and improved version of Delphi with memory management (GC), namespaces, and more. Then generics came and I was blown away. Then LINQ, and I was like: Wow! Consequently, I took to Ecto’s syntax like a duck to water. :ocean: Anyway, now I’m an old man reminiscing…I definitely need to continue to keep my eye on F#, now that I’ve been immersed more in the FP world.

I’m just really enjoying the luxuries of Elixir on the BEAM for ibGib, so I get carried away sometimes! :smile:

2 Likes

It does have modules, yep, but it does not have functors (unless that’s changed recently), which are basically modules that take other modules as arguments and generate new modules from them.
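
For concreteness, here is a minimal functor sketch (my own example; the module names are made up): a functor takes a module matching a signature and produces a new module specialized to it.

```ocaml
(* A signature describing anything with a comparison function. *)
module type Ord = sig
  type t
  val compare : t -> t -> int
end

(* A functor: given any Ord module, it generates a new module with a
   max function specialized to that type. *)
module MakeMax (O : Ord) = struct
  let max a b = if O.compare a b >= 0 then a else b
end

(* Apply the functor to an inline module for ints. *)
module IntMax = MakeMax (struct
  type t = int
  let compare = compare  (* the stdlib polymorphic compare *)
end)

let () = print_endline (string_of_int (IntMax.max 3 9))
```

The key point is that IntMax is a brand-new module, generated at compile time, so all the dispatch is static.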

It looks like it borrows from SQL to me? o.O

Hear hear; I am strongly opposed to the hiding that patterns like that seem to be doing (I so very much hate auto-converting types; it should always be explicit).

Also, defguard? Ooo? :slight_smile:

I’m curious: what is special about C# generics? They seem extremely limited compared to what I’m used to in other languages (C++, Scala, OCaml (functors especially)), even lacking features compared to Rust’s limited form of generics.

1 Like

I didn’t say that F# has the same implementation of modules as OCaml does. I said F# has modules. It used to be a pretty common question among people new to F#: why were there both modules and namespaces? The simple answer was that modules were preserved to maintain compatibility with OCaml at first. Once the feature was in the language (crippled though it was from an OCaml perspective) it couldn’t be removed. But they are present, just not in the same form as they are in OCaml. So I think you and I are talking about “present” in two different senses.

Perhaps you can enlighten me about why parameterized modules (and functors) are such a killer feature in OCaml? I have to confess that, like Haskell’s Turing-complete type system, I never quite understood why it was such an advantage to have. I know there were a few other folks who crossed the gap from OCaml to F# who also missed parameterized modules, but I could never quite figure out what sort of code parameterized modules would enable one to create, and why their lack in F# was such a pain point for those folks who’d used OCaml before.

I can definitely see the influence of LINQ on Ecto.

I have to confess I only worked with Active Patterns just enough to understand them. Like some features in other languages, they seemed to me like a solution in search of a problem. And as I get a few more miles under my tires, I get a greater appreciation for that bit of wisdom that explicit is almost always better than implicit.

The effect of generics on me is a personal remark. I was not born a programmer as my brothers were (they started at 5 or so). I dabbled in assembly and C, but didn’t start going at it fully until I was using Delphi. In Delphi 5.0 at the time (early 2000s), there were no generics yet (they may have existed in other languages). So in building my multi-threaded transcription program (I was typing neurologic transcription at the time, and thus developing/dogfooding it for myself), I remember having to explicitly write out many list classes, events, etc., just to get strong typing. This carried over into the earliest versions of C#. When generics were introduced, they significantly improved the expressive power of the language and allowed me to cull my code greatly, especially in combination with reflection (runtime introspective programming).

As for comparisons to other languages, I’ve looked a little into the OCaml functors here, and I have to say it seems a strange (broad?) comparison, setting generics in an OOP language like C# against functors in an FP language such as OCaml. From what I can grok there, it appears that the modules are like interfaces and that the functors are used to “build up” complex dynamic functions, much like a compound currying mechanism. This behavior in C# would likely be implemented using base classes which implement interfaces. Then, using generics, you could pass around strongly-typed references via the interface/class. You can also declare constraints on the generics when doing this.

So for the simpler Increment example, you can actually pass around generic anonymous functions; Func<int, int> represents a function that takes a single int argument and returns an int, such as (oh goodness, from C# memory…it’s been a couple of years now!): Func<int, int> inc = x => x + 1;. To make a closure with an anonymous lambda, I believe you would just set a variable before the declaration:

var y = 2;
Func<int, int> inc2 = x => x + y;

But my memory is hazy on the scoping rules for them. It would be unwieldy to pass around more curry-like functions, such as a Func<Func<int, int>, int> (but I have done it).
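
For contrast, a quick OCaml sketch (my own example, not from the thread): in ML-family languages currying is the default, so the nested-Func shapes like Func<Func<int, int>, int> fall out for free.

```ocaml
(* add has type int -> int -> int; partially applying it yields a
   closure, the equivalent of C#'s Func<int, Func<int, int>>. *)
let add x y = x + y
let inc = add 1          (* partial application: int -> int *)

(* A higher-order function, the analogue of Func<Func<int,int>, int>:
   it takes an int -> int function and applies it to 10. *)
let apply_to_ten f = f 10

let () = print_endline (string_of_int (apply_to_ten inc))
```

No wrapper types are needed; the arrow types compose directly, which is why passing curried functions around never feels unwieldy in OCaml or F#.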

Also, from reading through that page, it mentions the required explicit module type attribution. My gut feeling is that this has to do with covariance and contravariance? (oh we’re digging deep for vocab now). Generics didn’t initially have this capability, but at some point it was introduced and allowed for even more powerful type inference in the compiler. This makes me want to search for an example in the old ibgib codebase, because I’m sure I used it. I had built the entire codebase on a single interface (going against standard C# naming conventions unfortunately, but I had to do it): Task<ib> gib(ib gib). As I’ve learned more about FP, it turns out I was just about trying to shove FP into C# and I didn’t realize it at the time! :smile:

EDIT: Looking at the mathematical functors on the all-knowing wikipedia, it does indeed look like that is why co/contravariance was introduced to generics in C#. I always would get confused on which one is which, but the practical side was that you could define generic types using the keywords in and out, depending on how a type was being used.

Hehe, this is a fun subject, but I will try to keep it succinct. ^.^

First of all, OCaml’s modules are what enable full higher polymorphic types. Like Haskell’s higher-kinded types (which enable typeclasses and more), HPTs solve the same problems, but in a different way. The parameterized and first-class modules in OCaml are what allow passing strong but ‘unknown’ types ‘through’ functions, which immediately makes it obvious that you can emulate HKTs that way, so it has that ability. BUT, instead of requiring whole-program type scanning like Haskell does (which is the major source of its slow compiling), HPTs in OCaml only need to know about what is at the point where they are used, thus enabling significantly faster compiling for the same features. With parameterized modules, even ignoring the HPT’ness of their immense capabilities, you can emulate Haskell’s typeclasses. The usual example is to reimplement the Haskell’y show typeclass, so let me do that here. First, let’s define the ‘typeclass’ type:

module type Show = sig
  type t
  val show : t -> string
end

This is just a type of a module: any module that has a type named t (which is the conventional ‘module-level’ type name) and has a function in it called show that takes the module’s t type and returns a string.

Now let’s define a function that uses this type to get the string of whatever:

let show (type a) (module S : Show with type t = a) (x : a) = S.show x

Now immediately you’d be thinking (if you know Haskell), “Well why not just use an HKT here and pass the types based on ‘x’ directly?!?” That is because OCaml is made to be fast, both in compiling and execution, so we need to ‘decorate’ the type. The (module S : Show with type t = a) is ‘destructuring’ (just like matching in Elixir) the passed-in first-class module, giving it a name that we can use just as if it were a full module. We then call the show function that is defined in that module, passing it x, which will not compile if x is a different type from S.t. So we could do something like this:

print_endline ("Show an int: " ^ show (module struct type t = int let show = string_of_int end : Show with type t = int) 5)

Here I pass in a module to show, that I define inline (yes you can even define modules inline), that can handle the int of 5 that I pass in, however this is wordy, so as per the common OCaml examples let’s define a few useful global modules that fulfill Show:

module Show_int : Show with type t = int = struct
  type t = int
  let show = string_of_int
end

module Show_float : Show with type t = float = struct
  type t = float
  let show = string_of_float
end

(* there is no string_of_list in the stdlib, so build the string by hand *)
module Show_list (S : Show) : Show with type t = S.t list = struct
  type t = S.t list
  let show l = "[" ^ String.concat "; " (List.map S.show l) ^ "]"
end

Now we can use it like:

print_endline ("Show an int: " ^ show (module Show_int) 5);
print_endline ("Show a float: " ^ show (module Show_float) 1.5);
print_endline ("Show a list of ints: " ^ show (module Show_list (Show_int)) [1; 2; 3]);

Now you can call show on any type that has an appropriate module defined, and you can pass the module down however deep through many functions. As you can see, Show_list is a functor here, since it in turn takes a module for whatever can display its internal type. This gives you the full power of typeclasses in Haskell, but without the exponential compilation cost (it is O(1) here!). However, it does mean having to pass a handler (what is usually called in OCaml circles a ‘witness’ module) through the functions. This is where people get that OCaml’s modules can do what typeclasses can (and more), but it is a bit more wordy, as you have to carry the witness around.
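
To make "pass the module down however deep" concrete, here is a self-contained sketch (my own code, restating the Show pieces so it stands alone): show_all never learns the element type, it just forwards the witness it was handed.

```ocaml
(* The 'typeclass' signature. *)
module type Show = sig
  type t
  val show : t -> string
end

(* Takes a witness module and a value of the matching type. *)
let show (type a) (module S : Show with type t = a) (x : a) = S.show x

(* Forwards the witness one level deeper without knowing the type. *)
let show_all (type a) (module S : Show with type t = a) (xs : a list) =
  String.concat ", " (List.map (show (module S : Show with type t = a)) xs)

module Show_int = struct
  type t = int
  let show = string_of_int
end

let () = print_endline (show_all (module Show_int) [1; 2; 3])
```

The witness is plumbed through explicitly at every level, which is exactly the wordiness the implicit-modules proposal below is meant to remove.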

However, there is a proposed feature for a future OCaml version called ‘implicit modules’ (modular implicits); let me demonstrate:

implicit module Show_int = struct
  type t = int
  let show = string_of_int
end

implicit module Show_float = struct
  type t = float
  let show = string_of_float
end

implicit module Show_list {S : Show} = struct
  type t = S.t list
  let show l = "[" ^ String.concat "; " (List.map S.show l) ^ "]"
end

So, all I did was add the implicit keyword (and I do not need to force the types, as this will work with any implicit that fulfills the signature), in addition to making Show_list take an implicit argument; an implicit argument is defined with {} instead of (). To use it I just need to redefine my show function as:

let show { S : Show } x = S.show x

Now the passed-in module S is an implicit. I can now use it like:

print_endline ("Show an int: " ^ show 5);
print_endline ("Show a float: " ^ show 1.5);
print_endline ("Show a list of ints: " ^ show [1; 2; 3]);

The modules are looked up by the compiler, first by the type the function wants (Show in this case), and any implicit module that fulfills this is eligible; then it checks that the types in the module are compatible, and it fills in, at the call site, which module can be used.

However, this would be a bit magical (and more costly) if it worked everywhere, so to make a module eligible it has to be opened implicitly. You normally open a module via open ModuleName; to make an implicit module available for resolution you use open implicit ModuleName, and doing this could open a whole set of modules if ModuleName itself included all the implicit Show modules defined above. This needs to be done in the scope where show will be called, i.e. at the call site. If implicit functions call other implicit functions and so forth, each implicit function declares its own implicit parameter, so only the original base call site requires the implicit to be loaded (and since the call site knows what the type is, that is trivial).

The implicit modules were modeled on Scala’s implicit classes. They allow using witness modules via a substantially simpler syntax, and functions that use them only define the signature of the module they want to accept, nothing more.

But with all these features, OCaml modules are able to emulate Haskell typeclasses, Haskell HKT’s, and even more, all without compromising the ability for the compiler to optimize or compromising compile time at all. This is why OCaml is one of the fastest compiling native languages out even while doing optimizations that make it rival C++.

F#, for comparison, requires .NET, and uses a LOT of dynamic dispatch, which will always make it slower than OCaml itself, as the OCaml compiler gets rid of every dynamic dispatch that it can. To do dynamic dispatch in OCaml actually takes more work than just doing things correctly; consequently such code in OCaml is never used unless absolutely necessary. The design of the language encourages doing things the right way. :slight_smile:

Also, don’t you love that even using ‘implicit’ things in OCaml still requires an explicit call to state that you are doing so? :wink:

2 Likes

Ah true, I could see that. I started programming with assembler so the concept of typing was… different there. ^.^

Eh, kind of; you can see my post above for more details, but basically a functor gives you static dispatch. Doing it, as you said, in C# using base classes means you will have dynamic dispatch. OCaml can resolve the types of everything at compile time, so dynamic dispatch is not needed and the code can in fact be tightly optimized. This is not something .NET can do, because it only optimizes one function at a time, and even then only for inlining purposes; it cannot know what the type of something will be as it is called. For example, in .NET a List defined over, say, int for its internal type cannot be fully optimized, as internally it is just storing boxed values and working on them (in reality .NET does optimize ‘somewhat’ for primitives, mostly for memory and not necessarily speed beyond removing boxing costs, and it does not optimize compound types at all, where OCaml does in all cases).

It does but in a soon-coming OCaml version this has an enhancement to fix that, as detailed in my prior post. :slight_smile:

And nope, nothing to do with covariance or contravariance; a functor can handle both of those, as well as other things (want to make sure an ‘int’ type is only ever between 3 and 10? You could do that!).
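
A quick sketch of that last point (my own example, using an abstract type with a checking constructor, which is the usual module-system way to enforce such an invariant): the signature hides the representation, so the only way to build a value is through the range check.

```ocaml
(* The only constructor, of_int, enforces the 3..10 range, so no
   Bounded.t outside that range can ever exist. *)
module Bounded : sig
  type t
  val of_int : int -> t option   (* None if outside 3..10 *)
  val to_int : t -> int
end = struct
  type t = int
  let of_int n = if n >= 3 && n <= 10 then Some n else None
  let to_int n = n
end

let () =
  match Bounded.of_int 7 with
  | Some b -> print_endline (string_of_int (Bounded.to_int b))
  | None -> print_endline "out of range"
```

Because t is abstract outside the module, client code cannot forge an out-of-range value; the invariant is checked once, at construction.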

1 Like

First off, great information on functors! Definitely a nice additional example to the page I referenced. :smile:

Stepping back, I would say now we’re talking about speed optimizations and I was referring to the evolution of generics per the “extremely limiting” aspect. :thinking:

With regards to dynamic vs static dispatch, now we’re getting into the oddness of the comparison in the first place. The term type has two different meanings here when we refer to what “types” are known at compile time, yes? In OCaml, types are more restricted to data structures, whereas in any OOP language, types refer to both the primitive types that you mention, as well as the complex constructs that gives us the “Object” in OOP. And in this sense, C# does indeed know all of the types and interfaces at compile time as per the constraints that I mentioned.

Whether one is faster than the other, I am totally not an optimization expert…so I will :bow: to your knowledge on that one! :smile:

And btw, this has definitely gotten me to look more closely at syntax and OCaml constructs, so now I need to check that Bucklescript thread again, planting more seeds for a better front end for ibGib! :laughing:

I was comparing what generics can do to what OCaml modules can do (I’m not touching C++ templates; if you think my OCaml post above was long, you’ve seen nothing! ;-)). I was not elaborating on everything else modules can do; based on just what generics can do, they are done horribly inefficiently. ^.^

A class is a type, sure, but there is a whole set of classes that can fulfill a given type, and accessing those class instances requires dynamic dispatch. That is one of the main failings of heavy OOP systems in performance-constrained code (personal experience especially; plenty of docs on Google about this, like What's wrong? as an initial find, and I quote: “Virtuals don't cost much, but if you call them a lot it can add up.” aka death by a thousand papercuts, when it is entirely unnecessary).

Lol, I really want an OCaml->Elixir backend written. ^.^

OCaml is in a fuzzy place for me. For ‘real work’ that will be hosting a long-lived server that needs uptime, I’ll use Erlang/Elixir. For ‘real work’ that is a more usual program I use either C++ or (much more often recently) Rust. OCaml is more ‘fun’ for me; I enjoy programming in it, and it is certainly doable for real work, I just never end up choosing it, though I have only a very few good reasons (maybe only one) why I would not choose it (it still has a GC, for example), whereas I have a LOT more reasons to not choose anything on the JVM or .NET systems for real work (ew GC, ew OOP, etc…).

Rust I really have to espouse again; it really is C++ done right: strongly typed, no GC, and it does not even need a GC, as it uses proper semantics for handling all resources (not just memory), basically C++'s RAII baked into the language with brilliant borrowing semantics. It is still not as functional as I would like, and I do fight with it a lot more (mostly with its macros; I have borrowing down solid), whereas with OCaml, if I fight with it, it always shows me how I was wrong in the end, and I come away more enlightened. ^.^

(Fighting with Rust on the other hand just leaves us both battered and me bugged at the lack of any form of higher types in it or anything like a C++ template system…)

1 Like