That’s not really dependency injection though, that’s just passing in an implementation. Nothing is implicit there: it’s just using a module as a record, and you could indeed have implemented it as a record.
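As a rough sketch of the equivalence (the `LOGGER` signature and function names here are made up for illustration, not from any library), explicitly passing an implementation looks essentially the same whether you use a first-class module or a plain record:

```ocaml
(* A capability described as a module signature. *)
module type LOGGER = sig
  val log : string -> unit
end

(* Explicitly passing the implementation as a first-class module... *)
let run_with_module (module L : LOGGER) = L.log "starting"

(* ...is equivalent to passing a record of functions. *)
type logger = { log : string -> unit }

let run_with_record (l : logger) = l.log "starting"

let () =
  run_with_module (module struct let log = print_endline end);
  run_with_record { log = print_endline }
```

Either way the dependency is an ordinary value at the call site; nothing is resolved implicitly.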
Yes, Haskell has an extension landscape that lets you opt into new features if you want them. This is what allows Haskell to be simultaneously an evolving language and one that still solves modern problems, as opposed to a language that has had several important features coming “soon” for, what, 5+ years without an end in sight.
I think OCaml is a fantastic language, but I’m convinced they should’ve pushed features more aggressively, like Haskell does. We know empirically that what they are doing is not working.
I think what they are doing works very well. They only accept features if it is shown they cannot be replicated (efficiently) via existing methods and that they give a noticeable boost to programmer productivity, while being very careful not to get it wrong (they do not like breaking backwards compatibility at the language level). It’s a good method for a language that is expected to be in use for decades; C++ follows the same route. Haskell (GHC specifically), on the other hand, adds a lot of ‘features’ and extensions that produce slow code, slow down programmer productivity (ugh, strings…), significantly slow compilation time, etc.
OCaml’s type system is based on HPTs, unlike Haskell’s HKTs and typeclasses, and HPTs can do everything HKTs and typeclasses can while solving the problems they have (slightly more verbosely, but barely, and that should be resolved in 99% of cases with implicit module support). And I’ve not even touched on PPXs or anything of the sort yet.
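To make the comparison concrete, here is a minimal sketch (module and function names are mine, not a standard library API) of how a typeclass-style abstraction over a type constructor is expressed with OCaml’s module language:

```ocaml
(* A "typeclass" as a module type: abstracts over a type constructor. *)
module type FUNCTOR = sig
  type 'a t
  val map : ('a -> 'b) -> 'a t -> 'b t
end

(* An "instance" for lists. *)
module ListF : FUNCTOR with type 'a t = 'a list = struct
  type 'a t = 'a list
  let map = List.map
end

(* Generic code written against the interface, via a functor. *)
module Utils (F : FUNCTOR) = struct
  let double xs = F.map (fun x -> x * 2) xs
end

module ListUtils = Utils (ListF)
(* ListUtils.double works on lists; instantiating Utils with another
   FUNCTOR gives the same code for options, arrays, etc. *)
```

The extra verbosity is the explicit `Utils (ListF)` instantiation, which is exactly the step implicit modules are meant to remove.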
Is it possible to build a type-level library like servant in OCaml? Fully type-safe web service specifications as code that guarantee all the parameter types and the response type are implemented, without any macros or code generation? Type-level operators to build a DSL for combining multiple service specifications into one over-all application type?
It must be that I am working on the wrong kind of systems, or maybe the wrong kind of statically typed languages, or both, but in my experience it’s always been the opposite. Types often seem to get in the way of refactoring, and I feel I’m coding for the computer/compiler instead of for myself. I love the optional system that Elixir brings, because I can enable the safety net when I want it and ignore it when I don’t. For hard stuff I’ll keep sticking with Smalltalk-style typing (a couple of the most complex and most successful/bug-free systems I worked on were in Smalltalk). Frankly, it seems to me that not a lot of large/interesting systems are written in statically typed languages (on the level of a system like Squeak or Pharo, which is basically everything plus the kitchen sink).
YMMV, but unless type inference engines get mind reading capabilities, I’ve decided to steer away from statically typed languages; I might pick one up now and then as part of my language-a-year learning attempts (Haskell was the last), but so far I haven’t encountered any must-haves.
Having said that - JS must die :-). Any alternative for the OS-that-is-the-browser is welcome.
I am curious: what type systems have you tried so far?
If you are using OO type systems, I feel your pain.
Have you had a chance to try any ML language (F#, Haskell, OCaml)?
I don’t think JS will die any time soon… There were Adobe Flash and Microsoft Silverlight -> dead already. There is Google Dart, but only the Chrome browser supports it natively.
Yeah, it’s quite easy to make a DSEL kind of thing for that using GADTs.
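For a flavor of the technique (a toy expression language, not a web-service DSL like servant, but the same mechanism), a GADT lets the index of the type track what each term produces, so ill-typed terms are rejected by the OCaml compiler itself:

```ocaml
(* A tiny typed expression DSL: the type parameter records the result type. *)
type _ expr =
  | Int  : int -> int expr
  | Bool : bool -> bool expr
  | Add  : int expr * int expr -> int expr
  | If   : bool expr * 'a expr * 'a expr -> 'a expr

(* The evaluator needs a locally abstract type to match on the GADT. *)
let rec eval : type a. a expr -> a = function
  | Int n -> n
  | Bool b -> b
  | Add (x, y) -> eval x + eval y
  | If (c, t, e) -> if eval c then eval t else eval e

(* eval (If (Bool true, Int 1, Int 2)) is a plain int;
   Add (Int 1, Bool true) does not typecheck at all. *)
```

Scaling the same idea up to routes, parameters, and response types is how a servant-like library would be built in OCaml.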
If you are just refactoring, then the types should all remain the same, so if the compiler ever yells at you, you have found a definite bug, which seems quite important…
Other than ‘most’ things, including the OS you are running on (C is still mostly typed, with a smattering of type-erasing void pointers sadly, plus C++).
WebAssembly, it hit 1.0 in the spec recently and it’s implemented in all evergreen browsers, you can use it now.
OOP class hierarchies are garbage, they make everything so much more painful…
Since I started professional software development in 1990? Pretty much everything under the sun ;-).
I have looked extensively at Haskell, but never liked it. Looking at OCaml and friends, it’s probably the same. It feels too artificial, too restricted, too academic to be useful for the sort of systems I’m typically involved with (the last couple of decades that was mostly online stuff around ecommerce, classifieds, and now a bit of ITOps stuff).
Interesting. Have you ever used the OO system? I don’t mean the atrocities that people label as “OO” ;-). I was quite happy building some very complex systems in Smalltalk…
Not too much C++ in my Linux kernel last time I checked. Dunno about the closed source ones.
C is an interesting entry in the landscape of typing. I always have the feeling that it’s not really strongly typed, given that you can make the compiler do very dangerous stuff (and I love it for that - it’s the only language I’ve stuck with for close to 35 years now ;-)). It’s not dynamically typed, that’s for sure, given that all type info is erased, but I also don’t feel it’s a strongly statically typed language. Which must make it a weakly typed language, which I think is the more proper classification.
(and that’s after all the enhancements in the language. I learned straight K&R C as that was the only thing on the block - that one for sure played fast and loose with types ;-))
C++? Must die even more than JS ;-). With C++, how the language works seems to largely depend on how you use it, given that it incorporates everything from straight C (weakly typed) to stuff that does indeed retain run-time type information. Maybe its flexibility is what makes it such a popular choice for building large systems.
I’d love to hear how you find OCaml too restrictive or artificial; I find it nothing of the sort, and the types map down to assembly quite well.
Linux doesn’t like C++ because it has broken OOP syntax in it (which I never ever use; composition and tagging are always superior). Plus they like code that maps to assembly as closely as possible, so something super-low-level like a kernel stays very obvious to reason about.
C is just assembly with a better syntax really.
C++ is VERY open. It is one of those languages that can do anything, and do it crazy-more-efficiently than any other language. Until a language gets close enough to replace it (Rust is the closest in my opinion thus far, but it is still not anywhere near there yet), it will not be replaced, nor should it be.
It may be backwards compatible with C, but that just allows it to reuse more code. If someone writes C-style code in C++ then they should be slapped; there are so many better ways to handle typing, memory handling, etc., all with no overhead, that if someone doesn’t use them they can only be malicious.
Facebook has so far rewritten 50% of the Messenger code to ReasonML, a more JS-like OCaml syntax.
Somebody posted this nice video
(Misread initially ^.^)
I recall that ClojureScript Om benchmarks were typically faster than generic JS React, and that the imposed immutable values were “blamed” for the improved performance (though I think Google’s Closure compiler may have helped a bit). In ReasonML there may be the odd ref here and there, but other than that there should be the usual benefits (and drawbacks) of immutability.
For a better performance the engine would have to diff actual display state rather than the VDOM.
Wrong place. ^.^
Facebook is a terrible company and once wrote a PHP compiler to prop up their legacy code. I really don’t trust their judgement that much.
I’ve extensively used both (and machine language - these were the days lol). I think it’s too easy to swipe C off the table with that remark. It’s sufficiently higher level and sufficiently “levelable” (in that you can build quite powerful high level constructs with the language) to be distinct. And yes, that includes macro assemblers. And it’s machine independent, a huge differentiator from assembly.
Eh, it has helpers, but with a good set of assembly macros even assembly starts looking a lot like C. I don’t think I’m being dismissive of it; it really is essentially assembly with a better syntax. It does not have that many other high-level concepts in it other than
if and function calls and so forth (all of which are trivially emulatable in assembly with macros). I.e. I see no reason to choose assembly; I’d reach for C instead. C has replaced assembly, and if you need to access something CPU-specific you can embed assembly in it anyway.
Also, a good assembly can be fairly machine-independent as well; take LLVM assembly for example.
Of course I have to say here that that is true of any programming language :-). Still, I feel that working in C is qualitatively different from working in assembler; ymmv, etcetera.
I think this is as good a place as any to ask: what is it people find so restrictive about statically typed languages in the ML family? If your types made sense in a dynamically typed language, they would make sense in the statically typed one. If you need something to possibly be different types, look at the type abstractions of the language: make it an enum/sum type, etc.
What are the restrictions people run into?
Edit: To clarify, I am asking which restrictions people object to and why, not just factually what the restriction is (types that unify).
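To make the sum-type suggestion concrete, here is a minimal sketch (the type and function names are mine, purely for illustration) of how a value that “can be different types” is expressed explicitly in OCaml:

```ocaml
(* A value that may be one of several shapes, as an explicit sum type. *)
type json =
  | Null
  | Num of float
  | Str of string
  | List of json list

(* The compiler checks that every case is handled. *)
let rec to_string = function
  | Null -> "null"
  | Num f -> string_of_float f
  | Str s -> "\"" ^ s ^ "\""
  | List xs -> "[" ^ String.concat ", " (List.map to_string xs) ^ "]"
```

The dynamic-language flexibility is still there; the difference is that the set of possibilities is written down and checked.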
I often wish I worked in a language where I could start dynamic and end statically. There are a number of languages with optional type systems, but they aren’t the ones that I use. I can imagine writing the core of an algorithm and then gradually tightening the conditions as it comes into focus. More than a tool, this would be a practice: progressive type elaboration. To get there, we probably need as much control of our type system, minute to minute as we program, as we have of our tests.