ECMAScript book

FYI

O’Reilly: Celebrate 21 Years of JavaScript - For one week only, SAVE 50% on all JavaScript ebooks and video training. Use discount code WKJVSR - Deal expires September 26, 2016 at 5AM PT.

Examples:

etc.

2 Likes

FYI
Kyle Simpson is working on Functional-Light JavaScript (Indiegogo).

1 Like

Thank you!

How it feels to learn JavaScript in 2016

:grinning:

via

4 Likes

I laughed so hard, my stomach hurts hahaha

1 Like

Hehe, I saw that on reddit, and from personal experience the past 4 months it is so, so very true. It is a horrible language and a horrible ecosystem. ^.^

1 Like

Given the limitations I’d say it could have been much worse :slight_smile: What if Elixir had to target 10 BEAM implementations by poorly cooperating vendors and had to support all the versions of all the implementations going back like 6 years :slight_smile:

2 Likes

Hehe, true true, but that does just prove my point on how the whole ecosystem is horrible. :wink:

/me holds out hope for webassembly… then compiling BEAM to it…

3 Likes

I would put more blame on the ecosystem. It seems we have moved from “JavaScript is most despised because it isn’t SOME OTHER LANGUAGE” to a situation where everybody feels entitled to “re-interpret” JavaScript into some opinionated variation of the original that they personally are more comfortable and happy with.

ECMAScript is quite serviceable when it is used in a manner that plays to its strengths, but ultimately that requires a competent knowledge of the language specifications (which often reveal the intent behind the design). It is no universal language, though - there is no such thing.

… the (WebAPI) situation (which JavaScript got blamed for) in which jQuery became the unifying platform after 2006. jQuery worked so well (too well, it could be argued) that many users never bothered to go into JavaScript very deeply.

I have to wonder if this situation continues to some degree with the (opinionated) libraries/frameworks du jour - i.e. some people learn just enough JS to wield their libraries of choice, expect to assimilate JS by osmosis via the various library tutorials, and then move on to write libraries in their particular JS dialect. It seems that this tool focus often cannibalizes time that might be more productively spent getting familiar with JavaScript itself.

I mean, which programming community would not realize that the overuse of in-line function expressions, whether in the callback-hell “triangle of doom” or in seemingly endless promise chains, is just a reincarnation of the mind-bending Arrow Anti Pattern?
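A tiny, hypothetical sketch of that anti-pattern and the usual escape hatch. The helper names (`getUser`, `getOrders`, `sumTotals`) are made up, and their callbacks run synchronously here just to keep the example self-contained:

```javascript
// Made-up stand-ins for async APIs; they call their callbacks
// synchronously so the example is self-contained and runnable.
function getUser(id, cb)       { cb({ id: id, name: "user" + id }); }
function getOrders(user, cb)   { cb([{ total: 10 }, { total: 32 }]); }
function sumTotals(orders, cb) { cb(orders.reduce((s, o) => s + o.total, 0)); }

// The triangle of doom: every step adds a level of indentation.
function grandTotalNested(id, done) {
  getUser(id, function (user) {
    getOrders(user, function (orders) {
      sumTotals(orders, function (total) {
        done(total);
      });
    });
  });
}

// The same flow flattened with named functions: constant nesting depth,
// and each step has a name you can read in a stack trace.
function grandTotalFlat(id, done) {
  getUser(id, onUser);
  function onUser(user)     { getOrders(user, onOrders); }
  function onOrders(orders) { sumTotals(orders, done); }
}

grandTotalNested(1, (t) => console.log(t)); // 42
grandTotalFlat(1, (t) => console.log(t));   // 42
```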

The other issue is that while JS looks like a “C-like” language (curly braces and all that) JS developers really need to be encouraged to explore the other side of the JS family tree - Scheme and the like.

Ever heard of Python 3?

If the adoption of DevOps continues to become more widespread and Python continues to be adopted as a shell-script alternative, I have to wonder whether the Python community is in for a similar sort of ride. Furthermore, is that type of “ride” correlated with a large number of the community members “learning programming on the job”? Maybe some people don’t realize that “learning to program” goes beyond “learning a programming language”.

3 Likes

True, though I have two big issues with JavaScript:

  1. Too many incompatible interpretations! I.e. Chrome’s, Firefox’s, IE’s and Edge’s are still not identical even today. It makes me yearn for a single compiler, the version of which I control in the build system, that compiles to a binary. I keep running into things that work perfectly in, say, Chrome, but break in Firefox, or vice-versa.
  2. Untyped. Things like ‘flow’ help, but say I want a true 64-bit integer with proper overflow as the CPU handles it: nope. Want to work with binary? Then I have to use a TypedArray or ArrayBuffer or the like, except those do not work in some of the dialects, so they are out! Etc… etc… >.>
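To make point 2 concrete, a quick sketch using only standard JS: `Number` is a 64-bit float, so exact integers stop at 2^53, and an `Int32Array` is one of the few ways to get CPU-style wraparound (and even then only 32-bit, not 64):

```javascript
// Number is an IEEE-754 double: exact integers end at 2^53 - 1.
console.log(Number.MAX_SAFE_INTEGER);               // 9007199254740991
console.log(9007199254740992 === 9007199254740993); // true - precision already lost

// Int32Array stores real 32-bit two's-complement values, so overflow
// wraps the way the CPU would - but there is still no 64-bit integer.
const i32 = new Int32Array(1);
i32[0] = 2147483647;  // INT32_MAX
i32[0] = i32[0] + 1;  // the addition happens in float64, the store truncates
console.log(i32[0]);  // -2147483648
```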

All the languages on top of it fix neither of those underlying issues. asm.js was looking promising for a while, but it’s dead. WebAssembly is… well, really slow-moving; I do not even know what its status is right now, or whether it has died too… Even then, not everything will support it properly. >.>

Chrome’s native-whatever-it-was-called looked promising for a while, but it’s dead now too. All of these show that JavaScript is broken, but there is no way to fix it, because there are Too Many Vendors, and as such it will always be broken.

1 Like

Heh, I like JS and have fun messing around with it, but only in the context of Node. Can’t stand the frontend.

1 Like

It also unfortunately requires understanding how the various JS engines are implemented, and the problem is that the design of the underlying engines is encouraging a programming style that is the opposite of the “intent behind the design” of the language itself.

2 Likes

I guess you are referring to this - frankly I’m a bit shocked at Babel’s low percentage when compared to Chrome/Firefox/Edge given that Babel seems to be the de facto standard for JSX processing. However, when it comes to something like C++ compilers, multi-compiler/multi-platform code has always been inundated with pre-processor directives and conditional code to accommodate non-conformant compilers and varying target platforms. Even if you chose one single compiler, typically a separate executable is needed for each divergent platform targeted. I guess it’s true: a browser is a platform.
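The browser-as-platform analogy can be pushed further: the runtime equivalent of those pre-processor directives is feature detection. A hedged sketch (the `makeKey` helper is made up for illustration):

```javascript
// Runtime feature detection: the JS analogue of #ifdef per target platform.
const hasSymbol = typeof Symbol !== "undefined";        // ES2015 engines
const hasTypedArrays = typeof Int32Array !== "undefined";

// Hypothetical helper: use a Symbol key where supported, a prefixed
// string key elsewhere - one codebase, divergent "platforms".
function makeKey(name) {
  return hasSymbol ? Symbol(name) : "@@" + name;
}

console.log(typeof makeKey("cache")); // "symbol" on modern engines
```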

“Untyped” is a bit harsh. Now, I agree that I prefer the safety net of strong static typing, but I’m also not blind to the fact that static typing can become a straitjacket, introducing extensive up-front and often verbose ceremony via the definitions of and references to the necessary interfaces/type classes.

There is just something appealing about the relative simplicity of duck typing when doing function composition while letting the supporting types “evolve as needed” - I don’t think that can be (easily) matched by a compile-time, statically typed language (yet). For example:

function composeMyFunctions() {
  /*
     stage1 :: {a, ...rest} -> {a, d, ...rest}
     stage2 :: {b,c,d, ...rest} -> {b,c,d,e, ...rest}
     ??? is {a, d, ...rest} sufficient for {b,c,d, ...rest} ???
     if we start with {a, b, c, ...rest} then YES otherwise NO
   */
  return compose(stage2, stage1);
} // end composeMyFunctions

/*  http://sebmarkbage.github.io/ecmascript-rest-spread/

    https://babeljs.io/docs/plugins/transform-object-rest-spread/
    http://babeljs.io/docs/plugins/transform-es2015-destructuring/ 
*/
/*  Needed/guaranteed:  {a} -> {a,d}
    Actual:             {a,b,c} -> {b,c,a,d}
    In general:         {a, ...rest} -> {a, d, ...rest} 
*/
function stage1({a, ...others}) {

  return Object.assign({}, others, {
      a: a,
      d: "abcd".repeat(a),
    });
} // end stage1

/*  Needed/guaranteed:  {b,c,d} -> {b,c,d,e}
    Actual:             {b,c,a,d} -> {a,b,c,d,e}
    In general:         {b,c,d, ...rest} -> {b,c,d,e, ...rest} 
*/
function stage2({b, c, d, ...others}) {

  return Object.assign({}, others, {
    b: b,
    c: c,
    d: d,
    e: Math.floor(d.length/b) + c
  });
} // end stage2

// (f . g)(x) = f(g(x)) - i.e. the first function to execute is the last argument
function compose(...fns) {
  if(fns.length < 2) {
    if(fns.length < 1) {
      return function id (a) { return a; };
    } else {
      return fns[0];
    }
  }

  return fns.reverse().reduce(composeOnPrev);
  // ---

  function composeOnPrev(g, f, _currentIndex, _array) {
    return f_compose_g;
    // ---

    function f_compose_g(x) {
      return f(g(x));
    } 
  } // end composeOnPrev 
} // end compose
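For what it’s worth, a condensed, self-contained version of the same pipeline makes the YES/NO comment above concrete (same stage semantics, object spread in place of `Object.assign`):

```javascript
// Hypothetical condensed version of the stages above.
const compose2 = (f, g) => (x) => f(g(x));
const stage1 = ({ a, ...rest }) => ({ ...rest, a, d: "abcd".repeat(a) });
const stage2 = ({ b, c, d, ...rest }) =>
  ({ ...rest, b, c, d, e: Math.floor(d.length / b) + c });

const pipeline = compose2(stage2, stage1);

// Starting with {a, b, c}: stage1's output satisfies stage2's needs - YES.
console.log(pipeline({ a: 2, b: 3, c: 1 }));
// { a: 2, b: 3, c: 1, d: 'abcdabcd', e: 3 }

// Starting with only {a}: stage2's b and c are undefined - NO, and
// nothing complains; e just silently becomes NaN.
console.log(pipeline({ a: 2 }).e); // NaN
```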

Are you referring to dartium? It seems “they” are still pushing it: Lars Bak & Kasper Lund: Want to be a Better Programmer? (GOTO Chicago 2016).

[quote=“andre1sk, post:33, topic:1175”]
the problem is that the design of the underlying Engines is encouraging a programming style that is the opposite of the “intent behind the design” of the language itself.
[/quote]
Would you happen to know any sources that elaborate on how that manifests itself (i.e. the resulting coding style) - I’m just curious.

1 Like

V8 and other engines are pretty much optimised for “classic” (minus inheritance :)) OOP-style code vs prototype-based OO. There are a lot of good talks on V8 internals by Vyacheslav Egorov: My Talks
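A rough sketch of what that optimisation bias looks like in practice, per the usual hidden-class/inline-cache explanations in those talks (`Point`/`makePoint` are made-up names):

```javascript
// Objects that keep a single "shape" (hidden class) keep property
// accesses monomorphic, which is the fast path in V8-style engines.
function Point(x, y) {
  // Every Point gets x then y in the same order: one hidden class.
  this.x = x;
  this.y = y;
}

function makePoint(x, y) {
  // Properties added conditionally: Points end up with differing shapes,
  // which defeats the engine's inline caches.
  const p = {};
  p.x = x;
  if (y !== undefined) p.y = y;
  return p;
}
```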

2 Likes

Is it good? Is it still relevant with recent versions of Elixir and Phoenix?

1 Like

Even ES5 is not implemented very consistently across the browsers that I have to support; it is painful.

Used to be a huge issue with C++ back in the day, but nowadays if there is a C++ programmer that cannot compile their application (not talking about drivers or system interfaces) for every platform out there, then they are using the wrong standard libraries.

[quote=“peerreynders, post:34, topic:1175”]
“Untyped” is a bit harsh.
[/quote]

True, I did mean lack of static typing.

I have never experienced that, ever. I would pick OCaml over Erlang in a heartbeat if not for the VM. I would pick C++ over Python even for small shell scripts, and indeed I often do; I program in C++ with heavy typing via a set of libraries that enforce it, though I would pick Python over C++ if it is truly just a quick one-minute job. Which brings this back to JavaScript. JavaScript being untyped is fine for a quick one-minute job, like setting something in some div somewhere, but for the big apps everyone now seems to be building in it, the lack of static typing lets an entire class of bugs happen. Based on my experience over the past 20-30 years, I still hold firm that 90% of them would categorically not be possible with a proper static typing system.
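One member of that bug class, as a minimal sketch (`addTax` is a made-up example): implicit coercion keeps the program running with wrong values where a static type checker would have refused to compile:

```javascript
// A number and a string from, say, a form field take the same code path;
// JS coerces rather than complains.
function addTax(price, rate) {
  return price + price * rate;
}

console.log(addTax(100, 0.2));   // 120 - as intended
console.log(addTax("100", 0.2)); // "10020" - * coerces to number, + concatenates
```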

That is easily possible in statically typed languages. I’m not sure how your example shows it to be difficult?

No, not that, actually I’d not heard of that (thanks for the link! ^.^), it was something native… let me google, ah it was Google NaCl. I’d made a few things in it in the past, wonderful idea, I loved it! Sadly they’ve been removing its support the past year or so… :frowning:

2 Likes

I wasn’t aware of that but I guess I shouldn’t be surprised given some of the features that were added in ES6 to appease the classic/class-based OO fundamentalists.

Interesting.

To me it’s inevitable that at some point in the life cycle of a composed function pipeline the data payload/result/context will undergo some kind of change/addition that is of interest to a minority of non-adjacent functions but is irrelevant to the majority of the functions participating in the pipeline. Typically, in your run-of-the-mill statically typed language, this change can, via a domino effect, impact parts of the code that don’t directly depend on it. Now typically in OOD this would trip the “single-responsibility principle” and “interface segregation principle” alarms - but functions in a function composition only accept one single argument, which tends to be of some aggregate type.

So often the “data payload/result/context” is simply re-modeled as a map/dictionary with string keys and possibly string values (which have to be re-parsed by any interested party) - essentially bypassing static typing anyway.
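A minimal sketch of that re-modeled payload (the field names are made up): every consumer re-parses, and a misspelled key degrades silently instead of failing at compile time:

```javascript
// "Stringly typed" context object: string keys, string values.
const payload = new Map([
  ["userId", "42"],
  ["createdAt", "2016-09-26T05:00:00Z"],
]);

// Every interested party re-parses the value it cares about...
const id = parseInt(payload.get("userId"), 10);  // 42

// ...and a typo in a key is just `undefined`, not a type error.
const missing = payload.get("user_id");          // undefined
```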

Now I haven’t had any exposure to OCaml directly but I have been in fits and starts keeping tabs on F# as an alternative to C# since 2003 - the most notable effort during the MEAP of Real-World Functional Programming. In retrospect I have to say that for me personally the OCaml syntax looks way too imperative (rather than declarative) - so it never put a dent in the armor of my imperative mindset - it took Clojure to do that. Maybe now I might view OCaml/F# in a different light.

This is actually a fascinating talk that I’ve done a few times in the past. :smiley:

Now, take C++ or OCaml, as those are the two ‘well-typed’ or better languages that I know (Java’s type system… is a joke, I can describe why if curious), if you change a type, say some value from an integer type to a string type, or adding or removing a field in some kind of structure, compiling only fails on those areas that actively use those parts, which make it clear where exactly you need to fix since you refactored the type. Compare to a dynamically typed system (or even some statically typed but weakly typed systems, like C in some cases) where this will silently break lots of areas while still compiling. As well as I’ve never fallen back on to a dictionary or similar type to store data that should be typed. I will go through pains to make sure that everything stays as typed as possible (which is mostly a thing in C++, you can make it very well typed, though it is easy not to do that as well and fall back to breakable C’isms). As an example (if I read your paragraph right, please correct me if not ^.^), if I change such a part of a structure, then any function that operates only on other parts are unaffected. Operating through library bounds should always have a well defined and versioned API as well (no leaking innards!). But in C++, OCaml, even Java, I’ve never fallen back to implementing a data payload/result/etc as a map/dictionary with generic keys and values. I convert from user or remote server to properly typed internals absolutely immediately (I could point to some excellent C++ libraries if you are curious on how to do this well). :slight_smile:

F# is modeled heavily on OCaml; it basically is OCaml on .NET (though with a weaker functional package ecosystem, since most well-used .NET libraries are poorly designed for it, but it works around that well). Basically, if you ignore the horrible .NET’isms, then if you know F# you know OCaml; they are very close. OCaml still has a better type resolution system, and F# (last I saw?) still entirely lacked the insane power of the first-class OCaml module system. Modules in OCaml are like modules in most things - namespaces in Java/C++/.NET/etc., or individual compilation units or whatever - except they can be generated in-line, passed around like values, changed (thus returning a new one with the change), etc. The usual example of OCaml’s module system is using modules like integers and operating on them, except they can carry their own type information (this is how OCaml can replicate Haskell’s HKTs and such; technically OCaml modules are a little more wordy than HKTs, but they are far more powerful, as they can do far more than just replicate HKTs or typeclasses):

(* Taken from one of the OCaml books: *)

(* And do note, this is the insane high-level stuff that mainly just library
	 authors deal with. *)

(* First lets define a module type (not an instanced module, just a type),
   and for succinctness I will fully type it to 'int' to demonstrate how
   we can operate on modules to make new modules *)
module type X_int = sig val x : int end

(* Now lets make a 'functor', basically a module-level function that operates on
   and can return modules.  This one will just take a module that fulfills the
  `X_int` type and return a new one that also fulfills the `X_int` type. *)
module Increment (M : X_int) : X_int = struct
    let x = M.x + 1 (* Incrementing the x in the module and returning it as a new module! *)
  end

(* Now lets define a module that fulfills the `X_int` interface: *)
module Three = struct let x = 3 end

(* And now let's create a new module of the prior that is incremented: *)
module Four = Increment(Three)

(* You can increment on any module that fulfills the interface of `X_int`: *)
module Three_and_more = struct
    let x = 3
    let y = "three"
  end

(* This will return a module with just the `x`, although there are ways
   around that too to return the original type: *)
module Four = Increment(Three_and_more)

(* For more examples see:  https://realworldocaml.org/v1/en/html/functors.html *)


(* Now the above is just modules working on modules, that does not make modules
   first-class, rather it makes them, useful and unique and powerful, but not
   first-class.  An example of the above with modules as actual first-class
   values that can be passed around functions, again starting with: *)
module type X_int = sig val x : int end

(* We define the Three module again (we could do it inline below too, but
   being explicit here): *)
module Three : X_int = struct let x = 3 end

(* Lets turn it into a first class module, carry it around 'as' a value: *)
let three = (module Three : X_int)
(* To do so we assign it a specific module type. *)

(* Say we have Four again: *)
module Four = struct let x = 4 end

(* We can even make a list of modules of the same signature but different
	 implementations *)
let numbers = [ three; (module Four) ]
(* We did not need to ascribe module Four to `X_int` since its type is
   already known by that point (Three was already typed), so typing Four
   was optional *)

(* Same as above, but as an anonymous module: *)
let numbers = [three; (module struct let x = 4 end)]

(* Now the above could be pretty succinct you'd think, however to access the
   internals of a first-class module it has to be 'unpacked' back into a
   plain module, done via `(val ...)`, like: *)
module New_three = (val three : X_int)
(* `New_three.x` will equal `3` *)

(* Let's wrap that action into a function to make it simple, and showing how we
   can access a module passed through a function: *)
let to_int (module M : X_int) = (* We unpack the first-class module here, within the erlang'y/elixir'y style pattern match *)
	M.x (* And access it here, one-line longer than similar Haskell, meh *)

(* We can operate over first-class modules and return them too *)
let plus m1 m2 =
  (module struct
     let x = to_int m1 + to_int m2
   end : X_int)

(* Easy to use as any other value *)
let mod_six = plus three three

let twelve = to_int (List.fold_left plus mod_six [three;three])

(* However this only shows how you can send pre-typed modules around and operate
	 on them, useful sure, but not HKT's kind of power, so let's make: *)
module type Bumpable = sig
	type t
	val bump : t -> t
end

(* The above makes a generic type `t` that will be defined by the implementors
	 of the signature, but must also define a function `bump` that takes a `t` and
	 returns a `t`.  So let's make a couple here of a couple different types: *)
module Int_bumper = struct
	type t = int
	let bump n = n + 1
end

module Float_bumper = struct
	type t = float
	let bump n = n +. 1.0
end

(* And let's convert them to first-class modules so we can pass them around, we
   are exposing the type in the signature via `with`, usually done by whatever
	 library actually makes these modules of course, you can hide the type too,
	 thus preventing users from accessing the innards directly, just leave out the
	 `with` and everything after it on the signature: *)
let int_bumper = (module Int_bumper : Bumpable with type t = int)
let float_bumper = (module Float_bumper : Bumpable with type t = float)

(* Now lets define a function that can work on any type of the appropriate
	 module signature (like a typeclass in haskell, this is HKT territory in
	 Haskell, impossible to do in, say, Elm and such) *)
(* Pattern matching everything on the argument list, do note that (type blah) is
   defining an unknown type in a function signature and is not something passed
	 in, this function will take two arguments, a first-class module that is a
	 `Bumpable` of any type, and a list of those types *)
let bump_list (type a) (module B : Bumpable with type t = a) (l: a list) =
  List.map B.bump l (* Mapping over the arguments *)

(* And using it: *)
let listOfIntsOf_2_3_4 = bump_list int_bumper [1; 2; 3]
(* Result:  [2; 3; 4] *)

let listOfFloatsOf_2point5_3point5_4point5 = bump_list float_bumper [1.5; 2.5; 3.5]
(* Result:  [2.5; 3.5; 4.5] *)

(* The part you pass in to the generic function that identifies the 'types' is
   called a `witness` in the OCaml world, but this is how you can do
	 Haskell-like HKT's in OCaml.  The upcoming OCaml proposal that adds implicits
	 would keep most of the above the same except you define an `implicit module`
	 instead of just `module` and when in the same scope it will be transparently
	 brought in to a function like `bump_list` (if its module sig is defined as
	 `implicit` too) to make it like this as it will get the module from the
	 scope, but yes you can pass the first-class module as deep as you need as: *)
let listOfIntsOf_2_3_4 = bump_list [1; 2; 3]
(* Result:  [2; 3; 4] *)

let listOfFloatsOf_2point5_3point5_4point5 = bump_list [1.5; 2.5; 3.5]
(* Result:  [2.5; 3.5; 4.5] *)

You can entirely emulate the Haskell typeclass system and HKT’s with OCaml modules or class types, and there is a motion to get a new ‘implicit’ feature in the language that would make it almost as transparent to use as Haskell’s version as well, except of course significantly faster to compile. ^.^

But yeah, OCaml, first class modules, objects, etc, are insanely powerful, and combined with PPX’s OCaml can get any feature that any other language has, LISP’y in power, but a bit more in syntax. :slight_smile:

1 Like

However, in C++ it happens all too easily that seemingly innocuous changes launch a cascade of massive re-compiles (which ultimately impacts “binary compatibility” as well) - unless one is hyper-vigilant about using compilation firewalls (the pImpl idiom) everywhere (i.e. more mental tax), which all too often isn’t the case if the code authors are of the “C++ as a better C” persuasion. The lack of compilation firewalls can also make it extremely difficult to put legacy code under unit test.

I like the way module Three didn’t have to declare its X_int-ness. Though for int_bumper and float_bumper it looks like a cast is going on. Certainly seems like OCaml’s module system has a bit of a different way of doing things.

Have you come across Jon Harrop’s opinion regarding Reason? Yikes.

Oh so true! That is why I compile only subsets of files at a time to make sure they succeed before I continue to a full compile. I used to apply the pImpl idiom religiously, though not as consistently recently… >.>

Not a cast, casts are done via <:, it is just saying that ‘I want to access this module via this signature’ and any module will fit if it fulfills the signature (or more is fine). You could not, for example, cast it to another signature later. But yes OCaml’s module system is…unique is putting it mildly, but amazingly powerful. ^.^

Ooo, I had not, reading, and my comments:

No IDE. I use Merlin and ocp-indent in Emacs to write OCaml code and it sucks. Crashes and hangs ~50x per day. Autocompletion is worthless and there’s no integrated documentation. As a metalanguage, OCaml should be built in harmony with its own IDE written in itself. As is, OCaml development is like pulling teeth compared to F# in Visual Studio.

He sounds like an MS dev, to be honest, looking for a mouse-driven IDE. I’ve never experienced the crashes or hangs he speaks of, and autocomplete is, well, beyond amazing for me: I type a single character and it pops up a list of anything that makes sense in that context for the types involved. I’m always amazed at how well it guesses what I want, which is usually at the top of the list; Visual Studio has never been even close to that accurate, even with Visual Assist.

No support for multicore programming.

This I do concede, although there are libraries that work around it well and the compiler has made quite good strides recently at helping this along.

Sucky uniform data representation means almost everything gets boxed, you have weird 31- or 63-bit integers, many basic types are missing (int16, float32, unicode etc.) and the language is incapable of expressing an efficient generic hash table.

Eh, it is not as bad as he says about boxing; the compiler is fantastic at unboxing things during optimization (which is wonderfully visible in the JavaScript output, unlike machine code ^.^). The lack of int16, float32 and unicode is a bit odd and I do wish they existed at times, but the JavaScript output of BuckleScript fits into JavaScript’s world very well (which only has float64 and int32 as supported types anyway, without using things like typed arrays).
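Those two JS number representations can even be forced by hand, asm.js-style, with standard operators (nothing invented here):

```javascript
// `| 0` truncates to int32 (two's complement) - the asm.js int coercion.
console.log((2147483647 + 1) | 0); // -2147483648

// Math.fround rounds to the nearest float32 - the asm.js float coercion.
console.log(Math.fround(0.1));     // 0.10000000149011612
```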

Gaunt stdlib and many incompatible and incomplete replacements. You can make a set of strings with Set.Make(String) but not with integers because there is no Int module built-in. You can get some of the functionality you need with Batteries Included exclusive or Annexlib exclusive or Jane St Core exclusive or…

The stdlib is very sparse, but that is on purpose, as they do not want to force an entire style when it might not be useful for a situation. However, there are two big libraries that he mentioned, Batteries and Core, that are ubiquitous now and work together (Batteries is strictly a subset of Core and remains compatible with it; Core is… larger in scope; even then, neither is necessary. Annexlib I think is dead).

Interpreted REPL. OCaml doesn’t have a JIT compiler. Instead it provides incompatible compilers that either batch compile to native code (no REPL) or interpret bytecode (the REPL). For example, they evaluate function arguments in different orders (left to right for ocamlc and right to left for ocamlopt). There have been native code REPLs but they were unstable and have been abandoned.

OCaml is a compiler, like C or C++, which happens to have a REPL (though there are C++ REPLs too, to be technical; there is a fantastic clang-based one I use at times), but no, it is not a JIT or a VM or anything of the sort; it is a compiled language, not Python or .NET stupidity. And the function-argument evaluation order is an oddity between the two compilers, but that is only an issue if you are doing order-dependent operations within a single expression, which should not be done anyway (as in C++: C++ leaves argument evaluation order unspecified as well, and like C++ that is for machine-code efficiency reasons, as it is a compiled language designed for speed).

Overall he seems to be trying to treat OCaml like a VM language instead of a compiled language. That is not OCaml’s purpose or reason for being; rather it is meant to be a fully type-safe and fast bare-metal language, and that shows: it is on average within a factor of two of C/C++ and can match them when programmed in that style. It does not need a massive runtime like the .NET or JVM stupidity; it compiles to a single distributable binary, is easy to cross-compile for other platforms, etc. It is a typed high-level language for a low-level world, which is something that .NET, the JVM, Python, etc… do not even approach.

Confronted with all of these major problems Facebook decided to give OCaml a facelift. They didn’t even attempt to address any of the real issues. In fact, they gave OCaml a new user interface in the 21st century that is literally ASCII art in a terminal window:

Yeah, I agree there. The only things Facebook did were add extra curly braces, add ASCII art to the REPL, and add JSX into it (ugh…). There are only one or two small changes I agree with, but it fits Facebook’s interests, not mine. :wink:

1 Like