Advice for developing a library with Elixir and JavaScript parts. API/Documentation/Publishing/Redux

I am developing a library that, unusually, has a significant client part. In summary, GenBrowser aims to give clients/browsers an identifier that can be used to send messages to/from/between clients. In essence, a pid.

For example:

# browser 1
const { address, mailbox, send } = await GenBrowser.start('http://gen_browser.dev')
console.log(address)

# server
iex> {:ok, client} = GenBrowser.decode_address(address)
iex> GenBrowser.send(client, %{text: "From the server"})

# browser 2
send(address, {text: 'From browser 2'})

# browser 1
var message1 = await mailbox.receive({timeout: 2000})
message1.text
# From the server
var message2 = await mailbox.receive({timeout: 2000})
message2.text
# From browser 2

The security model relies on signing addresses that are sent out of the server; that is why the signed address needs decoding on the server.
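
To illustrate the general idea only (this is not GenBrowser's actual signing format), here is an HMAC-based sketch in Node-style JavaScript; the secret and address format are hypothetical:

const crypto = require('crypto')

// Hypothetical secret - held server-side only, never shipped to clients.
const SECRET = 'server-side-secret'

// Attach a signature when an address leaves the server.
function signAddress(address) {
  const signature = crypto.createHmac('sha256', SECRET).update(address).digest('hex')
  return `${address}.${signature}`
}

// Verify and strip the signature when an address comes back in.
function decodeAddress(signed) {
  const [address, signature] = signed.split('.')
  const expected = crypto.createHmac('sha256', SECRET).update(address).digest('hex')
  // A real implementation should use a constant-time comparison here.
  if (signature !== expected) throw new Error('Invalid address signature')
  return address
}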

I am not very up to date on the front-end world, so I want some advice on how to proceed with this project.
Ideally I want to keep the whole thing in one project and use as much of the Elixir ecosystem as possible. However, that might be limiting.

Advice on the JavaScript API

There are two options for working with messages received.

  1. mailbox.receive(), which takes an optional timeout and returns a promise that resolves with the next message
  2. mailbox.setHandler(messageCallback, closeCallback), where the first callback is called whenever a message is received and the second when the mailbox has been closed permanently

These API names come from their Erlang-world equivalents, receive and handle_*. They look reasonably sensible in the JavaScript world but could probably be more idiomatic.
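
For what it's worth, option 1 can be layered on top of option 2. A minimal sketch of the idea (the internals here are hypothetical, not GenBrowser's actual implementation): buffer incoming messages, and let receive() either take a buffered message or wait for the next delivery:

function makeMailbox() {
  const messages = []
  const waiters = []

  // Called by the transport whenever a message arrives.
  function deliver(message) {
    const waiter = waiters.shift()
    if (waiter) {
      clearTimeout(waiter.timer)
      waiter.resolve(message)
    } else {
      messages.push(message)
    }
  }

  // The promise style layered on the buffer: resolve with a buffered
  // message, or park a waiter until the next delivery (or the timeout).
  function receive({ timeout } = {}) {
    if (messages.length > 0) {
      return Promise.resolve(messages.shift())
    }
    return new Promise((resolve, reject) => {
      const waiter = { resolve }
      if (timeout != null) {
        waiter.timer = setTimeout(() => {
          waiters.splice(waiters.indexOf(waiter), 1)
          reject(new Error('Timed out waiting for a message'))
        }, timeout)
      }
      waiters.push(waiter)
    })
  }

  return { deliver, receive }
}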

npm publishing

The project has a JavaScript build step and the code is always to be used in a client.

  • Would you expect this to be available on npm as well as via a CDN?
  • If so, should only the source (or only the bundle) be published to npm?

JS Documentation

ExDoc has spoiled me with its ease of setting up documentation. In these cases I am probably just looking for the most standard/simple way of doing things.

  • What is the recommended way to document a JavaScript library?
  • Is there a way to integrate this documentation into the hex documentation?

Redux as Actors

With my (limited) knowledge of Redux, I think that Redux and GenServers look quite similar.
Hence the name of this project.
Do you think this is a helpful analogy when describing processes to JavaScript developers?

Within a single process or store there is a single state tree.
I often say that sending a message is like dispatching on a remote store.

Is there any better way of explaining things? Should I just say "actor model" and not confuse the issue by mentioning Redux?


You could take a look at how Phoenix handles its JS dependency.

  • It's on npm, published out of the Elixir repo
  • It has its JavaScript docs on hexdocs.pm

As for the API:
I'd expect a callback-driven interface. Promises are better for async results of a single action/task.


Given the nature of a mailbox as a potential source of an infinite stream of messages (events), a more contemporary interface like Observable seems more appropriate (active implementations: RxJS, xstream, Bacon.js, Kefir) - Learning Observable By Building Observable.

While I realize that await is a popular construct to make asynchronous code look more sequential, I believe that this is ultimately barking up the wrong tree given the mercilessly single-threaded nature of JavaScript runtimes.

On the BEAM strictly sequential code makes sense, given that you can have millions of tiny processes for concurrency. Pretending on a single-threaded platform that sequential flow-of-control programming is sufficient for more complex interactions will, in my opinion, ultimately run into a brick wall.

The real answer is to start writing code that composes event streams and let the underlying platform schedule what gets processed when.
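
For example, the two-callback mailbox described earlier in the thread could be exposed as an RxJS 6 Observable. A sketch (setHandler is the API from the original post; no teardown is registered because the discussed API offers no way to detach handlers):

import { Observable } from 'rxjs'

// Wrap the two-callback mailbox in an Observable: messages become
// next-values and a permanent close completes the stream.
const messagesOf = mailbox =>
  new Observable(subscriber => {
    mailbox.setHandler(
      message => subscriber.next(message),
      () => subscriber.complete()
    )
  })

// Usage, given a mailbox from GenBrowser.start:
messagesOf(mailbox).subscribe(message => console.log('received', message))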

Can you please expand on this line of thinking - i.e. what has led you to this conclusion?

  • The Redux implementation was inspired by the Flux Architecture
  • In the end the Redux analogy may not be that helpful to that many people. It is my impression that Redux adoption may have peaked since the introduction of the new Context API and since more people have started to rely on GraphQL clients (lifting whatever was left of their state up) and of course the general notion that You Might Not Need Redux. People may have adopted Redux for very different reasons and some are now only holding out because they aren’t yet ready to let go of the concomitant development tooling.

Redux is more like Agent, since most effectful operations are strongly discouraged.
Most effects are handled with redux-saga or a thunk. Redux-saga is a little bit like a worker, but there's no mailbox.

The key difference is that Redux simplifies everything by dispatching each event to every possible listener, so there's no such thing as a pid at all. And that's acceptable because on the front end we usually don't have too much data.

Also, all Redux operations happen synchronously, so there's no need for a mailbox.
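
For example (plain Redux, nothing GenBrowser-specific): dispatch runs synchronously and every subscriber sees every action, so there is no addressing step at all:

import { createStore } from 'redux'

const reducer = (state = { count: 0 }, action) =>
  action.type === 'increment' ? { count: state.count + 1 } : state

const store = createStore(reducer)

// Every subscriber is notified of every dispatch - no pid, no mailbox.
store.subscribe(() => console.log('state is now', store.getState()))

store.dispatch({ type: 'increment' })
// By the time dispatch returns, the reducer has already run.
console.log(store.getState()) // { count: 1 }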


Yeah, so far I have used both: something listening for pings uses the callback approach to send pongs.
But on the other side, after sending a single ping, a promise makes sense because it is waiting for a single pong.

I thought that this could all be built on top of a callback interface.

I thought that when using await it was essentially syntactic sugar for the same behaviour as a promise, i.e. other callbacks etc. would work as normal.
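
For example, assuming the receive API from earlier, these two are equivalent:

// await is sugar over .then - both schedule the same continuation.
async function f() {
  const message = await mailbox.receive({ timeout: 2000 })
  console.log(message)
}

function g() {
  return mailbox.receive({ timeout: 2000 }).then(message => {
    console.log(message)
  })
}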

For efficient streaming you need three callback functions (RxJS 6).

  • next(value:T) => void
  • error(error:any) => void
  • complete() => void (i.e. when the other end chooses to close the stream for a non-error reason).
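
i.e. the observer object handed to subscribe (messageStream standing in for any Observable of mailbox messages):

messageStream.subscribe({
  next: value => console.log('message', value),
  error: err => console.error('stream failed', err),
  complete: () => console.log('closed cleanly')
})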

I thought when using await it was essentially syntactic sugar for the same behaviour as a promise.

It is, but (apart from the fact that I find async/await a bit of a tarpit with regard to refactoring) once you start using Observables, Promises come across as one-shot streams (when they resolve they deliver a single value and complete), and the async/await syntactic sugar becomes a "cul-de-sac" with nowhere to go when you need to move to streams.

So the first thing I had to do was to strip all the async/await out of the end user code before I could even think of introducing Observables/Streams.
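
RxJS makes the "one-shot stream" view explicit - from() turns a Promise into an Observable that emits a single value and completes (using the receive call from earlier in the thread):

import { from } from 'rxjs'

// A Promise is a one-shot stream: one next, then complete (or one error).
from(mailbox.receive({ timeout: 2000 })).subscribe({
  next: message => console.log('got', message),
  error: err => console.error(err),
  complete: () => console.log('complete after a single value')
})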

That can be done. I'm most interested in what would be a good foundational API, because I know there are 100 different flavours of how to do JavaScript and there is no reason for this project to favour one over the others.

Thanks for the Gist, that's awesome. Rather a lot to lead a README with, but I think I could certainly update the actual examples.

Well, that wasn't really my intention. Those were simply the files that I changed to stage the next phase - introducing RxJS 6. So the next set of changed files is here:
gen-browser Pinger/Ponger refactor part 2 - enter RxJS 6 · GitHub

Connect.js is the module that I cobbled together in an attempt to wrap the current client API. It's still rather simplistic, given that only a single attempt is made to start a client.

Thinking out loud here for an improved version:

It's in the nature of Observables that once an error occurs the Observable is done and junk. So if the client interface ever experiences an error it makes sense to trash it and start over, i.e. to model:

  • a primary stream of client interface values - i.e. whenever a client interface experiences an error, a new working one has to be acquired. That means there has to be a way to "clean up" the failed interface (e.g. notifying the server if necessary) and acquire a new working client interface.
  • every new client interface (value) is the basis for creating a new message observable. So in effect a stream of client interface values is transformed into a "stream of message streams" (not to be confused with a "stream of messages").
  • The switchMap operator can then be used to transform that "stream of message streams" into a single "stream of messages" (see the sketch after this list). In essence there should be a "stream of messages" that is entirely oblivious to failures of the client interfaces as long as fresh ones can be acquired.
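
A sketch of that shape in RxJS 6, where clientInterfaces and messagesFor are hypothetical stand-ins for however client interfaces are acquired and observed:

import { switchMap } from 'rxjs/operators'

// clientInterfaces: Observable of working client interfaces - whenever one
// fails, a fresh one is emitted after cleanup/re-acquisition.
// messagesFor(client): Observable of messages for one client interface.
const messages = clientInterfaces.pipe(
  // Each new client interface replaces the previous message stream, turning
  // the "stream of message streams" into one flat stream of messages.
  switchMap(client => messagesFor(client))
)

messages.subscribe(message => console.log(message))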

Some points:

  • There doesn’t currently seem to be a way to “close” (i.e. discard) the client interface. Closing the mailbox seems to simply call the registered close handler - meanwhile the underlying EventSource doesn’t seem to be closed.

  • There currently don’t seem to be any opportunities for errors after the client interface has started. That makes me wonder whether there are places where errors are being thrown and not being converted to error values - i.e. ultimately it’s important that there is an error handler and that all errors are channeled towards it. Then there is the classification of the errors. Do all errors mean that a new client interface needs to be acquired or are there other less severe errors?

  • The reason for Promise.reject should be an Error, not just a string.

MDN Promise.reject - Description:

i.e.

reject('Server emitted incorrect first event')

should be

reject(new Error('Server emitted incorrect first event'))

etc.

I’m most interested in what would be a good foundational API

Understood, my approach was to use one of the more sophisticated methods to see if there are any glaring shortcomings. At this point I’m wondering how difficult it would be to deal with the EventSource directly.

The base API would need to accept an error handler (there needs to be some notion of the desirable action after any particular error) and there needs to be a way to cleanly close/dispose of the interface regardless of errors.

It might be an idea to make a Promise-based API a separate, optional npm package (which uses the base callback API). That way it should be easier to bypass any unnecessary functionality.
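
Roughly like this, with all package and option names hypothetical - a thin facade that only wraps, never duplicates, the callback core:

// Hypothetical core package with the base callback API.
import { start } from 'gen-browser-core'

// The optional Promise layer lives in its own package and simply adapts
// the callback pair into a single resolve/reject.
export function startAsPromise(url, options) {
  return new Promise((resolve, reject) => {
    start(url, {
      ...options,
      onReady: client => resolve(client),
      onError: error => reject(error)
    })
  })
}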


Not that difficult - it has a callback-based API. So there is probably only limited value in my layer, which probably just obscures the underlying interface.

This is a good point; it is missing. So far my use case has only been to discard the client when the browser page is closed, but I think it is something worth adding.

mailbox.close is really only to be used by the client code when it loses the connection.

I only consider it an error when a reconnection fails; losing a connection is just a natural occurrence.
The setHandler function does take two callbacks, the second one being called on close.

So here is what I came up with (eliminating the need for mailbox.js, promiseTimeout.js and client.js):
gen-browser Pinger/Ponger refactor part 3 - RxJS 6 on Top of EventSource · GitHub

In hindsight it would have been useful to have something like this from the start: a no-nonsense piece of code clearly illuminating all aspects of the interface from the browser point of view.

I'm not sure that providing a definitive JS library is the way to go - a demonstration one (or two), sure. For maximum flexibility it is necessary to lay bare all the options made available via fetch and EventSource.

I don't think you're in the market to maintain an ultra-flexible (read: ultra-complex and bloated) JS library that will accommodate all the numerous edge cases for connection options that people may want.


Just thinking out loud:

  • The first message to the EventSource is "special". Given that the EventSource is opened with just a URL, the open event is the first response, but that is standardized. So the first message is the first opportunity to return client-specific details.

  • One thing I’m wondering is whether the hard requirement for a “first special message” may make it more difficult to use a more generic library built on top of EventSource with this server protocol.

An alternate means could be:

  • Place a regular fetch to /mailbox. The response contains URLs for

    • creating the EventSource
    • sending messages
    • logging
  • Then create the EventSource with the provided URL (and the first message doesn't have to be special). A timeout against the open event can be used to ensure that the connection is established.

Tradeoff: an additional fetch before creating the EventSource. But there is the added bonus that there is only one root URL, /mailbox - the URLs for sending, logging and creating the EventSource are completely under the server's control.
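
A sketch of the two steps (all URL shapes and field names hypothetical):

async function connect() {
  // Step 1: one fetch to the single root URL; the server replies with
  // the endpoints this client should use.
  const response = await fetch('/mailbox', { method: 'POST' })
  const { eventsUrl, sendUrl, logUrl } = await response.json()

  // Step 2: open the EventSource against the server-provided URL; fail
  // if the standardized open event doesn't arrive within the timeout.
  const events = new EventSource(eventsUrl)
  await new Promise((resolve, reject) => {
    const timer = setTimeout(() => {
      events.close()
      reject(new Error('Timed out waiting for open event'))
    }, 5000)
    events.onopen = () => {
      clearTimeout(timer)
      resolve()
    }
  })

  return { events, sendUrl, logUrl }
}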

You're quite right here. Perhaps having something that stands as a demo is the way to go. Then I can also reference this implementation for an Rx-specific way to go.

Re your alternative: I like the goal. But the EventSource API doesn't allow sending headers, so in the two-step case you described above the first fetch must return some kind of token that can be used to start streaming the mailbox. If this token were part of the path, that would be less secure, because servers often log request paths -
i.e. it would be easier for a hacker to start streaming someone else's mailbox.

The current system with the special message means that getting a mailbox always generates a new one, and it uses the built-in last-event-id mechanism to authorize in case of reconnection.

Would it be helpful if the message came through a different callback? The EventSource interface does allow event types, so the message wouldn't have to come through the same event listener, i.e.

eventSource.addEventListener('special', doSetup)
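
Fleshed out slightly (doSetup and deliver are hypothetical):

// Setup details arrive once on their own event type...
eventSource.addEventListener('special', event => {
  doSetup(JSON.parse(event.data))
})

// ...while ordinary mailbox traffic stays on the default 'message' type.
eventSource.addEventListener('message', event => {
  deliver(JSON.parse(event.data))
})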

I have moved the client code in the project to a single dir. The project could hold the code for more than one npm package; if you are interested we could add /rx_client. In the future it might make sense as a separate package/repo, but while under development it could be more productive in the same repo.

Somehow your explanation jostled something loose. The result is a less boilerplate-y version of ServerEvents.js: gen-browser Pinger/Ponger refactor part 4 - Using RxJS fromEvent, using, etc. to remove some boilerplate · GitHub
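
The core of it (a sketch, not the gist's exact code): using ties the EventSource's lifetime to the subscription, and fromEvent does the listener bookkeeping (the URL is hypothetical):

import { fromEvent, using, Subscription } from 'rxjs'

// using() creates the EventSource on subscribe and closes it on
// unsubscribe; fromEvent exposes its 'message' events as the observable.
let source
const messages = using(
  () => {
    source = new EventSource('/mailbox/events')
    return new Subscription(() => source.close())
  },
  () => fromEvent(source, 'message')
)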

we could add /rx_client

Hmmm … testing, ESLint, etc …


A painful reminder of why I dislike mutability by default (which TypeScript does not fix).

import test from 'ava'
import { fromEvent } from 'rxjs'
import { take } from 'rxjs/operators'

test('stuff', t => {
  let listeners = new Map()
  let source = {
    addEventListener(type, listener) {
      listeners.set(listener, listener)
      console.info('added')
    },
    removeEventListener(type, listener) {
      if(listeners.delete(listener)) {
        console.info('removed')
      }
    },
    send (e) {
      console.info('size: %i', listeners.size)
      listeners.forEach((listener, _key, _map) => listener(e)) // !!!MUTABILITY!!!
      /* FIX:
      let handlers = Array.from(listeners.values())
      handlers.forEach((listener, _index, _array) => listener(e))
      */
      console.info('sent %s', e)
    }
  }

  let nextSource = fromEvent(source,'').pipe(take(1))
  var nextSub

  const subNext = () => {
    nextSub = nextSource.subscribe({
      next (value) {
        console.info('next %s', value)
      },
      error (err) {
        console.error('next Error %o', err)
      },
      complete () {
        console.info('next COMPLETE')
      }
    })
  }

  let sub = fromEvent(source,'').pipe(take(1)).subscribe({
    next (value) {
      console.info('first %s', value)
    },
    error (err) {
      console.error('first Error %o',err)
    },
    complete () {
      console.info('first COMPLETE')
      subNext() // gets 'next 1' with uncopied listeners
    }
  })

  source.send('1')
  source.send('2')

  t.pass()
})

Before fix:

> ava


added
size: 1
first 1
first COMPLETE
added
removed
next 1
next COMPLETE
removed
sent 1
size: 0
sent 2
  ✔ stuff

  1 test passed

i.e. the next handler installs itself before the dispatch loop processing the first event has completed - so the next handler sees the first event.

After fix:

> ava


added
size: 1
first 1
first COMPLETE
added
removed
sent 1
size: 1
next 2
next COMPLETE
removed
sent 2
  ✔ stuff

  1 test passed