54) ElixirConf US 2018 – Closing Keynote – Chris McCord

I am OK with this restriction, assuming it only applies at the top level. Is this straightforward to detect, though?

EDIT: This is actually a great idea! Good job! :smiley:

3 Likes

This is not a restriction! This is the result of an algorithm that transforms the AST into an equivalent AST with the variable bindings at the top. The goal is for the user to write a normal template (the template above) and all the rest is done by the template compiler. The goal of having this complex compilation step is to remove all restrictions from the templates.
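To illustrate what such a transformation might produce — a purely hypothetical sketch, not the actual algorithm, with assigns access simplified — consider rewriting a compiled template so every dynamic expression is bound to a fresh variable at the top, leaving a static skeleton below:

```elixir
# Hypothetical sketch: an "equivalent AST with the variable bindings at
# the top". Every dynamic chunk becomes a binding (dyn0, dyn1, ...) and
# the final iodata only references those bindings.
quoted =
  quote do
    dyn0 = to_string(var!(assigns)[:title])
    dyn1 = for u <- var!(assigns)[:users], do: ["<li>", u, "</li>"]
    ["<b>", dyn0, "</b>", dyn1]
  end

# Evaluating it against some assigns yields plain iodata:
{result, _binding} = Code.eval_quoted(quoted, assigns: %{title: "T", users: ["u1"]})
IO.iodata_to_binary(result)
# => "<b>T</b><li>u1</li>"
```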

2 Likes

Oh I see. You are saying templates can be fully compiled in this fashion. That’s an even better idea, yes. I don’t think we need to do this as an extra pass; we can probably change the engine to emit this code from the beginning.

1 Like

Yes, you can try that. If I were to follow that path (I’ve actually tried to do it for about 30 minutes, and it’s not trivial), I’d do the following:

  1. Don’t try to produce a valid expression at all steps. You probably need to keep some context of what variables you have already bound and stuff like that

  2. Instead of “folding over a quoted expression” (which is what an engine currently does), fold over a more complex struct (e.g. %Context{}), which keeps the metadata you need. You do have some experience in writing compilers, so you probably know what you need (NimbleParsec, in particular, is a work of art, although I can’t understand much of the source)

  3. You’ll probably need some minimal postprocessing anyway.
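A minimal sketch of point 2 (the module, struct, and function names here are all my assumptions, not an actual engine API): fold over a struct that carries the buffer plus the bookkeeping metadata, such as which variables have been bound so far.

```elixir
# Hypothetical %Context{} struct the engine could fold over instead of
# a bare quoted expression. Field names are illustrative only.
defmodule UndeadEngine.Context do
  defstruct buffer: [], bound_vars: MapSet.new(), counter: 0

  # Bind a fresh variable name, remembering it so a later pass can hoist
  # the binding to the top of the compiled template.
  def bind(%__MODULE__{} = ctx, prefix) do
    name = :"#{prefix}#{ctx.counter}"
    ctx = %{ctx | bound_vars: MapSet.put(ctx.bound_vars, name), counter: ctx.counter + 1}
    {name, ctx}
  end
end
```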

I can help, but I can’t commit to deadlines: I might have something ready in 1 day or in 1 month. And in any case, I’d like to go the way of rewriting the already compiled template according to a “typical” engine (nothing special, just personal preference).

If you want to try to build the template as you parse the EEx template, I suggest you go that way while I try to transform the already compiled template. In any case, the optimum approach will probably take a bit from both perspectives.

I have a sandbox project named PhoenixUndeadView where I try some approaches. I’ll publish it today or tomorrow so you can see for yourself what I’m doing (not much, currently). I’ve started a topic to sorta blog about it (here), but most of the interesting discussion is in this topic :slight_smile:

5 Likes

Yup. I think this is the best approach indeed. I am just not sure about the interplay between handle_begin + handle_end and handle_expr, but it should be doable. We should probably require Elixir v1.5 too, as doing a metadata approach requires handle_body, which does not exist in earlier versions.

1 Like

RANT (tangentially related to what we’re discussing)

The Elixir and Erlang compilers seem to leave out a lot of easy optimizations that other compilers (e.g. Java’s bytecode compiler, Haskell, OCaml) perform. I don’t know how big the results would be in practice, but currently it’s a little hard to experiment with Elixir’s compilation, because the compiler doesn’t have any extension points that I know of.

It would be interesting to have some kind of Elixir compiler in Elixir (even if it doesn’t compile directly to Erlang), which could be hooked into at certain parts of the compilation pipeline. It would make it easy to perform radical AST rewritings.

Well, Drab does almost the same; your approach is even better, because I replace the whole content. Like in this example:

  <strong><%= @title %>:</strong><br>
  <%= for user <- @users do %>
    Username: <%= user %> <br>
  <% end %>

After processing with the Drab Engine, the HTML would look like:

  <strong drab-ampere="1234"><%= @title %>:</strong><br>
  <span drab-ampere="5678"><%= for user <- @users do %>
    Username: <%= user %> <br>
  <% end %></span>

So in the simple case, when you update @title, it would just update the innerHTML of the <strong drab-ampere="1234">, but when you update the @users collection, it must update the whole <span drab-ampere="5678">. Doing the diff on the client side is a great idea, and it solves the problem when, for example, the system needs to replace the whole form_for and refreshes the <input>s, so the value the user already set is gone.

In Drab, the server-side data is only updated when the client successfully updates. Otherwise, poke returns {:error, description}. Server-side data is rather a cache; I treat the client as the data source.

The other issue is broadcasting changes. At the beginning, I did not want to build a broadcasting version of poke, as the browsers may have different data at the same time. The community convinced me to do it, but it is limited. There were more concepts, like re-sending changes coming from other browsers back to the server, or sending the changes to all Drab servers, but I found them overcomplicated.

3 Likes

Interesting! This is a big distinction then. I know Chris is exploring some ideas and I have some thoughts as well. My latest idea is to store only “params” on the client and you can update those parameters with LiveView. So for example, if you want to do something like pagination, you would start with the parameter set to nil (or zero) and every time you change the page you could update the parameters in the client with a data push (or change the URL - but that’s a separate discussion). So you keep in the client only what is necessary to tell the server where it was. Not much different from the usual request/response life-cycle.

Thoughts?

Agreed. It is always tricky, especially if you consider what will happen on disconnects and so on.

2 Likes

I think this problem is a lot less difficult to solve if you keep all of your state on the server. That way, when something inevitably goes wrong and the user refreshes their page, every issue is solved from a known good state.

edit: well not that you can’t mess up server-side state, but going over the internet is certainly going to expose way more super rare bugs than dealing with your code all running on one node or inside a relatively much more stable local network

3 Likes

Let’s not lose track of what the goal with Drab or LiveView is: the “gold standard” is always the lovingly crafted client side framework with both custom javascript and server side rendering (it really helps if you’re using JS on the server too). That way you can be smart about which data you send, how to deal with disconnect, etc.

These solutions are not operating on the same space. Drab and LiveView are a kind of a hack to simplify development (at least LiveView was explicitly presented as such).

If you start to have a smart client and at the same time the ability to push changes from the server, you now have a distributed system and your code is automatically orders of magnitude more complex for things to work.

2 Likes

That would simplify the implementation, but IMO it is not enough. The power of living assigns (LiveView, Drab.Live) is that you can do everything you could do with a normal controller action, but live.

IMO it is more difficult :slight_smile: - see: handling disconnections.
Now, I have a Drab GenServer linked with the channel process. In case of disconnection, it just dies; I don’t care about cleaning up stuff etc. After reconnect, a new process gathers the data from the client, so if anything was changed before disconnection, we have it in the new process.
Keeping it on the server means that your data must persist across the disconnection. So you need to keep it somewhere and handle timeouts. For me that was more difficult!

2 Likes

This reminds me of an idea which was never implemented, but which might be worth bringing up again (there are some notes about it in the Drab thread on this forum, but that thread is too long to be useful as a data source).

What happens while disconnection during handler process running

In the normal case, the event handler process (the one which runs when you trigger the JS event with drab-click=handler_name) dies, as it is linked to the Drab GenServer, which is linked to the channel process.
If you want your event handler to persist, the best way is to run a background process and pass the socket to it. Imagine that a disconnection happens while this background process is running:

poke socket, status: "Starting DB update..."
# perform long DB operation
poke socket, status: "DB update finished."
# more

poke returns {:error, :disconnected} in case of disconnection, so you should handle it and react accordingly.
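For example, a handler can pattern-match on that return value. This is a sketch only: the `poke/2` below is a stub standing in for Drab’s real function, so the pattern can be shown in isolation.

```elixir
defmodule PokeExample do
  # Stub that simulates Drab's poke/2 against a disconnected client.
  # The real poke pushes assigns to the browser over the channel.
  def poke(_socket, _assigns), do: {:error, :disconnected}

  # Push a status update, aborting cleanly if the client is gone.
  def update_status(socket, status) do
    case poke(socket, status: status) do
      {:error, :disconnected} -> :aborted
      _ -> :ok
    end
  end
end
```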

But what if, during the long DB operation, the client disconnects and reconnects again? In this case, Drab still returns :disconnected on the second poke, as after reconnection a brand new socket is used.

Idea: should Drab handle this?

The idea was to use the unique browser ID to identify the connections:

  • sockets are stored somewhere on the server and linked to browser IDs
  • instead of a socket, all Drab functions receive their own data structure (%Drut{browser_id: binary}) with the ID
  • normal event handlers operate as before, but a new kind of event handler function is introduced, the task, which is no longer linked with the channel process
  • on disconnect, Drab removes the socket from the global browser_id => [socket] storage (note that there may be more than one socket per browser)
  • Drab functions return {:error, :disconnected} only when there are no more sockets on the list for this ID
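The browser_id => [socket] storage could be sketched roughly like this (everything here is hypothetical; Drab.SocketStore is my invention, not a real Drab module):

```elixir
# Hypothetical registry mapping a browser ID to its live sockets.
defmodule Drab.SocketStore do
  use Agent

  def start_link(_), do: Agent.start_link(fn -> %{} end, name: __MODULE__)

  # Called on connect/reconnect: add a socket under the browser's ID.
  def register(browser_id, socket) do
    Agent.update(__MODULE__, fn state ->
      Map.update(state, browser_id, [socket], &[socket | &1])
    end)
  end

  # Called on disconnect: drop just that socket.
  def unregister(browser_id, socket) do
    Agent.update(__MODULE__, fn state ->
      Map.update(state, browser_id, [], &List.delete(&1, socket))
    end)
  end

  # Mirrors the rule above: report :disconnected only when no sockets remain.
  def fetch(browser_id) do
    case Agent.get(__MODULE__, &Map.get(&1, browser_id, [])) do
      [] -> {:error, :disconnected}
      [socket | _] -> {:ok, socket}
    end
  end
end
```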

After this, Drab functions could be changed so that they queue changes to the frontend. Consider:

poke drut, status: "Starting DB update..."
# perform long DB operation
poke drut, status: "Step 1 finished."
# perform long DB operation
poke drut, status: "Step 2 finished."
# perform long DB operation
poke drut, status: "All steps completed."
# more

Let’s say we don’t really care whether the status was updated - DB changes are more important - so we do not check the poke return value. All the changes will be applied when the browser reconnects. You could even close the browser and, after the next connection, have the correct status of the job.

Of course in this case we need to have a cleaning mechanism, with timeouts and so on.
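The queue-until-reconnect behaviour could be sketched as a GenServer like this (hypothetical; PokeQueue and its API are my invention, not Drab code):

```elixir
# Buffers pokes per browser ID and hands them all back on reconnect.
defmodule PokeQueue do
  use GenServer

  def start_link(_), do: GenServer.start_link(__MODULE__, %{}, name: __MODULE__)

  # Queue a change for the browser; applied later, on flush.
  def poke(browser_id, assigns),
    do: GenServer.cast(__MODULE__, {:poke, browser_id, assigns})

  # Called on reconnect: returns all queued changes, oldest first,
  # and clears the queue for that browser.
  def flush(browser_id),
    do: GenServer.call(__MODULE__, {:flush, browser_id})

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_cast({:poke, id, assigns}, state),
    do: {:noreply, Map.update(state, id, [assigns], &(&1 ++ [assigns]))}

  @impl true
  def handle_call({:flush, id}, _from, state),
    do: {:reply, Map.get(state, id, []), Map.delete(state, id)}
end
```

A cleaning mechanism with timeouts, as noted above, would still be needed so abandoned queues don’t grow forever.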

I’ve announced this idea here before, but the community’s answer was: not worth doing; if you want it, handle it yourself. So I postponed the implementation to a future version of Drab. But I still like the idea. Imagine you are coming from another world (let’s say Rails) and get persistent tasks with live status updates for free, with that ease.

2 Likes

In this scenario you’re treating a client as more reliable than a GenServer. With a GenServer you control the environment and the code - you do not control that with the client. You can make environments you control as robust as necessary, while you have very little control over how reliable some client is going to be.

Not necessarily; one of the ways I handle websocket disconnections in texas is by simply falling back to HTTP requests. That means I don’t have to make any assumptions about the client on disconnect; I just clean everything up as if they intended to disconnect - at any point in the future the client can state their intent of continued use by sending another request, which you can use to re-setup their environment as if they were a new client altogether.

This does mean that if they have a bad connection they will not see updates pushed up from the server, but if they have a bad connection that might be the case no matter what your strategy is for handling poor connections

2 Likes

Ah, I thought form_for’s were special-cased to allow more detailed settings? Guess that’s why I never noticed it, since it disables the whole thing when testing, so I generally used unpoly for real-time error checks. That would be a good feature to get into Drab though!

This was how Wt in C++ worked: the client was treated as literally nothing more than a viewport and an input event dispatcher back to the server, but it means you have to hold a lot more active data in memory as well.


@grych You could also add another process/actor between the socket and the rest of the system to cache everything for any specific page view (identified by a unique UUID or token or so), perhaps? It just passes on most things and caches them as necessary until it gets confirmation that the client is up to date with that point?

1 Like

I’m curious how this holds up in production. Right now it’s not a problem I’m trying to solve, because texas keeps a cached version of only the dynamic parts in AST, so the memory used is negligible until you have many connected clients. But even with a huge number of clients, those cached bits are broken apart and labeled, so each part could just as easily point towards a [newest state @ xyz time] (a real impl wouldn’t use time as the uid) that all N clients’ caches point to, and then on update the first client to update creates a new [newest state @ xyz+1] for the rest of the clients to point to once they’re done sending updates to the client.

So it certainly seems there are compression methods available for the cache held on the server side, but in practice I wonder if this is something that’s likely to ever be needed.

1 Like

In Wt in C++ the ‘GUI’ overhead was similar to a desktop GUI program, a set of widgets you put together and so forth (it very much owned its webpage), but it was still quite lightweight, as desktop programs often are when using base interfaces like that. However, holding your own data in the whole setup could definitely get a bit… heavy at times, even to just know the state of it all. Wt had issues scaling above 10k connections on a single server (although this was back in the mid-2000s when I used it heavily), and it wasn’t even because of connection overhead but rather just memory.

If the server-side of the DOM is broken up well enough and it only remembers the dynamic parts, recursively (since templates can call templates and so forth, and other templates can be changed and loaded over time), then it could indeed be made very efficient just by holding on to a set of minimal observation mappings. Thus I’m quite curious to see what a modern style of server-ownership could do compared to the old event-system style of the old Wt GUI APIs; it could work very well. :slight_smile:

As it’s said, You don’t Need it Until you Do. ^.^

1 Like

This is something that I think is possible in texas, just not implemented yet. If a sub-template is dynamic (think Facebook posts on a timeline), then the top-level dynamic property (the timeline) could just cache a list of HTML-AST, and on a top-level diff that would allow me to skip tons of sub-templates (and their sub-templates as a consequence) to get a diff of the overall structure. It would also give me pretty easy targets for each client on the same page referencing the same cache (for the parts that don’t have context-specific rendering rules).

2 Likes

Yes, I’ve been thinking about it, but it was in the long-term plans for Drab 2.0. I’d like to combine this with the reconnect-proof functions which I described before. Maybe Drab should evolve in this direction :slight_smile:

1 Like

I see. I like it!
How would you implement long-running processes in texas? For example, like this one here. Similar stuff was my first use case for creating Drab.

1 Like

Guys, is the thing Chris mentioned about server-side error checking when filling the form available in the Phoenix GitHub repo now?