Best way to handle Custom Element events in LiveView?

Hey folks! I’m the author of Lustre, a frontend framework written in Gleam. Lustre can run components as native Custom Elements, with support for all the good stuff like attributes, properties, custom events, form association, …

I’m currently working on a headless UI library and I’d like to document and encourage its use in Phoenix and LiveView apps, but I’m no LV expert and it’s a little unclear to me how one is supposed to handle events emitted by a custom element. I found a thread from last year that asks a similar question, but it looks like the conversation fizzled out.

As an example, the following is a hypothetical menu that supports both keyboard and pointer interaction. When a menu item is selected (by any means) the menu will emit a "lustre-ui:select" event with a payload containing the value of the selected item.

<lustre-ui-menu>
  <lustre-ui-menu-item value="ignore">
    Ignore
  </lustre-ui-menu-item>
  <lustre-ui-menu-item value="block" class="danger">
    Block
  </lustre-ui-menu-item>
  <lustre-ui-menu-item value="report" class="danger">
    Report user profile
  </lustre-ui-menu-item>
</lustre-ui-menu>

In a typical client app you would attach an event listener to the <lustre-ui-menu> element and perform the associated action when an item is selected, but of course what we’d like to do is somehow send a message to our LiveView process.

As I understand it, LiveView only supports a fixed and predetermined set of native events through bindings like phx-click. What would be the recommended way to work with custom elements that emit non-standard events in LV?

Thanks y’all! :folded_hands:

I’d argue the big “unknown” with webcomponents is the question of whether the emitted events are actually intended to be sent to the server (vs. meant to be used client side). The current solution would be to have a LiveView JS hook somewhere up the tree, which attaches an event listener explicitly for those events and uses this.pushEvent in the hook to forward the data to the server.
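
For illustration, a minimal sketch of that approach, assuming the "lustre-ui:select" event from the example above is dispatched with bubbles: true (the hook name and the "menu-select" server event are made up):

const CustomEventForwarder = {
  mounted() {
    // Catch the custom event as it bubbles up from any descendant
    // <lustre-ui-menu>...
    this.el.addEventListener("lustre-ui:select", (event) => {
      // ...and forward its payload to the LiveView process.
      this.pushEvent("menu-select", { value: event.detail.value });
    });
  },
};

// Registered like any other hook:
// const liveSocket = new LiveSocket("/live", Socket, { hooks: { CustomEventForwarder } });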

As for (form-)inputs, the best way is imo to implement ElementInternals (ElementInternals - Web APIs | MDN) to make the webcomponent a proper form-associated component. Those should then work out of the box with the form abstractions of LiveView.
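
For reference, a bare-bones sketch of what form association looks like (the element name "my-input" is made up; see the MDN page above for the full API):

class MyInput extends HTMLElement {
  // Opt in to form association so the browser treats this element
  // like a native input.
  static formAssociated = true;

  constructor() {
    super();
    this.internals = this.attachInternals();
  }

  get value() {
    return this._value;
  }

  set value(value) {
    this._value = value;
    // The value this element contributes when its form is submitted.
    this.internals.setFormValue(value);
  }
}

customElements.define("my-input", MyInput);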

2 Likes

Hey @hayleigh I would suggest you take a look at GitHub - launchscout/phoenix-custom-event-hook: A LiveView hook for publishing and handling custom events. This contains a hook that will let you handle custom events in LiveView. It is used by GitHub - launchscout/live_elements: A library to make custom elements and LiveView so happy together, which is a library to make custom elements work as LiveView components in a hopefully developer-friendly way.

I had a PR out to add custom event support to LiveView itself, which would eliminate the need for the hook library, but sadly I never got anyone to merge it (or tell me why it was not mergeable). I was quite sad about this, but in the end I had to give up and move on to other things.

Feel free to hit me up to talk about any of this. Custom events, custom elements, and supporting them better in LiveView is a thing I am pretty into :slight_smile:

1 Like

As for (form-)inputs, the best way is imo to implement ElementInternals (ElementInternals - Web APIs | MDN) to make the webcomponent a proper form-associated component. Those should then work out of the box with the form abstractions of LiveView.

Yep, agreed, that’s why I mentioned form association specifically, and also why I used an example that would not be suitable in a form ^.^


Follow-ups, then:

  1. would writing a hook to interact with components from a library like this be considered cumbersome? I couldn’t quite judge whether hooks are intended more as an escape hatch or not. Relatedly, is there an established pattern or idiom for providing hooks to users? Perhaps we could document a drop-in hook for folks to adopt.

  2. is it just the case that these kinds of interaction patterns are not a good fit for LV in general? I strongly believe that this kind of rich UI should run on the client (and so would argue that implementing these as live components would be a poor fit), but the result of those interactions seems like a perfectly valid thing to want to integrate into an LV app.

Both of these look excellent, very nice!

JS hooks are certainly more “escape hatch”. If a hook can be provided, that will certainly make things easier for people who use LV because they don’t want to touch JS. Hooks have a documented interface, so a library would provide the hook and people would only need to import it in their JS and configure LV to use it.
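
For what it’s worth, consuming such a hook would look something like this sketch (the package name "lustre-ui" and its hook export are hypothetical; the rest is the usual LiveView setup):

import { Socket } from "phoenix";
import { LiveSocket } from "phoenix_live_view";
// Hypothetical: a drop-in hook shipped by the component library.
import LustreUIHook from "lustre-ui/hook";

const csrfToken = document
  .querySelector("meta[name='csrf-token']")
  .getAttribute("content");

const liveSocket = new LiveSocket("/live", Socket, {
  params: { _csrf_token: csrfToken },
  hooks: { LustreUIHook },
});

liveSocket.connect();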

As for the second: to me it’s not really clear what the webcomponent you posted does that would need a custom event. I’d expect to use it e.g. like so:

<lustre-ui-menu>
  <lustre-ui-menu-item phx-click="ignore">
    Ignore
  </lustre-ui-menu-item>
  <lustre-ui-menu-item phx-click="block" class="danger">
    Block
  </lustre-ui-menu-item>
  <lustre-ui-menu-item phx-click="report" class="danger">
    Report user profile
  </lustre-ui-menu-item>
</lustre-ui-menu>

1 Like

Good accessible UI needs to handle more than simple mouse navigation; there’s a lot of interaction to handle! A simple menu like this needs to support:

  • arrow keys for navigation (either up/down or left/right depending on orientation)
  • home/end to jump to the beginning or the end of the list
  • enter or click for selection
  • disabled items that are still navigable by keyboard (to notify non-sighted users of their existence) but not activatable by click or enter
  • typeahead, which keyboard users commonly expect for navigating the list more efficiently
  • keeping important aria attributes like aria-activedescendant updated properly, which may mean listening to hover or mousemove events to handle multi-modal interaction

It’s quite a lot! It’s certainly true that one could craft a “good-enough-for-me” experience with just a container and some buttons, but providing a rich, accessible experience takes quite a bit more than that.
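
To make that concrete, here is a rough, heavily simplified sketch of the keydown handling involved, written as a plain custom element (all names are illustrative). It assumes the menu items manage their own focusability (e.g. tabindex="-1"); a real implementation would also need preventDefault, typeahead, disabled-item handling, and the aria bookkeeping mentioned above.

class LustreUIMenu extends HTMLElement {
  connectedCallback() {
    this.addEventListener("keydown", (event) => {
      const items = [...this.querySelectorAll("lustre-ui-menu-item")];
      const index = items.indexOf(document.activeElement);

      switch (event.key) {
        case "ArrowDown":
          items[index + 1]?.focus();
          break;
        case "ArrowUp":
          items[index - 1]?.focus();
          break;
        case "Home":
          items[0]?.focus();
          break;
        case "End":
          items[items.length - 1]?.focus();
          break;
        case "Enter":
          // Emit the custom event from the original post so any
          // listener (e.g. a LiveView hook) can react to the selection.
          this.dispatchEvent(new CustomEvent("lustre-ui:select", {
            detail: { value: document.activeElement.getAttribute("value") },
            bubbles: true,
          }));
          break;
      }
    });
  }
}

customElements.define("lustre-ui-menu", LustreUIMenu);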

From a user’s perspective, think about the rich kinds of interaction made available to us through the native HTML <select> input, and then from an engineering perspective think about how simple it is to consume just a single "input" or "change" event to be notified of the user’s selection.

1 Like

My point is not at all that those shouldn’t exist, but rather how many of them are relevant for LV. Just like click / input / change are enough for native link / button / input / select, I’m wondering why those would not be enough for webcomponents, at least as a baseline.

This might not be the goal of what you’re building, but I’d love to see a webcomponent library which makes certain UI idioms we have, like dropdowns, more accessible, but doesn’t venture out to invent new ways to interact with those elements. A <button> also emits a click event when I press space, even though no mouse was involved. If there’s a search involved, why not play into the existing JS form abstractions.

2 Likes

Just like click / input / change are enough for native link / button / input / select
…doesn’t venture out to invent new ways to interact with those elements.

Just so we’re clear, the native HTML <select> supports all of the things I just listed. We are in agreement that the standards set by native UI widgets should be the baseline, but I think that baseline is much harder to meet than most folks initially think.

Accessibility is about both functionality and semantics: this is why aria-* attributes exist, so that we can tell the browser to replace the native semantics of an element with something else. This matters because when we repurpose existing HTML elements, non-sighted users don’t get the visual cues that tell sighted users the thing they’re interacting with behaves differently from normal.

In the case of our menu made up of buttons, it is a common expectation of the “menu” pattern that it closes once an item is selected. We were satisfied that our styled <div> with some buttons in it looked like a menu, but a non-sighted user just saw a collection of controls. When they activate one, focus is taken away from the button and (if you were diligent) moved elsewhere (and if you weren’t diligent, they now have no focus at all).

Fortunately there are ways to communicate these semantics (in this case role menu on the wrapper and role menuitem on the buttons). Custom elements can even report these semantics natively, without the need for developers to provide aria-* attributes.
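
For example, a sketch of those native default semantics, again via ElementInternals (which exposes the ARIA mixin):

class Menu extends HTMLElement {
  constructor() {
    super();
    this.internals = this.attachInternals();
    // Default semantics: no role="menu" attribute required, and a
    // role attribute set by the page author still takes precedence.
    this.internals.role = "menu";
  }
}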


On the question of why this might be of particular relevance for LV users: as I see it, the current state of play is mostly one of these options:

  • Be satisfied with native HTML elements and the interfaces you can craft with them; don’t fix what isn’t broken. In lots of cases this is not just good enough but good.

  • Be unsatisfied with what native HTML elements provide and begin building pieces of interactive UI in LV using the interaction patterns LV makes simple (click, keydown, etc.), similar to our hypothetical button-based menu.

  • Look to existing client-side rendering options and how you might integrate them with LV (livesvelte, livevue, I assume there’s some React version, …).

If the first option describes you, that’s totally valid, but you’re also not the kind of user who would ever find a UI component library useful, so it’s perhaps a bit moot to discuss.

For the second option: one of the best things about LV is that it provides an entry point for backend devs, and engineers in general, who would otherwise be put off frontend work, to actually start crafting interactive experiences on the web. The flipside of that, perhaps, is that we might be more willing to make trade-offs in terms of accessibility and user experience because of different priorities, lack of experience, or lack of interest.

And the third option means fully embracing JS and everything that entails. For some that means getting access to the best of both worlds, but I would guess that for many of us it means interacting with an ecosystem we would really rather not be in, and spending time writing code in a language we would rather not use.

I think a robust custom element library meets LV exactly where it wants to be: rendering HTML, while ideally abstracting all the hard bits away :slight_smile:

5 Likes

In my projects, I’ve started to move away from hooks and started using the following pattern instead:

window.addEventListener("phx-x:register-events", (event) => {
    const target = event.target;
    // Read the serialized JS command from the element's attribute...
    const onScrollend = target.getAttribute("phx-x-on-scrollend");
    target.addEventListener("scrollend", (e) => {
        // ...and execute it whenever the real DOM event fires.
        liveSocket.execJS(target, onScrollend);
    });
});

And in the template:

<div
  phx-mounted={JS.dispatch("phx-x:register-events")}
  phx-x-on-scrollend={JS.push("scrollend")}
/>

This has the advantage over hooks that you do not need to set an ID, which makes it a lot more reusable.
I think this pattern might also work quite well with custom elements.

I really would advise against using hooks for UI elements, as the ID requirement leads to having to pass down id prefixes everywhere.

Btw, thank you for trying to tackle this hard problem! I’ve spent hours searching for good headless custom element UI libraries, but I couldn’t find any.

2 Likes

Using hooks doesn’t necessarily mean hooks everywhere. There could be a single hook somewhere in the tree handling all the events of its children.

Have you seen the last update on the PR? The last comment from Steffan was asking for some feedback from you. :slight_smile:

Right. Large, singleton-like hooks are OK.
But the OP asked about UI components like a select element, and hooks are not well suited for those.

Maybe if you made a single hook that handles multiple elements it might work. But I haven’t seen this anywhere; do you have an example of where you saw/used that?

Just as an aside, at the risk of going off topic: there is phx-viewport-bottom, so you don’t have to implement it yourself. See Bindings — Phoenix LiveView v1.0.17

I have not! I stopped checking it after a few days and I guess I never saw the notification about it. Thanks!

Hey, this is certainly an interesting idea, and I can see how it might generalise by just iterating over the target element’s attributes and handling any phx-x-on-*. Am I right that the available JS DSL wouldn’t be enough to extract any data from an event? In the menu example above, a select event would carry with it the value of the item selected, and you’d probably want to know about that!
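
A sketch of that generalisation, building on the listener from earlier in the thread:

window.addEventListener("phx-x:register-events", (event) => {
  const target = event.target;

  for (const attr of target.attributes) {
    // Wire up a listener for every phx-x-on-* attribute, taking the
    // rest of the name as the DOM event to listen for, e.g.
    // phx-x-on-scrollend -> "scrollend".
    if (!attr.name.startsWith("phx-x-on-")) continue;
    const eventName = attr.name.slice("phx-x-on-".length);

    target.addEventListener(eventName, () => {
      liveSocket.execJS(target, attr.value);
    });
  }
});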

Yeah, JS commands are encoded as an opaque data structure which you’re not supposed to customise client side, but only provide to the various exec APIs verbatim.

I do it by modifying the JS command. But as @LostKobrakai said, I probably shouldn’t do that…
Anyway, I think in user code it’s fine, as it has a very simple structure.
So maybe this is “illegal”, but here’s my code:

const addEventDetail = (js, detail) => {
  let decoded;
  try {
    // JS commands serialize to a JSON array of [kind, args] pairs;
    // this is a LiveView internal, so it may break without warning.
    decoded = JSON.parse(js);
  } catch (e) {
    // if not a JS command, we assume it's a plain event name
    decoded = [["push", { event: js }]];
  }
  const cmd = decoded.map(([kind, args]) => {
    if (kind == "push") {
      // Merge the event detail into the push command's value payload.
      return [kind, { ...args, value: { ...(args.value ?? {}), ...detail } }];
    } else {
      return [kind, args];
    }
  });
  return JSON.stringify(cmd);
};

Used like this:

const cmd = addEventDetail(onScrollend, { someData: 42 })
liveSocket.execJS(target, cmd);

But yeah, I wouldn’t use this in a library, as it relies on internals.

1 Like