Is it possible to assign a pipeline (in the Phoenix sense) to Hologram routes?

I’m wondering if it’s possible to assign a pipeline (in the Phoenix sense) to Hologram routes. More specifically: I would like all my Hologram pages – but not necessarily pages Phoenix is still routing – to pass through a bespoke MyApp.Plugs.Auth.

3 Likes

If Hologram can work with Phoenix auth, that would be fantastic. Since work on sessions has already been done, I think there should be a way to authenticate the user and put the token in the session.
In the init function of the page, we can check whether the user token exists in the session and act accordingly.
I’m not sure if we would need to repeat the same steps for each command, though.
Are there any top-level router functions that get called for every route, where we could check for authentication?
@bartblast, if you could provide some guidelines for authentication, that would be wonderful. Thanks.

1 Like

If you’re looking to auth every route, you can put the auth plug before the Hologram handler in the endpoint.ex and, as you say, use the session for passing data around. This works well:

  # the plug's call/2; JWT.verify_header/1 comes from whatever JWT library is in use
  def call(conn, _opts) do
    case conn.req_headers |> Map.new() |> JWT.verify_header() do
      {:ok, jwt_data} ->
        # valid token: stash the email in the (Phoenix-interoperable) session
        conn
        |> Plug.Conn.fetch_session()
        |> Plug.Conn.put_session(:email, jwt_data["email"])

      _ ->
        # missing/invalid token: reject the request outright
        conn
        |> Plug.Conn.send_resp(403, "not authenticated")
        |> Plug.Conn.halt()
    end
  end

with this init in the layout:

  def init(_params, component, server) do
    email = get_session(server, :email)
    put_context(component, :email, email)
  end

(Admittedly, Google’s IAP is doing a lot of heavy lifting here and all I have to do is verify the JWT.)
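For completeness, this is roughly how the wiring might look in endpoint.ex. A sketch only: MyApp.Plugs.Auth stands in for the plug above, @session_options is the usual Phoenix session config, and the ordering relative to the Phoenix router is assumed.

  # endpoint.ex (sketch) - the auth plug runs before Hologram.Router,
  # so every Hologram page sees an already-verified session.
  plug Plug.Session, @session_options
  plug MyApp.Plugs.Auth
  plug Hologram.Router       # assumed to pass unmatched requests on
  plug MyAppWeb.Router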

I’m in the position of wanting one page, a health check for K8S, to not require auth. It would be easy enough to carve out a path-based exception but “security” and “exception” don’t always go well together.
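For what it’s worth, the carve-out could be as small as an extra function clause in the plug; the "/healthz" path here is only an assumed example:

  # Skip JWT verification for the (hypothetical) K8S health check path;
  # every other request falls through to the call/2 clause shown above.
  def call(%Plug.Conn{request_path: "/healthz"} = conn, _opts), do: conn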

2 Likes

Very nice. That works.
But like you said, with pipelines it would be so much better.
Exceptions would keep growing and would quickly become difficult to manage.
I don’t know what @bartblast has in mind regarding authentication and authorisation.
It would be very helpful to know his take.

2 Likes

@bunnylushington, @sreyansjain

This is definitely something that’s been on my mind as the framework evolves.

Current State:

Right now, Hologram doesn’t have Phoenix-style pipelines, but you can achieve similar functionality by placing auth plugs before the Hologram.Router in your endpoint (as @bunnylushington demonstrated). The Hologram.Router is indeed a Plug that can interoperate with Phoenix sessions, so this approach works well for the current use case.

Planned Solution:

I have a setup/3 function planned for the Roadmap (“Setup Lifecycle Hook - Implement the setup (pre-init) lifecycle hook for components and pages”) that will serve as middleware. The idea is to allow you to include specific implementations of the setup function on different pages through the __using__/1 macro. You could then use directives like:

use MyApp.AdminPage  # instead of use Hologram.Page

Or alternatively:

use Hologram.Page, setup: &MyModule.my_fun/3

I’m open to other ideas…

The setup/3 function would receive the regular component and server structs, plus lower-level connection information like headers, and could modify both the server and component structs. This would give you the pipeline-like functionality you’re looking for.
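Purely as an illustration of the shape being described, and with the caveat that none of this exists yet (module name, imports, argument names, and the return value below are all guesses):

  # Speculative sketch only - setup/3 is a planned hook, not a current API.
  defmodule MyApp.RequireAuth do
    # `request` stands in for the lower-level connection info (headers etc.).
    def setup(component, server, _request) do
      # Read data the auth plug put into the (Phoenix-interoperable) session
      # and expose it to the page via context. The {component, server}
      # return shape is a guess.
      user_id = get_session(server, :user_id)
      {put_context(component, :current_user, user_id), server}
    end
  end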

Current Workaround:

For now, you can hook into the endpoint module (as shown in the example in the thread) to handle authentication. When running on top of Phoenix, Hologram uses Hologram.Router, which is a plug, so it can interoperate with Phoenix sessions.

Authentication & Authorization Vision:

Eventually, I want Hologram to have a batteries-included approach for both authentication and authorization. You’d have an Auth module provided by the framework that handles both aspects - you could hook into it with custom auth implementations if you don’t want to use the default. But I want there to be a default auth implementation that works out of the box for both authentication (who you are) and authorization (what you can do).

This is particularly important because, when I observed the Phoenix ecosystem, authentication was one of the main things that tripped users up. Since Hologram’s approach is so unusual (Elixir automatically transpiled to JS), it creates some non-standard problems we have to think about: the client-side code is in essence public, so we need to be especially careful about how we handle sensitive authentication logic and ensure that authorization checks happen server-side. It’s crucial to provide clear guard rails and sensible defaults to help users get started quickly without getting overwhelmed by choices or accidentally exposing security vulnerabilities.

Third-Party Auth Integration:

The framework will make it easy to use third-party authentication and authorization solutions. Since Hologram’s auth primitives will be designed to be pluggable, you’ll be able to seamlessly integrate with existing auth libraries and services while still benefiting from Hologram’s built-in auth features for UI components and authorization checks.

Commands and Authorization:

Until the framework provides those Auth primitives, the server struct gives you access to the session (interoperable with Phoenix) and cookies, so you can implement your own authorization handlers or use an existing auth library for now.
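As a rough example of that workaround, a command can read the session data that the endpoint plug stored and decide whether to proceed. A sketch, assuming the usual command/3 callback and that put_action/2 is how a follow-up client action is triggered; authorized?/1, MyApp.Blog, and the action names are hypothetical:

  # Sketch: guarding a command with the :email the auth plug put in the session.
  def command(:delete_post, params, server) do
    if authorized?(get_session(server, :email)) do
      MyApp.Blog.delete_post(params.id)
      put_action(server, :post_deleted)
    else
      put_action(server, :unauthorized)
    end
  end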

The setup function is definitely high on the priority list - it should solve the pipeline problem elegantly while maintaining Hologram’s philosophy of keeping things simple and composable.

Let me know what you think…

3 Likes

Thank you so much. This really clears things up.

> The setup/3 function would receive the regular component and server structs, plus lower-level connection information like headers, and could modify both the server and component structs. This would give you the pipeline-like functionality you’re looking for.

This would be great.

Great job with Hologram. More power to you.

2 Likes

One unfortunate problem with LiveView being built on Phoenix (and therefore Plug) is that they ended up with two separate pipelines (one for the initial request, and then another for the socket). This complicates the developer experience and creates a performance problem (“double render”).

It would be nice if Hologram could avoid making this mistake, though it won’t be easy as there really are two requests under the hood and the framework needs to make it seem like there’s only one. Also it means you can’t rely on Plug the way Phoenix does because you need to abstract it away into the unified pipeline. So it’s a lot of extra work (which is of course why Phoenix ended up this way).

1 Like

I’m not sure I agree with this being a mistake. As you followed up, it’s a given due to how browsers work – they make HTTP requests. Only JS can open WebSocket connections, but the JS is gated on the browser first having received some HTML, which links to the JS. And for SEO you always need to be able to serve content over HTML, so it’s not as if you could make the HTML pipeline serve only the link to the JS and depend on WebSockets for the actual content. LV already uses live navigation to skip any duplication where it can (once a WebSocket connection is established).

Also, Plug isn’t really the problem anymore. The socket macro in Phoenix is a separate pipeline for historical reasons: Cowboy used to be the only supported web server and there was no abstraction layer for WebSockets, so Phoenix integrated with Cowboy directly for them. Nowadays, with Bandit, there is an abstraction for handling WebSockets on a Plug pipeline via WebSock (see Bare Websockets | Benjamin Milde). This just hasn’t been backported to the socket macro due to some related, but ultimately separate, concerns: Phoenix.Router based socket routing by LostKobrakai · Pull Request #6142 · phoenixframework/phoenix · GitHub

1 Like

I was only referring to API design, of course everything in your post is correct.

I think forcing developers to design their LiveViews to perform twice as much work as necessary on initial render is clearly a design mistake. I understand why it worked out this way, and I am definitely not criticizing anyone. But I don’t see how you could argue this is a good design. Since Hologram is a clean slate I am just suggesting that @bartblast avoid digging himself into the same hole.

This is interesting, but to be clear I am referring to the divergent pipelines in the Phoenix and LiveView APIs. Essentially all of this seems wrong to me.

I’m not sure a unified Plug abstraction would solve this problem, because the navigation-over-socket that the live_session abstraction is used for happens in Phoenix, right? I would think a Plug socket pipeline abstraction would only affect the initial socket connection, not the messages flowing over it (which are actually navigation, but how would it know).

Not to mention solving the double render requires keeping a LiveView process alive for N seconds after the HTTP request in case the socket comes in. I don’t know these internals in detail but something tells me this would be harder than it sounds (or it would have been done by now). I can think of several other problems that would come up (and I’m sure you can think of even more).

This is just one of those cases where coupling your API to a lower-level abstraction like this creates trouble down the road. I know there was a thread about whether Hologram should be an “independent” framework so I had that in mind as well. I am not necessarily suggesting Hologram shouldn’t use Plug/Phoenix, by the way, just that it may be better not to expose them to end-users in the way that Phoenix exposed Plug.

1 Like

Sincere question: what would the alternative be?

Technically you aren’t forced in all scenarios. You can skip it for admin pages. In many cases you can skip it for your entire app if it’s all behind auth, and so long as everything is navigating over live sessions it’s only happening once (I know you know all this).

I would love a nicer API for skipping it, as opposed to having to check connected?(socket) all the time and being forced to provide explicit skeleton renders. Perhaps this is what you are talking about. A connected/3 callback would be really nice.
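For context, the pattern being described looks roughly like this (load_report/0 is a placeholder):

  def mount(_params, _session, socket) do
    socket =
      if connected?(socket) do
        # live render: actually load the data
        assign(socket, :report, load_report())
      else
        # dead render: skeleton only, so the template must handle nil
        assign(socket, :report, nil)
      end

    {:ok, socket}
  end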

1 Like

You can’t skip loading on the initial render unless you are okay with that content a) popping in and b) not being present in the dead render (bad for SEO, bots, and HN readers who claim to browse with JS disabled, and so on). The problem with the pop-in is that it undermines one of the best “advantages” of LiveView, which is on-by-default SSR with no extra work.

Likewise you can’t skip loading on the live render because you need the assigns to actually run the LiveView.

The actual solution is that the assigns from the dead render need to carry over to the live render. One thing you could do is cache your queries so that the live-render queries hit the cache. You could go further and cache the assigns themselves somehow.
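A minimal sketch of the query-caching idea, assuming Cachex (the cache name, key, and Reports.expensive_query/1 are arbitrary, and the cache is assumed to be started in the supervision tree):

  # Dead and live render both call this; the second call within the TTL
  # hits the cache instead of re-running the expensive query.
  defp fetch_report(user_id) do
    {_status, report} =
      Cachex.fetch(:report_cache, {:report, user_id}, fn _key ->
        {:commit, Reports.expensive_query(user_id)}
      end)

    report
  end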

But what should really be happening is that there should only ever be one LiveView process, with one set of assigns, which survives the dead render, and then the live render should be routed back to it when the socket is opened. A lot of things would have to be done to make that work and I’m not trying to make the claim that it would be easy, but from the API side I think it’s obviously the best design.

Async assigns are that API, and they are a fantastic solution to this exact problem. I really like async assigns; it’s very clear that a lot of care was put into handling the annoying edge cases properly. Honestly, they might be the best-designed API in LiveView, probably because they seem to have come directly from Chris dogfooding LiveView at Fly.io.

But they don’t solve the double render, you would get pop-in.
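For readers who haven’t used them, async assigns look roughly like this (a minimal sketch; load_report/0 is a placeholder). The :loading slot is exactly the pop-in being described:

  def mount(_params, _session, socket) do
    {:ok, assign_async(socket, :report, fn -> {:ok, %{report: load_report()}} end)}
  end

  def render(assigns) do
    ~H"""
    <.async_result :let={report} assign={@report}>
      <:loading>Loading report…</:loading>
      <:failed>Failed to load report</:failed>
      <p><%= report.title %></p>
    </.async_result>
    """
  end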

2 Likes

Yes, I’m conflating render with just loading data. For simply not loading data on the dead render I find connected? simpler, and I prefer to relegate async assigns to longer-running requests. YMMV, of course.

1 Like