Invoke Phoenix endpoint programmatically at runtime

I’ve got a Phoenix application we’ve been running for some time. We now need to add AMQP as a message source. The AMQP messages will generally match the Phoenix routes in functionality. But we have substantial business logic in the Phoenix pipelines and controllers that we can’t easily refactor at this time.

In Phoenix app test code, we can use Phoenix.ConnTest to build up conn structs and dispatch calls to our Phoenix endpoints without actually going through TCP/HTTP.

Is it possible to use Phoenix.ConnTest at deployed runtime from our AMQP message subscriber, create a matching HTTP request in the form of a conn struct, and then use ConnTest.dispatch/4 or /5 to invoke our application endpoint so that the HTTP pipelines and business logic can be executed? Then extract the response from the returned conn struct and build a reply AMQP message?


Sort of answered my own question. I created a module as seen below that uses build_conn() to generate a basic conn struct and then passes it into the get() macro with the “/” route. When invoking it manually at runtime, the endpoint is hit successfully and the result is a conn mutated by the route logic.

defmodule TryConnTest do
    @endpoint TestControllerCallsWeb.Endpoint
    import Phoenix.ConnTest

    def go do
        build_conn()
        |> get("/")
    end
end

Presumably, I could manipulate the conn struct, adding HTTP headers, constructing a POST body if necessary, etc., and invoke the Phoenix endpoints internally from an AMQP subscriber in the same application.
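For what it's worth, that idea might be sketched roughly like this. This is a sketch only: `AmqpDispatch`, `MyAppWeb.Endpoint`, and the `"/api/things"` route are placeholder names, and it assumes a JSON API; `dispatch/5` here is the function version of what the `get`/`post` macros expand to.

```elixir
# Sketch, not production code: module, endpoint, and route names are
# placeholders. Assumes the Phoenix application is started normally.
defmodule AmqpDispatch do
  import Plug.Conn, only: [put_req_header: 3]
  import Phoenix.ConnTest

  @endpoint MyAppWeb.Endpoint

  # Build a conn, send it through the endpoint (pipelines, router,
  # controller), and return the pieces an AMQP reply would need.
  def call(method, path, params) do
    conn =
      build_conn()
      |> put_req_header("accept", "application/json")
      |> dispatch(@endpoint, method, path, params)

    {conn.status, conn.resp_body}
  end
end
```

An AMQP subscriber could then call something like `AmqpDispatch.call(:post, "/api/things", payload)` and wrap the returned status and body in the reply message.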

My question then becomes … is there a downside to doing this in production until we get round to refactoring the business logic so that it can be invoked from both AMQP and the Phoenix controllers?


The biggest downside that comes to mind: your management decides that readable code isn’t worth the effort again (they already decided that once to get you here) and the refactoring never happens.

Other, more politically realistic downsides you could cite:

  • authentication / authorization is going to complicate things, as will things like CSRF tokens. Code CAN be written to fake all of these and/or short-circuit them for “faked” requests, but it adds complexity - and if not done carefully, can accidentally introduce security bugs

  • you’re likely to need wrapper code to deal with turning typed AMQP data back into the parameter shapes your controllers expect, and then MORE wrapper code to transform the output back from a stream of bytes into typed data. It’s not going to be complicated code, but it’s going to need maintenance every time the controllers etc. change
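To make that second point concrete, the request-direction half of such a wrapper might look something like this. Purely illustrative: `AmqpHttpBridge` and `to_controller_params/1` are hypothetical names, and a real version would also handle the response direction (e.g. decoding a JSON body back into typed data).

```elixir
# Hypothetical wrapper: converts decoded AMQP payloads (which may use
# atom keys) into the string-keyed params maps that Plug would have
# produced from a parsed HTTP body.
defmodule AmqpHttpBridge do
  def to_controller_params(payload) when is_map(payload) do
    Map.new(payload, fn {k, v} -> {to_string(k), to_controller_params(v)} end)
  end

  def to_controller_params(list) when is_list(list) do
    Enum.map(list, &to_controller_params/1)
  end

  def to_controller_params(other), do: other
end
```

So `%{user: %{name: "a", tags: [1, 2]}}` becomes `%{"user" => %{"name" => "a", "tags" => [1, 2]}}`, which is the shape a controller's `params` argument would normally have.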


Many thanks for the thoughtful comments, al2o3cr.

My team agrees with each one of your points. The root of the problem is that we already have two parallel implementations of the service: one that does HTTP only and is highly scalable and multi-tenant, with API keys and advanced granular authentication (Elixir, Kubernetes, cloud-based), and another that does AMQP and HTTP but is single-tenant only, with scalability limitations (C#, desktop-server-based).

That so much business logic is in the HTTP routes has mostly to do with the fact that the team (me included) were new to Elixir and Phoenix when we built it (you mean Phoenix wasn’t the app?).

Now in our desktop server product we want to address scalability and add granular authorization and multiple isolated dataspaces on a per-user or user group basis (aka multi-tenancy).

So our best bet would seem to be to instead add AMQP to our Elixir application, keeping all its granular auth, separate data spaces and other goodness, and scale it down from Kubernetes to an Elixir release deployment on a desktop. We would completely deprecate our desktop-only version. But that means we need AMQP. Sigh…

Your point about CSRF is a really good one, and the HTTP logic DOES definitely deal with that. We’ll have to look into how to deal with that when the request comes from AMQP. MANY THANKS for pointing that out!

We’re already looking into how to address security from the AMQP side, because those messages don’t currently carry the API key (or login session key). Further, AMQP is nowhere near as secure in this environment because anyone connecting to the bus can snoop everything.

At least we share most of the data structures in terms of data shapes between the two implementations, since they share a single HTTP API. There’s not 100% congruence between AMQP data and HTTP data, but they’re very close. The translations will be minimal and are already generally understood. It helps that we won’t be changing the controllers much because of REST data contracts. We’re more likely to add new routes than change an existing one.

Our preliminary testing doesn’t indicate any significant technical downsides. We really want to surface any downsides now, before we commit to management that this approach will more than likely meet schedule requirements.

al2o3cr, again, many thanks for your thoughtful reply!
