Hi! I was wondering what patterns people have used for the following scenario.
- When the page first loads, spawn some (potentially longer-running, e.g., fetching data from an external API) task.
- When that task finishes, update the UI.
What I have after a couple iterations is something like this:
# do this in handle_params because I want live_patch to trigger it
def handle_params(params, _uri, socket) do
  if connected?(socket) do
    view_pid = self()

    spawn(fn ->
      result = make_slow_api_call(params)
      send(view_pid, {:api_call_done, result})
    end)
  end

  {:noreply, socket}
end

def handle_info({:api_call_done, result}, socket) do
  {:noreply, assign(socket, api_result: result)}
end

defp make_slow_api_call(...) do
  # ...
end
This works OK, but there are a couple of shortcomings that I would love to handle better:
- It’s hard to test, for multiple reasons:
  - make_slow_api_call is private, so the only way to trigger it is to render the view.
  - Whenever I render the page, I need to wait for make_slow_api_call to finish, or else I have stray processes after the test finishes. Doing this is not trivial and is error-prone.
- Handling errors is complicated. I could spawn_link, but if there’s any error in the task then the view reloads, which is OK sometimes but not if the error is due to, say, an upstream service being down, which would lead to an infinite loop of reloading. I can monitor the process; it just feels clunky to need to do that on my own.
I’m curious if anyone has come across a better pattern for this, or maybe there’s something obvious I have just missed.
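For reference, one common way to soften the error-handling part of this is Task.Supervisor.async_nolink/2, which monitors the task for you without linking it to the view. A minimal sketch, assuming a Task.Supervisor registered as MyApp.TaskSupervisor has been added to the application's supervision tree:

def handle_params(params, _uri, socket) do
  if connected?(socket) do
    # the task is monitored, not linked, so a crash in the API call
    # does not take the LiveView down with it
    Task.Supervisor.async_nolink(MyApp.TaskSupervisor, fn ->
      make_slow_api_call(params)
    end)
  end

  {:noreply, socket}
end

# the successful result arrives as {ref, result}
def handle_info({ref, result}, socket) when is_reference(ref) do
  # stop monitoring and flush the :DOWN message for the normal exit
  Process.demonitor(ref, [:flush])
  {:noreply, assign(socket, api_result: result)}
end

# if the task crashed, we get a :DOWN message instead of a crash in the view
def handle_info({:DOWN, _ref, :process, _pid, _reason}, socket) do
  {:noreply, assign(socket, api_error: true)}
end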
For long-running tasks that are entirely under my control, I will do the same thing. However, for external API calls, which usually have some sort of rate limiting, I will just use a GenServer for each API endpoint. If you really need parallel access then you can use a NimblePool.
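A minimal sketch of that per-endpoint GenServer idea (all names here are illustrative, not taken from the posts above):

defmodule MyApp.SomeAPI do
  # one process per endpoint serialises requests, which keeps rate limits easy to respect
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  # callers block until the response arrives (or the timeout fires)
  def fetch(params, timeout \\ 15_000), do: GenServer.call(__MODULE__, {:fetch, params}, timeout)

  @impl true
  def init(opts), do: {:ok, opts}

  @impl true
  def handle_call({:fetch, params}, _from, state) do
    # the actual HTTP request happens here, one call at a time
    {:reply, make_slow_api_call(params), state}
  end

  defp make_slow_api_call(_params) do
    # placeholder for the real HTTP client call
    {:ok, "response"}
  end
end

Note that GenServer.call/3 still blocks the caller, which is the point the next reply picks up on.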
I think the thing here is less about managing access to the remote server and more about how not to block your LiveView process, whether you’re directly pinging the remote endpoint or whether it goes through a GenServer.
@dantswain this is a good question! We’ve done basically the same thing (I think we use a task but same idea) and it definitely is a challenge for testing. I think our solution involved setting an assign which controlled whether it was done sync or async, and then in the tests we set that assign to sync. Definitely an area that could use some slightly more ergonomic patterns though!
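A rough sketch of that sync/async switch, assuming the mode comes from application config so tests can flip it (all names are illustrative):

def mount(_params, _session, socket) do
  # tests set `config :my_app, live_load_mode: :sync`
  {:ok, assign(socket, load_mode: Application.get_env(:my_app, :live_load_mode, :async))}
end

def handle_params(params, _uri, socket) do
  if connected?(socket), do: load(socket.assigns.load_mode, params, self())
  {:noreply, socket}
end

# async in production: the view renders immediately and updates later
defp load(:async, params, view_pid) do
  spawn(fn -> send(view_pid, {:api_call_done, make_slow_api_call(params)}) end)
end

# sync in tests: the result message is already in the mailbox before the test
# renders, so there are no stray processes to wait for
defp load(:sync, params, view_pid) do
  send(view_pid, {:api_call_done, make_slow_api_call(params)})
end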
Sometimes I don’t need a true async operation. I only need to send the rendering to the client side fast and hide my long task in the gap before the first user interaction comes back, so everything is still deterministic. It should also be easier to test and to handle errors.
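One common way to get that behaviour is to have the view message itself on the connected mount, so the first render goes out right away and the slow work then runs in the same LiveView process (a sketch, not code from the post above):

def mount(params, _session, socket) do
  if connected?(socket), do: send(self(), {:load, params})
  # the first render ships immediately with a loading placeholder
  {:ok, assign(socket, api_result: :loading)}
end

def handle_info({:load, params}, socket) do
  # runs in the LiveView process right after the initial render,
  # so the flow stays deterministic and is straightforward to test
  {:noreply, assign(socket, api_result: make_slow_api_call(params))}
end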
Thanks for the thoughtful replies so far! Our use case is actually spread out over a couple time scales, from sub-second to maybe 10 seconds depending on the service. This is a backend “dashboard” style app that collects information from a few different places - some of them are quick and some aren’t. The upshot is we definitely have a case for doing some of these things asynchronously I think.
I’m starting to lean towards introducing a mediator layer to handle these things and then trying to keep the “business” logic (actually calling out to the external service) on the opposite side of that layer from the view. It feels heavy-handed but I haven’t been able to come up with something more elegant (that’s why I asked here!). It’s going to be a little tricky because I don’t want that mediator layer to be a single process, which would become a bottleneck, so I’ll probably have an ETS table in front of it.
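A minimal sketch of what the ETS-fronted read path could look like, assuming a separate mediator process owns the table and refreshes it (module and table names are illustrative):

defmodule MyApp.Dashboard.Cache do
  @table :dashboard_cache

  # the mediator process creates the table when it starts
  def create_table do
    :ets.new(@table, [:named_table, :set, :public, read_concurrency: true])
  end

  # the mediator writes fresh results as they come in
  def put(key, value), do: :ets.insert(@table, {key, value})

  # views read straight from ETS, so no single process becomes a bottleneck
  def get(key) do
    case :ets.lookup(@table, key) do
      [{^key, value}] -> {:ok, value}
      [] -> :miss
    end
  end
end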
As for testing, what I would do is refactor make_slow_api_call into another module, like App.PrivateAPIs, add a config to my application like config :my_app, private_api: App.PrivateAPIs, and have something like @private_api Application.get_env(:my_app, :private_api) in my LiveView.
If you want to make it more “standard”, you can use the adapter pattern.
This way, you can test your private API individually, and you can also write a mock module for when you want to test other things.
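Put together, that suggestion might look roughly like this (a sketch: App.PrivateAPIsMock and fetch/1 are illustrative names, and Application.compile_env/2 is the compile-time counterpart of the get_env/2 call above):

# config/config.exs
config :my_app, private_api: App.PrivateAPIs

# config/test.exs
config :my_app, private_api: App.PrivateAPIsMock

# in the LiveView module
@private_api Application.compile_env(:my_app, :private_api)

defp make_slow_api_call(params) do
  # delegate to whichever module the environment configured
  @private_api.fetch(params)
end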
Thanks for the good suggestion @vsoraki. We generally do use a similar pattern to that to mock out our api calls. The thing that makes this difficult to test is that the call (mocked or not) is being done in a process that is spawned from the view. So to test it, we need to know when that separate process has finished, sent a message back to the view process, and the view process has consumed that message.
The simplest way to do that is to repeatedly render the view and check for the effect that you expect, but that can be slow and fail in unexpected ways. What we’ve been doing is actually using the API mock (using Tesla) to capture the pid of the spawned process and relay it back to the test process, then have the test process monitor the spawned process and assert_receive that we get the :DOWN message for the spawned pid. This works OK most of the time but is still sometimes flaky (false negatives), and it requires a bit of overhead in terms of setup.
It looks something like this:

test "fetches data from API and renders results", %{conn: conn} do
  test_pid = self()
  # tag because we may have multiple APIs to call
  tag = :api_call

  Tesla.Mock.mock(fn _env ->
    # relay the pid of the spawned process back to the test process
    send(test_pid, {tag, self()})
    %Tesla.Env{status: 200, body: "Hello from the API"}
  end)

  {:ok, view, _initial_html} = live(conn, "/")

  # wait for the spawned process to check in, then monitor it
  assert_receive {:api_call, spawned_pid}
  ref = Process.monitor(spawned_pid)

  # once the spawned process exits, the view has received the result
  assert_receive {:DOWN, ^ref, :process, _pid, _reason}, 1_000

  assert render(view) =~ "Hello from the API"
end
I just wanted to post this shorter ElixirConf talk and accompanying GitHub repo for anyone else landing here looking for more recent information on this topic, because I found it very useful.
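For anyone landing on this thread today: recent LiveView versions (0.20+) ship built-in async helpers that cover this scenario. A minimal assign_async sketch:

def handle_params(params, _uri, socket) do
  {:noreply,
   assign_async(socket, :api_result, fn ->
     # runs in a supervised task; must return {:ok, map} or {:error, reason}
     {:ok, %{api_result: make_slow_api_call(params)}}
   end)}
end

# in the template, <.async_result assign={@api_result}> then handles the
# loading and failed states declaratively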