Would Phoenix LiveView perform well enough for a financial application?

Is Phoenix LiveView performant enough for this kind of live-updating application? Let's say I would like to be able to hit 120 Hz, so every frame should be rendered in at most 8.3 ms.

I already have the data feed part done in Rust, so I could simply use it from Elixir with a NIF. My idea is to create a web-based DOM (depth of market); an example can be found here: Ultra Bond UBM1 Trade 7 Ticks $3300 Winner Using Jigsaw DayTradr - Trade Process and Tips - YouTube

It’s pretty much a big table that is updated quickly.
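For reference, a minimal sketch of how an existing Rust feed could be wired in via Rustler; the app name, crate name, and function are hypothetical placeholders, not the real feed code:

```elixir
defmodule MarketFeed.Native do
  # Rustler compiles and loads the Rust crate as a NIF at startup.
  # :market_feed and "market_feed_nif" are made-up names for illustration.
  use Rustler, otp_app: :market_feed, crate: "market_feed_nif"

  # The stub is replaced by the Rust implementation when the NIF loads;
  # raising here only happens if loading failed.
  def next_depth_snapshot(), do: :erlang.nif_error(:nif_not_loaded)
end
```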

Can you elaborate on the connection between “financial application” and “8.3ms render time”?

For a trading UI, it's much more important that it stays responsive when market conditions are volatile. Depth data is typically most dynamic around the bid/ask, with less competitive orders more static. But in volatile conditions the whole depth can change dramatically, and this is precisely the time the UI should stay responsive.

In general you want to amortise visual depth changes down to a lower frequency, say 2-5 Hz, with maybe some affordance for indicators of trading activity at a slightly higher frequency. This amortisation often involves buffering data early, close to the source, discarding out-of-date changes, and possibly sending diffs.
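A minimal sketch of that amortisation, assuming depth changes arrive as `{price, size}` pairs: newer updates for the same price level overwrite older ones, and only the accumulated diff is broadcast at ~4 Hz (all names are illustrative):

```elixir
defmodule Depth.Throttle do
  use GenServer

  @flush_ms 250  # ~4 Hz, in the 2-5 Hz range above

  def start_link(_), do: GenServer.start_link(__MODULE__, %{}, name: __MODULE__)

  # Called by the feed handler for every raw depth change.
  def update(price, size), do: GenServer.cast(__MODULE__, {:update, price, size})

  @impl true
  def init(pending) do
    :timer.send_interval(@flush_ms, :flush)
    {:ok, pending}
  end

  @impl true
  def handle_cast({:update, price, size}, pending) do
    # Keep only the latest size per price level; out-of-date
    # intermediate changes are discarded, not forwarded.
    {:noreply, Map.put(pending, price, size)}
  end

  @impl true
  def handle_info(:flush, pending) when map_size(pending) == 0, do: {:noreply, pending}

  def handle_info(:flush, pending) do
    # Only the diff accumulated since the last flush goes out.
    Phoenix.PubSub.broadcast(MyApp.PubSub, "depth", {:depth_diff, pending})
    {:noreply, %{}}
  end
end
```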

You could possibly get 120 Hz, but you'd risk leaving your clients and server with insufficient headroom.

I'm curious about the render time requirements too… don't forget most browsers cap frame rate.

Without external bottlenecks, LiveView itself is perfectly able to reach 120 Hz.

I made a little PubSub-based proof-of-concept multiplayer game with LiveView where people can move their character around a shared map (rendered in an HTML canvas). It easily reaches 240 Hz; 480 Hz is not perfectly stable (and in my case of bidirectional communication, network latency may become an enemy).

I suppose that everything depends on how fast your render/diff is. Maybe a really huge table could slow it down enough to fail to reach 120 Hz.

The depth of market (DOM) is fast. When you see financial charts with candlesticks, those charts are usually aggregated by a timeframe, like 1 minute or 10 minutes. The DOM is what's actually happening at the market. We could make the DOM truly real-time, updating on each new message it receives, but that would be very resource-intensive. The other approach is to apply a little aggregation in the millisecond range, which is more reasonable since a human will be consuming it; it makes no sense to update in the nanosecond range if you simply can't react at that kind of speed.

The 8.3 ms render time is because I would like the application to run at 120 fps. I don't know if this is even possible in a browser, to be honest, or how resource-intensive it is. The server would be running locally on the trader's computer and the interface would be a web application.

Yes, this is one of my questions, but I think most modern browsers can safely reach 120 Hz; 60 Hz for sure.

Yeah, this is my main question. I think there is no better way to answer this than implementing something and seeing for real. The external bottleneck will be minimal: the Rust layer is really fast and everything there happens in microseconds; after that, it's all about the rendering with LiveView.

You might want to look into that; I'm fairly sure I read recently that they're capped at 50 fps…

If your chart is gonna be in JS, I'm not even sure that LiveView rendering is the bottleneck here anyway. You'd just push_event the new chart data from the LiveView up to the JS, and then it's gonna render client-side.
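To make that concrete, a sketch of the LiveView side of that flow, assuming a diff message like the one above; the hook name and payload shape are made up, and the JS hook would pick the data up with `this.handleEvent("depth", cb)`:

```elixir
defmodule MyAppWeb.DomLive do
  use MyAppWeb, :live_view

  @impl true
  def mount(_params, _session, socket) do
    if connected?(socket), do: Phoenix.PubSub.subscribe(MyApp.PubSub, "depth")
    {:ok, socket}
  end

  @impl true
  def handle_info({:depth_diff, diff}, socket) do
    # Lists encode cleanly to JSON; push_event/3 ships the payload to the
    # "DepthChart" hook, which renders it client-side (e.g. into a canvas).
    levels = Enum.map(diff, fn {price, size} -> [price, size] end)
    {:noreply, push_event(socket, "depth", %{levels: levels})}
  end

  @impl true
  def render(assigns) do
    ~H"""
    <div id="dom" phx-hook="DepthChart" phx-update="ignore"></div>
    """
  end
end
```

With phx-update="ignore", LiveView leaves the element alone after mount, so the server never re-renders the chart itself; it only streams data to it.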

Pushing Back

This is probably just my ignorance of high-frequency trading, but after reading all of the above I still struggle to understand why such a high frame rate matters for a financial application.

If a human gains a competitive advantage by seeing market data milliseconds after it happens and then reacting in milliseconds, I worry that they will still be at a competitive disadvantage to a computer program set to execute the trade the moment a condition is met. So I don't understand how anything over 60 fps will matter.

If I am wrong, and fps and milliseconds do matter, I worry that the person using the web UI will be at a competitive disadvantage to someone using some kind of native UI.

And everyone will be at a competitive disadvantage to the person connected to the same ISP and down the street from the data source.

Trying to help

All that said, if your data source is sending too many events per second, you can definitely rate limit how often you update the browser.

Here is an example of where I did exactly that on a side project: lib/quick_average/room_coordinator.ex · main · De Wet / quickaverage · GitLab

If we had 1000 events come in, the LiveView process would get backed up trying to update the client 1000 times. Or, more often, the JS would get backed up trying to process and render the tidal wave of messages from the LiveView process. That code example is a GenServer that keeps track of the state the LiveView would normally keep track of, and then only sends that state to the LiveView X times per second.
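Not that linked file verbatim, but a stripped-down sketch of the pattern it describes: the GenServer absorbs every event and forwards one snapshot per interval to the LiveView process (names and the 10 Hz rate are illustrative):

```elixir
defmodule MyApp.StateCoordinator do
  use GenServer

  @push_ms 100  # "X times per second" with X = 10

  def start_link(live_view_pid), do: GenServer.start_link(__MODULE__, live_view_pid)

  # Producers call this as fast as events arrive.
  def put(pid, key, value), do: GenServer.cast(pid, {:put, key, value})

  @impl true
  def init(live_view_pid) do
    :timer.send_interval(@push_ms, :push)
    {:ok, %{lv: live_view_pid, state: %{}}}
  end

  @impl true
  def handle_cast({:put, key, value}, data) do
    # Absorb the event; nothing is sent to the LiveView yet.
    {:noreply, put_in(data, [:state, key], value)}
  end

  @impl true
  def handle_info(:push, data) do
    # The LiveView sees at most one snapshot per interval, no matter
    # how many events arrived in between.
    send(data.lv, {:state, data.state})
    {:noreply, data}
  end
end
```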

Elixir gives you a plethora of rate-limiting and data-processing patterns to choose from. One fun one that comes to mind: many different Elixir processes could each track a different piece of data and write it to ETS, and then the LiveView just reads it from ETS X times per second.
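A sketch of that ETS variant, with writers and reader fully decoupled; table name, interval, and key shape are made up, and the template is omitted:

```elixir
# Any number of producer processes write current values; last write wins,
# so the table always holds only the freshest value per key.
:ets.new(:market_data, [:set, :public, :named_table])
:ets.insert(:market_data, {100.25, 42})

defmodule MyAppWeb.DepthLive do
  use MyAppWeb, :live_view

  @impl true
  def mount(_params, _session, socket) do
    # Poll 10 times per second instead of receiving every event.
    if connected?(socket), do: :timer.send_interval(100, :refresh)
    {:ok, assign(socket, levels: [])}
  end

  @impl true
  def handle_info(:refresh, socket) do
    # Read whatever is current; anything overwritten in between
    # was never seen by this process at all.
    {:noreply, assign(socket, levels: :ets.tab2list(:market_data))}
  end
end
```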

uPlot can be rather fast even with high-volume data, as can be seen in the Phoenix LiveDashboard. Though it's true, many JS chart libraries would probably become the bottleneck themselves.
