Phoenix Sockets, what could be the best approach?

Your approach could definitely work and is perfectly fine at small scale, but you're essentially serving video, and that can also be done with an actual video stream. The advantage of a real video stream is that instead of sending 1-2 MB per JPEG (which adds up to roughly 10-30 MB per second for realtime video at 10-15 fps), a video codec only encodes the changes between frames, cutting the bandwidth to a small fraction of that. (There was a cool talk about how to achieve this which you can find here: https://www.youtube.com/watch?v=eNe5dmRP9Cc ). The TL;DR is to use ffmpeg, which will accept a stream of bytes and convert it to a video stream in realtime, which a client like ffplay can then play back. Probably a bit of a headache to actually get working, though.
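If you want to experiment with that route, here's a minimal sketch of driving ffmpeg through a Port. It assumes ffmpeg is installed and on $PATH; the module name and the flags shown (MJPEG in on stdin, H.264 in an MPEG-TS container out on stdout) are my assumptions, one reasonable starting point rather than the only option:

```elixir
defmodule FfmpegPipe do
  # Sketch: pipe JPEG frames into ffmpeg and read an encoded stream back.
  # Assumes ffmpeg is on $PATH (System.find_executable/1 returns nil otherwise).
  def open do
    Port.open({:spawn_executable, System.find_executable("ffmpeg")}, [
      :binary,
      :exit_status,
      args: ~w(
        -f mjpeg -i pipe:0
        -c:v libx264 -preset ultrafast -tune zerolatency
        -f mpegts pipe:1
      )
    ])
  end

  # Feed one JPEG frame (a binary) to ffmpeg's stdin.
  def push_frame(port, jpeg) when is_binary(jpeg) do
    Port.command(port, jpeg)
  end

  # Encoded output arrives as {port, {:data, bytes}} messages, which you
  # would forward to connected clients (over a channel, socket, etc.).
  def next_chunk(port) do
    receive do
      {^port, {:data, bytes}} -> {:ok, bytes}
      {^port, {:exit_status, status}} -> {:error, {:ffmpeg_exited, status}}
    end
  end
end
```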

The other suggestion above, streaming/chunking an HTTP request, is also not far off the mark, and maybe even preferable because it's a proven code path and a lot more lightweight. Here's a Plug you should be able to use: https://github.com/elixir-vision/picam/blob/master/examples/picam_http/lib/picam_http/streamer.ex
and a thread about it on this forum: Nerves device as an IP Camera?
The idea is that there is a special content type for this kind of 'animated JPEG' (MJPEG) that the browser understands: you just create an img tag and point it at a URL that serves a stream of JPEGs with the 'multipart/x-mixed-replace' content type.
The browser will keep the connection open and wait for new image frames (in the form of JPEG images) served by your plug/server, as in the sketch below.
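For reference, a minimal sketch of such a streaming Plug, loosely modelled on the picam example linked above. The module name is made up, and Picam.next_frame/0 stands in for whatever produces your JPEG binaries:

```elixir
defmodule MyApp.MJPEGStreamer do
  import Plug.Conn

  @behaviour Plug
  @boundary "frame"

  @impl true
  def init(opts), do: opts

  @impl true
  def call(conn, _opts) do
    conn
    # Tell the browser to replace the image with each multipart part it receives.
    |> put_resp_header("content-type", "multipart/x-mixed-replace; boundary=#{@boundary}")
    |> send_chunked(200)
    |> stream_frames()
  end

  defp stream_frames(conn) do
    # Picam.next_frame/0 returns the next camera frame as a JPEG binary;
    # swap in whatever produces your frames.
    jpeg = Picam.next_frame()

    part =
      "--#{@boundary}\r\n" <>
        "Content-Type: image/jpeg\r\nContent-Length: #{byte_size(jpeg)}\r\n\r\n" <>
        jpeg <> "\r\n"

    case chunk(conn, part) do
      {:ok, conn} -> stream_frames(conn)
      # The browser closed the connection; stop streaming.
      {:error, :closed} -> conn
    end
  end
end
```

On the page itself you then only need an img element whose src points at the route this Plug serves.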

Keep in mind that if your existing code works for you, it's fine! But the last method described should be pretty easy to implement. Good luck!
