Is the Raspberry Pi Zero W powerful enough for Nerves + Membrane + video streaming?


As the title states, I would like to know whether the Raspberry Pi Zero W is powerful enough to run the Membrane framework on Nerves and use it to stream video (720p, 24 fps) from the Raspberry Pi camera module to an online service. I also want to use Phoenix for a web control panel.

The specs of the RPi Zero W are not that great, but maybe they are enough:

  • 1 GHz single-core CPU
  • 512 MB RAM

My alternative would be the Raspberry Pi 3 Model B. One important factor is energy consumption and the RPi Zero W uses much less power than the RPi 3B.


The RPi Zero W is absolutely powerful enough for Nerves :slight_smile:

As for video streaming, in my experiments I had to limit the size of the video to 640x480 to keep latency low and the video fluid, but I was using WebRTC and running a full (headless) Chromium browser, so not the most efficient option out there…

I suspect that with a more efficient setup you could achieve better results. I have never tried Membrane, but it sounds great.


According to the camera docs, the encoding happens on the Raspberry Pi’s GPU, not on the camera module, so you need to ensure that the GPU is sufficient.

According to the specs published on Wikipedia, the RPi Zero W has a built-in 1080p30 or 1080p60 encoder (depending on the SoC).

We at the Membrane team have never tested the RPi Zero W, but it seems that it should work.


Hey, it sounds like my idea can be done with the Zero W.

I only briefly looked at the Membrane docs a while back. My first roadblock will be streaming the camera video at a specific resolution and framerate to an rtmp:// URL (e.g. as a YouTube livestream). Can you point me in the right direction for that? I’ve never done any video processing/streaming in code before.

Later on, I would like to be able to start/stop the stream from a Phoenix control panel, adjust settings (stream URL, resolution, fps, etc.), and maybe add some overlay text (if possible). But I need to get the basics done first.


Neither RTMP nor overlays are ready yet; however, they are on the roadmap.

There are some C libraries you can relatively easily wrap yourself via NIFs if you need that sooner than we are going to implement it.

We do not have direct support for the RPi camera module either.

In its current shape, you won’t be able to use Membrane as is and just assemble the pieces.

It seems that what you need to build requires writing some Membrane elements first, and that requires some understanding of how multimedia works. We’ll be happy to guide you; feel free to reach out to the Membrane devs on our Discord channel.


Thanks for the reply!

Is there a rough estimate of when it will (or could) be implemented?

I think the easiest way for now would be to use ffmpeg and call out directly to its binary via a Port or something like that.
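For what it’s worth, calling the ffmpeg binary from Elixir via a `Port` could look roughly like the sketch below. This is only a minimal illustration, assuming `ffmpeg` is available on the target (on Nerves you would have to add it to your system image); the module name and the RTMP URL are placeholders, not anything from Membrane.

```elixir
defmodule StreamPort do
  @moduledoc """
  Minimal sketch: relay an H.264 byte stream from Elixir to an RTMP URL
  by piping it into ffmpeg's stdin. Assumes ffmpeg is on the PATH.
  """

  @doc "Build the ffmpeg argument list for relaying stdin to an RTMP URL."
  def ffmpeg_args(rtmp_url) do
    [
      "-re",          # read input at its native frame rate
      "-i", "-",      # take input from stdin
      "-c:v", "copy", # no re-encoding; the input is already H.264
      "-f", "flv",    # RTMP expects an FLV container
      rtmp_url
    ]
  end

  @doc "Open ffmpeg as a Port; feed it H.264 data with Port.command/2."
  def open(rtmp_url) do
    Port.open(
      {:spawn_executable, System.find_executable("ffmpeg")},
      [:binary, :exit_status, args: ffmpeg_args(rtmp_url)]
    )
  end
end
```

You would then write video data to the port with `Port.command(port, chunk)` and watch for `{port, {:exit_status, status}}` messages to detect when ffmpeg dies.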

Is the RPi camera module special in any way, such that it would require dedicated features?

As multimedia in code is something I’ve never done before, I am not sure I could build those Membrane elements. And I don’t want to waste the Membrane devs’ time by having them guide me all the way through it.

I can’t say for sure when RTMP will be implemented.

Membrane is project-driven to avoid implementing unnecessary or badly designed features, so we prioritise what is required by the projects we develop at Software Mansion (the company behind Membrane).

We do have a project in the backlog that requires RTMP and is supposed to start very soon, but it is not 100% confirmed yet.

You can rely on ffmpeg for RTMP; filling gaps in Membrane that way is common practice. Even we do this in some projects :wink: and later replace the ffmpeg parts with Membrane elements once they’re ready.

The RPi camera module has quite a specific programming interface. To call it from Elixir directly, we would need to port that interface to Elixir.

Unless you strive for 100%-Elixir code or need to process the streams in a sophisticated manner, it might be easier to just call the command-line raspivid tool that ships with the camera software, pick up the H.264 stream it records, and pass it to ffmpeg.

It is a bit of a rough approach and won’t allow any advanced processing, but it should do the basic job and be good enough to validate the idea. Hopefully, we will release the missing elements in the meantime.
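The raspivid-into-ffmpeg approach could be sketched like this from Elixir, by running the shell pipeline under a Port. The raspivid flags used (`-t 0` to run forever, `-o -` to write raw H.264 to stdout) are standard; the module name, bitrate, and RTMP URL are placeholders. It assumes `/bin/sh`, `raspivid`, and `ffmpeg` all exist on the target, which you would need to arrange in your Nerves system.

```elixir
defmodule PiStream do
  @moduledoc """
  Sketch: raspivid writes raw H.264 to stdout; ffmpeg wraps it in FLV
  and pushes it to an RTMP URL. Placeholder values throughout.
  """

  @bitrate 2_500_000

  @doc "Build the shell pipeline string for the given RTMP URL."
  def pipeline_cmd(rtmp_url, opts \\ []) do
    width = Keyword.get(opts, :width, 1280)
    height = Keyword.get(opts, :height, 720)
    fps = Keyword.get(opts, :fps, 24)

    # -n: no preview; -t 0: record forever; -o -: H.264 to stdout.
    # -c:v copy avoids re-encoding on the Zero W's weak CPU.
    "raspivid -n -t 0 -w #{width} -h #{height} -fps #{fps} -b #{@bitrate} -o - " <>
      "| ffmpeg -re -i - -c:v copy -an -f flv #{rtmp_url}"
  end

  @doc "Run the pipeline under /bin/sh; port messages arrive in the caller's mailbox."
  def start(rtmp_url) do
    Port.open(
      {:spawn_executable, "/bin/sh"},
      [:binary, :exit_status, args: ["-c", pipeline_cmd(rtmp_url)]]
    )
  end
end
```

One caveat: some RTMP ingest services (YouTube among them) expect an audio track, so you may need to replace `-an` with a silent audio source; check your target service’s requirements.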


I think I will just combine raspivid and ffmpeg for now, as shown here, and replace it as soon as Membrane supports it.

If you need someone to test a future integration with the RPi camera module, you can always ping me.