Nerves device as an IP Camera?

I recently purchased one of these from Amazon. It says it does not store any video footage on their servers, but how do we know it doesn’t? So it got me thinking: what’s the viability of using Nerves to create a simple IP cam?

Some things that may need to be taken into account…

  • What is the Nerves boot-up time? (My IP cam is always off except when I pop out, i.e. the times I want to keep an eye on the house :lol:, so I only switch it on when needed, and that’s what I’d want to do here.)
  • Can you easily configure Nerves to run an app on boot? (So you switch this device on, it boots up, loads our app and starts broadcasting video, or sending intermittent images (like a webcam), to an FTP server.)

Optional nice to haves would be…

  • Motorised camera, so you can look around! (I can do that on the IP cam I’ve bought, though I don’t use it - the initial starting position is fine.)

@ConnorRigby created TurtleTube! so most of what is needed has already been done :smiley:

Connor, you could create and sell this! Lots of comments in those IP Cams worried about their videos being hacked/stored on somebody else’s servers - if they know the code is open source and easily inspectable, it could be an attractive alternative :003:


This depends on a few things

The device chosen

A Raspberry Pi 3 B+, for example, boots roughly four times faster than a Raspberry Pi Zero, since it has four cores (versus the Zero’s one) and a much faster clock.

The code written

Because Nerves applications make such heavy use of GenServers compared to a stateless web app, many GenServers can have an effect on application boot time. Here’s an example. (P.S. I omitted the GenServer boilerplate functions for clarity.)

This code initializes something (say, the camera?) in the init/1 GenServer callback. This blocks booting the next server in the supervision tree.

def init(args) do
  initial_value = SomeNameSpace.resource_intensive_initialization()
  {:ok, %{value: initial_value}}
end

This code initializes the state data without actually doing the resource-intensive work up front.

def init(args) do
  # Send this process a `:timeout` message after 0 ms
  {:ok, %{value: nil, initialized: false}, 0}
end

# Called because an integer timeout was returned as the last element of the `init/1` tuple.
def handle_info(:timeout, %{value: nil, initialized: false} = state) do
  initial_value = SomeNameSpace.resource_intensive_initialization()
  {:noreply, %{state | initialized: true, value: initial_value}}
end
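Since OTP 21 / Elixir 1.7 there is an even cleaner way to get the same deferred init: the handle_continue/2 callback. A minimal sketch - the module name and the fake slow init below are made up for illustration:

```elixir
defmodule Camera.Worker do
  use GenServer

  def start_link(args), do: GenServer.start_link(__MODULE__, args, name: __MODULE__)

  @impl true
  def init(_args) do
    # Return immediately so the supervisor can move on to the next child...
    {:ok, %{value: nil, initialized: false}, {:continue, :heavy_init}}
  end

  @impl true
  def handle_continue(:heavy_init, state) do
    # ...then do the slow work, guaranteed to run before any other message.
    value = slow_camera_init()
    {:noreply, %{state | value: value, initialized: true}}
  end

  # Stand-in for SomeNameSpace.resource_intensive_initialization/0.
  defp slow_camera_init do
    Process.sleep(50)
    :camera_handle
  end
end
```

Unlike the timeout-0 trick, the continue callback cannot be pre-empted by another message arriving first.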

Obviously you will have to account for this sort of thing in the other GenServers in the system. When building Nerves applications, I find a firm understanding of OTP principles really helps.


Every dependency you add can potentially add to the boot time. This is unfortunate, but it’s reality. Do you really need that library for adding numbers together? Maybe you can spin that yourself?

I feel as if I somewhat answered this in the first section, but for the most part, one should think of Nerves as a fairly standard OTP release. It is a normal Elixir application. If you just do mix nerves.new hello_nerves, there are no runtime Nerves dependencies. On an RPi 3 I find a basic application will come up in about 10 seconds. That said, there are things you won’t have any control over:

  • Network connection time - the time it takes from boot to getting connected to the internet/network
  • Network latency - the time it takes to actually get your picture data from point a to point b
  • Hardware device initialization (camera again as a concrete example)
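On the first point, nothing can stream until the network is up, so the app usually has to block and wait for it. One naive way to do that is to poll until name resolution succeeds - the host and timeout below are arbitrary illustrations, not anything Nerves prescribes:

```elixir
defmodule NetWait do
  # Poll until we can resolve a well-known host, or give up after `timeout_ms`.
  # A stand-in for whatever "network is up" check your deployment really uses.
  def await(host \\ ~c"pool.ntp.org", timeout_ms \\ 30_000) do
    deadline = System.monotonic_time(:millisecond) + timeout_ms
    poll(host, deadline)
  end

  defp poll(host, deadline) do
    case :inet.gethostbyname(host) do
      {:ok, _} ->
        :ok

      {:error, _} ->
        if System.monotonic_time(:millisecond) > deadline do
          {:error, :timeout}
        else
          Process.sleep(500)
          poll(host, deadline)
        end
    end
  end
end
```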

I want to emphasize that one should not do what I did with TurtleTube for anything you care about. I’ll briefly summarize the hacks employed in this short project:

“Video” streaming

“Video” is a facade. What’s really happening is that an image is captured as fast as possible and dispatched to the server.

Transport mechanism

I literally just Base.encode64(jpeg_data_from_camera) and sent that over a Phoenix Channel. On the client (JavaScript) side, I’m just replacing an <img> tag with the contents of that image. This will not scale, and you can really see the lag if I’m, say, uploading new firmware (a relatively resource-intensive task).
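For the curious, the whole transport trick boils down to this (the channel push itself is omitted; note that Base64 adds roughly 33% size overhead, which is one reason it can’t scale):

```elixir
defmodule FrameCodec do
  # Encode a JPEG frame so it can travel in a text-only channel payload.
  def encode(jpg), do: Base.encode64(jpg)

  # The JS client does the inverse and stuffs the result into an <img> src.
  def decode(b64), do: Base.decode64!(b64)
end
```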



Now, this all isn’t to say Nerves isn’t the right tool for the job, but I would not feel comfortable ever selling it, haha.

Final thoughts

A motorized camera was mentioned. One could use a simple “servo”-type motor to do this easily. It can fairly simply be controlled via an Arduino, or even by the Nerves device’s GPIO.
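As a rough sketch of the servo side: a standard hobby servo expects a 50 Hz signal with a 1–2 ms pulse, so the only real math is mapping an angle onto a pulse width. The module below is illustrative only; feed its result to whatever PWM backend you actually have (a hardware PWM pin, pigpio, or an Arduino over UART):

```elixir
defmodule Servo do
  # Map 0..180 degrees onto the conventional 1000..2000 µs pulse width
  # used by hobby servos (at a 50 Hz refresh rate).
  def pulse_width_us(angle) when angle >= 0 and angle <= 180 do
    round(1000 + angle / 180 * 1000)
  end
end
```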

Another thought I had for boot time/network speed: the newest Raspberry Pi 3 B+ supports Power over Ethernet, meaning power and networking in the same cable. I believe this also opens the door for sleep/hibernate - essentially no power consumption while still being “on” - meaning you won’t have to reinitialize on every boot. I don’t know a ton about this, but it’s certainly something to keep in mind.

Disk space is another concern that came to mind. What happens while offline? Still capture and buffer locally? SD cards are not particularly well suited to heavy video writes; single images are usually fine, though. This adds another bit of complexity, however.
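If you did buffer locally, one crude way to cap SD-card usage is a fixed-size ring of image files: write the new frame, then prune the oldest. The naming scheme and limit below are made up for illustration:

```elixir
defmodule RingStorage do
  # Write a frame to `dir`, then delete the oldest files beyond `max_files`.
  # Zero-padded monotonic sequence numbers keep lexicographic sort == age order.
  def save(dir, jpg, max_files \\ 500) do
    seq = :erlang.unique_integer([:positive, :monotonic])
    name = String.pad_leading(Integer.to_string(seq), 20, "0") <> ".jpg"
    File.write!(Path.join(dir, name), jpg)

    dir
    |> File.ls!()
    |> Enum.sort()
    # Drop the newest `max_files` from the list; what's left is the old
    # files we want to remove.
    |> Enum.drop(-max_files)
    |> Enum.each(&File.rm!(Path.join(dir, &1)))
  end
end
```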

Anyway, I like the idea and would be interested to see what others have to say.


Thanks for the very in-depth reply Connor :smiley:

That’s great imo - it must take about that long for my current camera anyway, and I don’t think the time it takes to turn on is going to be an issue for people who want this sort of (more private) camera.

That is interesting too, though again, I think 10 seconds (or even up to a minute) would probably be fine - as long as it was relatively stable, so you can turn it on and by the time you have left the house it is working.

I’m not personally bothered by offline recording, chances are if someone did break in they would destroy or steal the camera anyway.

Re scaling, perhaps the Nerves app could be configured to take an image at different intervals - so if you’re only going to be out for a few hours there could be a higher frame rate, but if you were going on holiday, maybe one image every 30 to 60 seconds.
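That could be as simple as a GenServer whose capture timer is a runtime setting. A sketch, with all names and the capture placeholder invented for illustration:

```elixir
defmodule Cam.Capture do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  # Change the capture interval at runtime, e.g. 1_000 ms while out for a
  # few hours, 30_000 ms while on holiday.
  def set_interval(ms), do: GenServer.cast(__MODULE__, {:set_interval, ms})

  @impl true
  def init(opts) do
    interval = Keyword.get(opts, :interval_ms, 1_000)
    Process.send_after(self(), :capture, interval)
    {:ok, %{interval: interval}}
  end

  @impl true
  def handle_cast({:set_interval, ms}, state), do: {:noreply, %{state | interval: ms}}

  @impl true
  def handle_info(:capture, state) do
    # Placeholder: swap in Picam.next_frame/0 plus your upload of choice.
    _jpg = capture_frame()
    Process.send_after(self(), :capture, state.interval)
    {:noreply, state}
  end

  defp capture_frame, do: <<>>
end
```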

The Nerves app could also handle when to delete online copies, ensuring you don’t run out of space in your hosting (though obviously for us we could set up a cron to handle that).

I can actually see a whole community building around this sort of thing - when I looked at IP Cams a lot of people were grumbling that most are now cheaply manufactured devices which rely on the ‘cloud’ /servers abroad (so may not have the same kind of privacy laws).

Shameless self-promotion: I have an upcoming training course at Lone Star ElixirConf where we build an IP camera with a Raspberry Pi Zero, which can stream video and scan barcodes. It’s all controlled via a GraphQL API (mostly just to show how to do so).

The TL;DR for those who can’t make it is to check out the Picam library:

I’ve been working on adding some more-advanced features to that library, but it already supports quite a few useful things.


Great topic as there is no way I would trust any third party with such data, so if I ever buy cameras I’d like them to be able to function in my home network.

I think boot up time is not that important as you won’t be away in a second. Even a one minute boot would be okay.

@ConnorRigby why send the images in base64? Are the Raspberry Pis too slow to perform at least some compression before sending the data over the wire? For a home network the bandwidth is not a big problem, and a bigger computer could receive the images and encode them into a video, but it’s still kind of miserable, as most of the time the image won’t change at all. At the very least the Pi should send a new image (with a timestamp) only when the difference from the last one exceeds a given threshold, and ideally only when that diff is big enough that it could be a human and thus a potential thief/threat (who cares if a bird or a cat goes in front of the camera for a while?).

This diff algorithm optimised for threat detection is really the killer service I would expect from such libraries :slight_smile:
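A naive version of that diff might look like the following, assuming raw (uncompressed, equal-size) frames - comparing compressed JPEG bytes would be meaningless:

```elixir
defmodule Motion do
  # Fraction of bytes whose value changed by more than `tolerance`
  # between two equal-size raw frames.
  def diff_ratio(frame_a, frame_b, tolerance \\ 10)
      when byte_size(frame_a) == byte_size(frame_b) do
    pairs = Enum.zip(:binary.bin_to_list(frame_a), :binary.bin_to_list(frame_b))
    changed = Enum.count(pairs, fn {a, b} -> abs(a - b) > tolerance end)
    changed / length(pairs)
  end

  # "Could be a human" is then just a threshold on that ratio; telling a cat
  # from a burglar would need a real detection model, not this.
  def motion?(a, b, threshold \\ 0.05), do: diff_ratio(a, b) > threshold
end
```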

Oh, and it’s always good to keep some offline data, at least the last couple of hours (that won’t take much space with the aforementioned algorithm). The raspberry pi would be enclosed in the wall so people could destroy the camera but not the brains.


Because on the device it’s a one-liner to encode as base64 and on the client it’s a one-liner to decode, and Phoenix Channels don’t support binary data easily. I built that project in about 45 minutes, start to finish, more as a joke than anything. I never expected it to perform well, last long, scale, or anything like that.


That’s awesome Greg! Good luck with the course :023: Perhaps you could do a blog post or something on the topic afterwards? I think a lot of people might be interested in this - not just in this community but the wider IP Cam community.

The Picam library looks awesome :smiley:

I would wall it off in the LAN and record its RTSP stream: connect the IP cam to the Pi via a USB-Ethernet adapter, connect the Pi to the internet via the on-board Ethernet port, and run an Elixir app to proxy & record the RTSP stream. Right now I’m basically building a much more feature-complete version of this.


@AstonJ, did you get anywhere with this? I want to do a similar thing.

Maybe you or somebody else would like to cooperate on something?


I didn’t sorry @kanonk

However have you seen the library by @GregMefford?

@ConnorRigby might have some pointers too since he created TurtleTube :smiley:


This is a really basic Cowboy 1.x loop handler to create an MJPEG stream from Picam:

defmodule Interface.UI.Streamer do
  @boundary "w58EW1cEpjzydSCq"

  def init({:tcp, :http}, req, _opts) do
    headers = [
      {"cache-control", "no-cache, private"},
      {"content-type", "multipart/x-mixed-replace; boundary=#{@boundary}"},
      {"pragma", "no-cache"},
      {"Access-Control-Allow-Origin", "*"}
    ]

    {:ok, req2} = :cowboy_req.chunked_reply(200, headers, req)
    Process.send_after(self(), :handle, 0)
    {:loop, req2, %{}}
  end

  def info(:handle, req, state) do
    send_image(req)
    # Schedule the next frame and keep the loop handler alive.
    Process.send_after(self(), :handle, 0)
    {:loop, req, state}
  end

  def send_image(req) do
    jpg = Hardware.Camera.image()
    size = byte_size(jpg)
    header = "------#{@boundary}\r\nContent-Type: image/jpeg\r\nContent-length: #{size}\r\n\r\n"
    footer = "\r\n"
    :cowboy_req.chunk(header, req)
    :cowboy_req.chunk(jpg, req)
    :cowboy_req.chunk(footer, req)
  end
end

Picam has been abstracted here to allow for testing, and for running on hardware where the camera isn’t available, but you could replace Hardware.Camera.image() with Picam.next_frame().


Thanks, @AstonJ and @entone. I have something to work with :slight_smile:


What I’m building is unrelated to Nerves: I’m pulling an RTSP H.264/H.265 video stream and repackaging it for delivery over WebSocket, and the web page uses WASM to decode the video and render it on a canvas.


It might be difficult to follow what’s going on without the accompanying training, but this is the project repo for the training course I mentioned above:

I was trying to keep it a bit under wraps so as not to spoil the course, but at this point I think I’m about done doing that training unless someone really asks nicely for me to do it again, because it’s a lot of work to prepare for it. :sweat_smile:


Does anyone know about a more recent project or library to build a video IP Camera with Elixir?


I’ve been meaning to find a good language/framework, and after evaluating the performance of other people’s projects in Elixir/Cowboy/Phoenix I wanted to make a cloud NVR and a DJ mix/production system. I have written these in other languages over the years, and I strangely notice some C-based parallels in some of the syntax. For video, I want to try the <video> HTML tag, but use an OS logical loopback device buffer and a dummy.mp4 file - though there are other options too.


Do you have links for those projects you have evaluated?

The Picam project still works perfectly, FWIW.

Evaluating Erlang, I went through trial and error benchmarking the various Erlang projects.

The program that inspired me to explore writing a cloud NVR in Elixir/Erlang is a PHP/C++ project called ZoneMinder (zoneminder/web/ajax at master · ZoneMinder/zoneminder · GitHub). You see, the drawback of their choosing C++ and a LAMP server is that all the exploits and limitations of that stack apply - besides the obvious lack of sanitization of the $_POST objects in their PHP code.