Nerves device as an IP Camera?

nerves
hardware
video-streaming

#1

I recently purchased one of these from Amazon. The listing says it does not store any video footage on the company’s servers, but how do we know it doesn’t? It got me thinking - what’s the viability of using Nerves to create a simple IP cam?

Some things that may need to be taken into account…

  • What is the Nerves boot-up time? (My IP cam is always off except when I pop out, i.e. the times I want to keep an eye on the house :lol:, so I only switch it on when needed, and that’s what I’d want to do here.)
  • Can you easily configure Nerves to run an app on boot? (So you switch the device on, it boots up, loads our app, and starts broadcasting video or sending intermittent images (like a webcam) to an FTP server.)

Optional nice to haves would be…

  • Motorised camera, so you can look around! (I can do that on the IP cam I’ve bought, though I don’t use it; the initial starting position is fine.)

@ConnorRigby created TurtleTube! so most of what is needed has already been done :smiley:

Connor, you could create and sell this! Lots of comments in those IP Cams worried about their videos being hacked/stored on somebody else’s servers - if they know the code is open source and easily inspectable, it could be an attractive alternative :003:


#2

This depends on a few things

The device chosen

A Raspberry Pi 3 B+, for example, boots about 4 times faster than a Raspberry Pi Zero because it has four cores instead of one.

The code written

Because Nerves applications make such heavy use of GenServers compared to a stateless web app, having many GenServers can have an effect on application boot time. Here’s an example. (PS: I omitted the GenServer fluff functions for clarity.)

This code initializes something (say, the camera?) in the `init/1` GenServer callback. This blocks the supervisor from starting the next child until it finishes.

def init(_args) do
  initial_value = SomeNameSpace.resource_intensive_initialization()
  {:ok, %{value: initial_value}}
end

This code initializes the state data without actually doing the resource-intensive work up front:

def init(_args) do
  # Returning 0 as the timeout sends this process a `:timeout` message immediately after `init/1`
  {:ok, %{value: nil, initialized: false}, 0}
end

# Called because an integer timeout was returned as the last element of the `init/1` tuple.
def handle_info(:timeout, %{value: nil, initialized: false} = state) do
  initial_value = SomeNameSpace.resource_intensive_initialization()
  {:noreply, %{state | initialized: true, value: initial_value}}
end
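
On recent Elixir/OTP versions (OTP 21+) the same deferred-initialization idea can be written with `handle_continue/2`, which avoids the chance of another message arriving before the `:timeout`. A minimal sketch of that variant, reusing the same placeholder names:

def init(_args) do
  # Defer the expensive work until after `init/1` returns, without racing other messages
  {:ok, %{value: nil, initialized: false}, {:continue, :finish_init}}
end

def handle_continue(:finish_init, state) do
  initial_value = SomeNameSpace.resource_intensive_initialization()
  {:noreply, %{state | initialized: true, value: initial_value}}
end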

Obviously you will have to account for this sort of thing in other GenServers in the system. When building Nerves applications, I find a firm understanding of OTP principles really helps.

Dependencies

Every dependency you add has the potential to block boot. This is unfortunate, but it’s reality. Do you really need that library for adding numbers together? Maybe you can spin that yourself?

I feel as if I somewhat answered this in the first section, but for the most part one should think of Nerves as a fairly standard OTP release. It is a normal Elixir application. If you just do mix nerves.new hello_nerves, there are no runtime Nerves dependencies. On an RPi3 I find a basic application will come up in about 10 seconds. That said, there are things you won’t have any control over:

  • Network connection time - the time it takes from boot to getting connected to the internet/network
  • Network latency - the time it takes to actually get your picture data from point a to point b
  • Hardware device initialization (camera again as a concrete example)

I want to emphasize that one should not do what I did with TurtleTube for anything you care about. I’ll briefly summarize the hacks employed in this short project:

“Video” streaming

“Video” is a facade. What’s really happening is an image is being captured as fast as possible and dispatched to the server.

Transport mechanism

I literally just did Base.encode64(jpeg_data_from_camera) and sent that over a Phoenix Channel. On the client (JavaScript) side, I’m just replacing the contents of an <img> tag with that image. This will not scale, and you can really see the lag if I’m, say, uploading new firmware (a relatively resource-intensive task).
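
For anyone curious, the hack amounts to roughly the sketch below. It is not TurtleTube’s actual code: the module, topic, and event names are made up, and it assumes a running Phoenix endpoint plus the Picam library (mentioned later in this thread) for frame capture.

defmodule MyCam.Streamer do
  @moduledoc "Naive 'video' loop: grab a JPEG, base64-encode it, broadcast it on a channel topic."
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  def init(_opts) do
    send(self(), :capture)
    {:ok, %{}}
  end

  def handle_info(:capture, state) do
    # Picam.next_frame/0 returns a JPEG binary when Picam.Camera is running in the supervision tree
    jpeg = Picam.next_frame()

    # Push the frame to every subscriber of the "camera:lobby" topic
    MyCamWeb.Endpoint.broadcast("camera:lobby", "frame", %{image: Base.encode64(jpeg)})

    # Immediately queue the next capture: "as fast as possible"
    send(self(), :capture)
    {:noreply, state}
  end
end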

Security

what?

Now, this all isn’t to say Nerves isn’t the right tool for the job, but I would not feel comfortable ever selling it haha.

Final thoughts

A motorized camera was mentioned. One could use a simple hobby servo (or a small stepper motor) to do this easily. It can fairly simply be controlled via an Arduino, or even from the Nerves device’s GPIO.

Another thought I had for boot time/network speed: the newest Raspberry Pi 3 B+ supports Power over Ethernet (via the official PoE HAT), meaning power and networking in the same cable. I believe this also opens the door for sleep/hibernate, which is essentially no power consumption while still being “on”, meaning you won’t have to reinitialize on every boot. I don’t know a ton about this, but it’s certainly something to keep in mind.

Disk space is another concern that came to mind. What happens while offline? Still capture and buffer locally? SD cards are not particularly well suited to heavy video writes. Single images are usually fine, though. This adds another bit of complexity, however.
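
If someone did want local buffering, a crude approach is a ring buffer of JPEG files on the writable data partition: write each frame under a timestamped name and prune the oldest files once a cap is exceeded. A rough sketch, with the path and cap as placeholders:

defmodule MyCam.RingBuffer do
  @dir "/data/frames"
  @max_frames 1_000

  # Write one JPEG frame, then drop the oldest files once we exceed @max_frames
  def store(jpeg) when is_binary(jpeg) do
    File.mkdir_p!(@dir)
    File.write!(Path.join(@dir, "#{System.system_time(:millisecond)}.jpg"), jpeg)
    prune()
  end

  defp prune do
    files = @dir |> File.ls!() |> Enum.sort()
    excess = length(files) - @max_frames

    if excess > 0 do
      files
      |> Enum.take(excess)
      |> Enum.each(&File.rm!(Path.join(@dir, &1)))
    end
  end
end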

Anyway, I like the idea and would be interested to see what others have to say.


#3

Thanks for the very in-depth reply Connor :smiley:

That’s great imo - it must take about that long for my current camera anyway, and I don’t think the time it takes to turn on is going to be an issue for people who want this sort of (more private) camera.

That is interesting too, though again, I think 10 seconds (even up to a minute) would probably be fine, as long as it was relatively stable, so you turn it on and by the time you have left the house it is working.

I’m not personally bothered by offline recording, chances are if someone did break in they would destroy or steal the camera anyway.

Re scaling, perhaps the Nerves app could be configured to take an image at different intervals - so if you’re only going to be out for a few hours there could be more FPS, but if you were going on holiday, maybe one image every 30 to 60 seconds.
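
A rough sketch of that idea: a GenServer whose capture interval comes from config and can be changed at runtime. The module names are hypothetical, and it assumes the Picam library mentioned below for capture:

defmodule MyCam.IntervalCapture do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  # Change the capture rate at runtime, e.g. set_interval(60_000) before going on holiday
  def set_interval(ms), do: GenServer.cast(__MODULE__, {:set_interval, ms})

  def init(_opts) do
    interval = Application.get_env(:my_cam, :capture_interval_ms, 1_000)
    send(self(), :capture)
    {:ok, %{interval: interval}}
  end

  def handle_cast({:set_interval, ms}, state), do: {:noreply, %{state | interval: ms}}

  def handle_info(:capture, state) do
    jpeg = Picam.next_frame()
    # Ship the frame wherever you like (FTP, a Phoenix channel, object storage, ...)
    File.write!("/data/latest.jpg", jpeg)
    Process.send_after(self(), :capture, state.interval)
    {:noreply, state}
  end
end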

The Nerves app could also handle when to delete online copies, ensuring you don’t run out of space in your hosting (though obviously for us we could set up a cron to handle that).

I can actually see a whole community building around this sort of thing - when I looked at IP Cams a lot of people were grumbling that most are now cheaply manufactured devices which rely on the ‘cloud’ /servers abroad (so may not have the same kind of privacy laws).


#4

Shameless self-promotion: I have an upcoming training course at Lone Star ElixirConf where we build an IP camera with a Raspberry Pi Zero, which can stream video and scan barcodes. It’s all controlled via a GraphQL API (mostly just to show how to do so). https://lonestarelixir.com/2019/trainers/1#greg-mefford

The TL;DR for those who can’t make it is to check out the Picam library: https://github.com/electricshaman/picam

I’ve been working on adding some more-advanced features to that library, but it already supports quite a few useful things.
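
For reference, basic Picam usage looks roughly like this (recalled from the README, so double-check the repo; it needs a Raspberry Pi with the camera module attached):

# Start the camera process in your supervision tree
children = [
  Picam.Camera
]

Supervisor.start_link(children, strategy: :one_for_one)

# Then, anywhere in your app:
Picam.set_size(1280, 720)    # configure the resolution
jpeg = Picam.next_frame()    # grab the next frame as a JPEG binary
File.write!("/data/frame.jpg", jpeg)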


#5

Great topic as there is no way I would trust any third party with such data, so if I ever buy cameras I’d like them to be able to function in my home network.

I think boot up time is not that important as you won’t be away in a second. Even a one minute boot would be okay.

@ConnorRigby why send the images in base64? Are the Raspberry Pis too slow to perform at least some compression before sending the data over the wire? For a home network the bandwidth is not a big problem, and a bigger computer could receive the images and encode them into a video, but it’s still kind of miserable, as most of the time the image won’t change at all. At the very least the Pi should send a new image (with a timestamp) only when the difference from the last one exceeds a given threshold, and ideally it would do that only when the diff is big enough that it could be a human and thus a potential thief/threat (who cares if a bird or a cat goes in front of the camera for a while?).

This diff algorithm optimised for threat detection is really the killer service I would expect from such libraries :slight_smile:
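
The simplest version of that is plain frame differencing: compare the current frame with the previous one pixel by pixel, and only ship it when the mean difference crosses a threshold. A minimal sketch, assuming you already have raw grayscale frames of equal size (decoding JPEG to grayscale is left to whatever imaging library you use), with the threshold and module name as placeholders:

defmodule MyCam.MotionGate do
  @threshold 12  # mean per-pixel difference (0..255) above which we call it "motion"

  @doc "Returns true when two raw grayscale frames differ enough to be worth sending."
  def motion?(prev, curr)
      when is_binary(prev) and is_binary(curr) and byte_size(prev) == byte_size(curr) do
    total =
      prev
      |> :binary.bin_to_list()
      |> Enum.zip(:binary.bin_to_list(curr))
      |> Enum.reduce(0, fn {a, b}, acc -> acc + abs(a - b) end)

    total / byte_size(curr) > @threshold
  end
end

Telling a human apart from a cat is a much harder problem (object-detection territory) than a raw diff like this, though.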

Oh, and it’s always good to keep some offline data, at least the last couple of hours (that won’t take much space with the aforementioned algorithm). The Raspberry Pi could be enclosed in the wall, so people could destroy the camera but not the brains.


#6

Because on the device it’s a one-liner to encode as base64, and on the client it’s a one-liner to decode base64, and Phoenix Channels don’t support binary data easily. I built that project in about 45 minutes start to finish, more as a joke than anything. I never expected it to perform well, last long, scale, or anything like that.


#7

That’s awesome Greg! Good luck with the course :023: Perhaps you could do a blog post or something on the topic afterwards? I think a lot of people might be interested in this - not just in this community but the wider IP Cam community.

The Picam library looks awesome :smiley:


#8

I would wall it off on the LAN and record its RTSP stream: connect the IP cam to a Pi via a USB-Ethernet adapter, connect the Pi to the internet via the on-board Ethernet port, and run an Elixir app to proxy and record the RTSP stream. Right now I’m basically building a much more feature-complete version of this.