This depends on a few things.

The device chosen

A Raspberry Pi 3 B+, for example, boots roughly 4 times faster than a Raspberry Pi Zero, mostly because it has four cores instead of one and a much faster CPU.
The code written

Because Nerves applications make such heavy use of GenServers compared to a stateless web app, how those GenServers initialize has a real effect on application boot time. Here's an example. (P.S. I omitted the GenServer boilerplate functions for clarity.)
This code initializes some thing (say, the camera?) in the `init/1` GenServer callback. This will block booting the next server in the supervision tree:

```elixir
def init(args) do
  initial_value = SomeNameSpace.resource_intensive_initialization()
  {:ok, %{value: initial_value}}
end
```
This code initializes the state data without actually doing the resource-intensive work up front:

```elixir
def init(args) do
  # Ask OTP to send this process a `:timeout` message in 0 ms
  {:ok, %{value: nil, initialized: false}, 0}
end

# Called when a timeout is given as the last element of the `init/1` tuple.
def handle_info(:timeout, %{value: nil, initialized: false} = state) do
  initial_value = SomeNameSpace.resource_intensive_initialization()
  {:noreply, %{state | initialized: true, value: initial_value}}
end
```
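As an aside, on OTP 21+ the same deferral can be done with `handle_continue/2`, which avoids the `:timeout` message racing with anything else that lands in the mailbox. A sketch, reusing the hypothetical `SomeNameSpace` module from above:

```elixir
def init(args) do
  # Tell OTP to invoke handle_continue/2 immediately after init/1 returns,
  # before processing any other messages in the mailbox.
  {:ok, %{value: nil, initialized: false}, {:continue, :initialize}}
end

def handle_continue(:initialize, state) do
  # The slow work happens here, so the supervisor isn't blocked,
  # and no stray message can sneak in ahead of it.
  initial_value = SomeNameSpace.resource_intensive_initialization()
  {:noreply, %{state | initialized: true, value: initial_value}}
end
```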
Obviously you will have to account for this sort of thing in the other GenServers in the system. When building Nerves applications, I find a firm understanding of OTP principles really helps.
Dependencies
Every dependency you add has the possibility of adding to the boot time. This is unfortunate, but it's reality. Do you really need that library for adding numbers together? Maybe you can spin that yourself?
I feel as if I somewhat answered this in the first section, but for the most part one should think of Nerves as a fairly standard OTP release. It is a normal Elixir application. If you just do `mix nerves.new hello_nerves`, there are no runtime Nerves dependencies. On an RPi 3 I find a basic application will come up in about 10 seconds. That said, there are things you won't have any control over:
- Network connection time - the time it takes from boot to getting connected to the internet/network
- Network latency - the time it takes to actually get your picture data from point a to point b
- Hardware device initialization (camera again as a concrete example)
I want to emphasize that one should not do what I did with TurtleTube for anything you care about. I'll briefly summarize the hacks employed in this short project:
“Video” streaming
"Video" is a facade. What's really happening is that an image is being captured as fast as possible and dispatched to the server.
Transport mechanism
I literally just `Base.encode64(jpeg_data_from_camera)` and sent that over a Phoenix Channel. On the client (JavaScript) side, I'm just replacing the contents of an `<img>` tag with that image. This will not scale, and you can really see the lag if I'm, say, uploading new firmware (a relatively resource-intensive task).
Security
what?
Now, this all isn't to say Nerves isn't the right tool for this job, but I would not feel comfortable ever selling it, haha.
Final thoughts
A motorized camera was mentioned. One could use a simple "servo"-type stepper motor to do this easily. It can fairly simply be controlled via an Arduino, or even by the Nerves device's GPIO.
Another thought I had for boot time/network speed is that the newest Raspberry Pi 3 B+ supports Power over Ethernet, meaning power and networking in the same cable. I believe this also opens the door for sleep/hibernate, which is essentially no power consumption while still being "on", meaning you won't have to reinitialize on every boot. I don't know a ton about this, but it's certainly something to keep in mind.
Disk space is another concern that came to mind. What happens while offline? Still capture and buffer locally? SD cards are not particularly well suited for heavy video writes. Single images are usually fine, though. This adds another bit of complexity, however.
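A minimal sketch of that buffering idea, assuming one JPEG per file. The module name, the `@buffer_dir` path, and the `upload_fun` callback are all hypothetical:

```elixir
defmodule ImageBuffer do
  # Hypothetical writable location on the device's data partition.
  @buffer_dir "/root/image_buffer"

  # While offline, write each captured frame to the SD card,
  # named by capture time so they flush in order.
  def store(jpeg) do
    File.mkdir_p!(@buffer_dir)
    name = "#{System.system_time(:millisecond)}.jpg"
    File.write!(Path.join(@buffer_dir, name), jpeg)
  end

  # Once the network comes back, hand each buffered frame to the
  # caller-supplied upload function, deleting it after upload.
  def flush(upload_fun) do
    @buffer_dir
    |> File.ls!()
    |> Enum.sort()
    |> Enum.each(fn name ->
      path = Path.join(@buffer_dir, name)
      upload_fun.(File.read!(path))
      File.rm!(path)
    end)
  end
end
```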
Anyway, I like the idea and would be interested to see what others have to say.