Prototype to Production Nerves Screencasts


I’ve been loosely planning a series of screencasts for a Nerves-based project for about a year and a half now. I believe I finally have enough content and time to start it.
This is going to be a full stack, open source “Internet of Things” project. Here is an overview of all the parts:

A wireless sensor system

This will be the main feature we build out. Many sensors and actuators will be able to report back to
a central “hub” device (our Nerves application). Plans so far include:

  • Temperature
  • Humidity
  • Light level
  • Remote relays

with the ability for more to be added easily. Each sensor device will be able to receive remote
firmware upgrades, but will otherwise be fairly “dumb”. These devices will not themselves be Nerves
devices; instead, they will be Arduino-based to keep costs down and extend battery life.
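To make the “dumb sensor” idea concrete, here is one way a report from an Arduino device might be parsed on the hub side. The wire format (`device_id;type;value`) and the `SensorReport` module name are purely illustrative assumptions, not a settled design:

```elixir
# Hypothetical line format for a sensor report: "device_id;type;value",
# e.g. "42;temp;23.50". The format and module name are illustrative only.
defmodule SensorReport do
  defstruct [:device_id, :type, :value]

  @doc "Parses one report line into a struct, or returns :error."
  def parse(line) when is_binary(line) do
    case String.split(String.trim(line), ";") do
      [raw_id, type, raw_value] ->
        with {device_id, ""} <- Integer.parse(raw_id),
             {value, ""} <- Float.parse(raw_value) do
          {:ok, %SensorReport{device_id: device_id, type: type, value: value}}
        else
          _ -> :error
        end

      _ ->
        :error
    end
  end
end

{:ok, report} = SensorReport.parse("42;temp;23.50")
```

Keeping the on-wire format this simple is what lets the sensor side stay “dumb”: an Arduino can emit a line like this with a couple of `print` calls.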

A central “hub” or “gateway”

This will be the Nerves application. The sensors and actuators will report their data to, and receive commands
from, this “hub” device. Initially, we will target the Raspberry Pi Zero W, again to keep costs down. Eventually we may add support for other devices.

The device will be developed “offline first”. This means it will normally be “connected”, but will
still work when disconnected. The fleet of devices will also be managed by NervesHub, which is how we will push updates to deployed devices (i.e. devices that would have been purchased).

A cloud web application

This will be a Phoenix based web app. I personally am not a web or JS developer, so it will likely not look
the greatest at first. The backend services will be the main focus of the series. Things the web app should
be in charge of:

  • User registration/auth
  • Device registration/auth
  • Serving/receiving RESTful assets
  • Serving a frontend of some sort
  • Handling many simultaneous device connections via Phoenix channels
  • Handling many simultaneous user/frontend connections via Phoenix channels
  • ???

Initially the app will likely be deployed on DigitalOcean or Vultr. We won’t use Heroku or Gigalixir because
they terminate SSL connections before they reach the application layer, and we need end-to-end SSL to validate devices against NervesHub. Handling SSL ourselves will greatly simplify the logic the web app needs to validate devices.

Target Audience

No Nerves knowledge should be required, but I will make some assumptions when getting started with the streams. We won’t be doing anything “advanced” in either main application, but a basic knowledge of Elixir syntax and of working within Mix and ExUnit will be expected. I’ll also assume a basic knowledge of Phoenix, or at least of how an MVC framework works. I myself am certainly not an expert in web apps or Phoenix, so expertise there is definitely not required.


I haven’t started anything on this yet, including a name, domain, etc. I plan on doing all the bootstrapping live. That said, I could use some help picking a name for this project.


There will probably be more casts at first to get the app bootstrapped and to handle “devops” things like setting up the domains, Droplets, etc. These first few streams will likely be a little dense. Once up and running, I will likely do one or two screencasts a week, focusing on one or two small features at a time. This is mostly to prevent burnout, and depending on how I feel, we may do more or fewer than this.

I will be streaming live on Twitch as I’ve used it before (I’m open to other streaming sites if someone makes a good argument). I will also publish recorded casts to YouTube, as I don’t believe Twitch stores streams once they go offline. I haven’t decided when the first stream will be, but I’m hoping to get started some time this week or next. I’m thinking I will stream around 6 PM Pacific time, but am open to suggestions for what time works best for the majority of viewers.

Licensing and other Copyright stuff

I plan on releasing all code as either MIT or Apache-2.0. A final decision will be made by the first stream.


Sounds great Connor! I love screencasts/videos :lol:


That’s a fantastic idea, and exactly the project I want to tackle as a Nerves newbie. Looking forward to the series!


Agree with everyone here, this is fantastic. Streaming in general adds a ton of value and I love that the top Elixir guys (you included) are very open to doing this now.

Sometimes it is hard to gauge the value added. Please don’t be discouraged if you don’t get many views. I believe this is important to note, since all in all the market of people looking at Nerves content is not that big. With that said, the impact you can have on even one person is a better metric in my opinion.

That impact may also not come from what you expect. Value could come from something small, completely unrelated to what you are trying to relay at the time. Maybe you are trying to talk about GenServers and you create a .iex.exs file with some code to speed up an iex session instead of copying/pasting or writing stuff again. You blow someone’s mind without knowing about it. This is the true value of streaming I think, to capture the things you don’t know people don’t know.

Thanks for doing this. I just don’t want folks to be discouraged when streaming and love the fact that you are stepping up and giving back to the community. Other folks thinking about streaming, just do it please!

Rock on @ConnorRigby!


Thanks for the support guys.
I plan on doing a test stream on Friday afternoon, I think. This will mostly be setup of the screen capture and camera and whatnot. If I have time, I may also set up domains, GitHub repos, and other such things.
I still need to choose a name, so hopefully something comes up by Friday… :slightly_smiling_face:


I may continue with my plan which is almost identical to yours. I have other ideas as well, so I don’t know yet. Good luck! I’ll try to catch some of the twitch streams.


This will be a good forum for me to understand Nerves more, since I plan on using it a lot in the near future.


@ConnorRigby keen, ready to follow with my Pi Zero :smiley: chuffed to have the same hardware


That’s the idea! I want to keep everything cheap so folks can actually build their own if they want.


Alright, I lied about the test stream being today. It will actually officially be tomorrow afternoon.
Again, this first stream will not be super interesting: mostly setup and various bookkeeping things.

Stay tuned


It would be great if you could use the MQTT protocol with Elixir for device-to-server communication. I have followed you on Twitch; please start streaming, it will be very helpful for a noob like me (in Elixir :stuck_out_tongue:).


Please share your Twitch profile so we can catch your streams on Elixir.


I plan on using Phoenix channels, as they don’t require an extra service to be run. Maybe we can look at using MQTT as a transport for channels, though.
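To illustrate the “no extra service” point: Phoenix channels ride on pub/sub machinery the BEAM already provides, whereas MQTT needs a separate broker process. A toy sketch of that idea using only `Registry` from the Elixir standard library (the `SensorBus` name and topic strings are made up for illustration):

```elixir
# A toy pub/sub in plain Elixir, hinting at why channels don't need an
# external broker like an MQTT server -- the BEAM routes messages itself.
{:ok, _} = Registry.start_link(keys: :duplicate, name: SensorBus)

# A subscriber (here, the current process) registers interest in a topic...
{:ok, _} = Registry.register(SensorBus, "sensors:temp", [])

# ...and a publisher dispatches to every process subscribed to it.
Registry.dispatch(SensorBus, "sensors:temp", fn entries ->
  for {pid, _} <- entries, do: send(pid, {:reading, 23.5})
end)

receive do
  {:reading, value} -> IO.puts("got #{value}")
after
  100 -> IO.puts("no reading")
end
```

Phoenix channels layer topics, joins, and transports on top of this kind of routing, so there is nothing extra to deploy alongside the app.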


Hey :wave: did you manage to do the stream :smile:

I did a test stream Sunday night where I did a bunch of boring stuff like purchasing a domain and setting up a GitHub org and repos. The next one will be the first real stream where we make actual progress. It will be Saturday or Sunday.

Super duper keen :pray:


This is getting even more interesting now :grinning:
I think the main challenge will be the device management aspect; for that, I suggest looking into the OMA LwM2M standard, which uses CoAP over UDP, though there is also ongoing work to run it on top of MQTT/TCP.

There are plenty of LwM2M client libraries available for embedded devices, but not so many on the BEAM;
the only implementations I know of are: which serve as a base for

By the way, that could be another idea: deploy a (light) server on that central hub or gateway of yours.

Back to my original point, you could look into:
< lwm2m client> ----- udp ------ < lwm2m server/ your central Hub> ------ channels/whatever ---- < phoenix app>

The Phoenix app would then be able to implement device-management calls based on the LwM2M implementation …

I think that would be a winning architecture.
Extra info on the measurements or observations: LwM2M is aligned with the SenML RFC, which is another spec for representing sensor measurements and device parameters.


I think these are all pretty good ideas, but I will probably keep it a bit simpler than this, at least to start out. I’ve used CoAP before and it’s more complex than I have initially planned for this project, especially since I plan on building this as a mostly “offline first” system.


Going to be going live today (2018-12-22) at around 12:30 PM Pacific time.

Plan for today is to get an initial “hello world” communication between the Raspberry Pi gateway and one Arduino-based sensor. There are a few different avenues we could take to get a communication mechanism going, and I hope to explore a couple of them. I’m not 100% sure which method we will eventually end up using, but this “episode” will officially be the first in the series.
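For anyone following along, here is one hypothetical shape such a “hello world” exchange could take: a tiny length-prefixed frame with an XOR checksum that an Arduino can cheaply compute and verify. The `Frame` module and the commented `Circuits.UART` calls are assumptions about one possible avenue, not the method the stream will necessarily settle on:

```elixir
defmodule Frame do
  import Bitwise

  # Encode a payload as <<length, payload, checksum>>, where the checksum
  # is the XOR of all payload bytes -- trivial to reproduce on an Arduino.
  def encode(payload) when is_binary(payload) and byte_size(payload) <= 255 do
    <<byte_size(payload), payload::binary, checksum(payload)>>
  end

  # Decode a frame back into its payload, verifying the checksum.
  def decode(<<len, payload::binary-size(len), sum>>) do
    if checksum(payload) == sum, do: {:ok, payload}, else: {:error, :bad_checksum}
  end

  def decode(_), do: {:error, :truncated}

  defp checksum(payload) do
    for <<byte <- payload>>, reduce: 0 do
      acc -> bxor(acc, byte)
    end
  end
end

# On the Nerves side, something like Circuits.UART could carry the frames
# over the Pi's serial port (port name and speed are assumptions):
#
#   {:ok, uart} = Circuits.UART.start_link()
#   :ok = Circuits.UART.open(uart, "ttyAMA0", speed: 9600)
#   :ok = Circuits.UART.write(uart, Frame.encode("hello"))

{:ok, "hello"} = Frame.decode(Frame.encode("hello"))
```

The checksum matters even for a “hello world”: serial lines to hobby hardware drop and corrupt bytes, and a framed protocol makes that visible instead of silent.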

Stay tuned!


I was looking for your test stream on your Twitch profile, but can’t see it. Shouldn’t Twitch keep it available for 14 days, or is this expected behavior?

I’m asking because I’m wondering whether or not I would be able to watch your streams later if I’m not able to watch it live.

1 Like