How to build a photobooth with Elixir+Nerves?

So, I finally have a good reason to start tinkering with Nerves! :heart:

I’ve been asked by the student association to cobble together a photobooth for an upcoming party, so I want to take this opportunity to write a Nerves application on top of a Raspberry Pi.

I already found the Picam Elixir/Nerves library to take pictures with the Raspberry Pi’s camera. What I am still looking for is a way to interact with a (touch?)screen from within Elixir/Nerves, because I’d really like to show the camera preview on there.

Should I just go ahead and use the Erlang wx bindings for that? Or is there a better way?
And is there already a way to read touch-screen presses from within Nerves?

4 Likes

I believe this would be a better way, as soon as it becomes available:

2 Likes

the camera is easy to hook up and interface with… I did one with phoenix/nerves and after the Drab boilerplate it took like 5 minutes to have it working - press a button, base64 the image and show it in the browser… (accessed from a desktop browser pointed at the phoenix server)
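a rough sketch of the plain phoenix side of “grab one frame and show it” (untested, module names made up, and skipping the Drab part):

  # module/app names are placeholders - adapt to your own phoenix app
  defmodule PhotoboothWeb.SnapshotController do
    use PhotoboothWeb, :controller

    # grab a single jpeg from the camera and return it as-is;
    # a button on the page can simply (re)load an img pointing at this route,
    # or fetch it and base64 it client-side
    def show(conn, _params) do
      conn
      |> put_resp_content_type("image/jpeg")
      |> send_resp(200, Picam.next_frame())
    end
  end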

so you could do that, and then interface it from an ipad/laptop or similar.

you can then progress it with https://github.com/LeToteTeam/kiosk_system_rpi3 and a screen/touch interface running on the rpi itself…

so that’s probably what I would recommend: you quickly have a working solution, and then you can progress to running it on the pi itself…

4 Likes

It has been quite smooth sailing so far.

  1. Installing Phoenix on a Nerves system is a breeze.
  2. Working with the Raspberry Pi camera using Picam is quite easy.
  3. Using the Kiosk mode is okay, but you will really need a UART-to-USB (FTDI) cable to debug it (since the screen will obviously only show the kiosk webpage).

The only (super minor) gotchas I’ve had are:

  1. My router does not seem to like RPis, so connecting over WLAN is not possible there (I now use my mobile phone as a hotspot).
  2. The kiosk connects using the network interface configured under :nerves_network, :iface rather than whatever you’ve set up as the default in your :nerves_network, :default configuration (see the config sketch after this list).
  3. The kiosk will only load a local page (like the local Phoenix) when using the IP address 0.0.0.0. I don’t know why, but 127.0.0.1 and localhost result in an ERR_NETWORK_CHANGED error page in the webkit kiosk browser.
  4. The Picam library has a nice example of streaming camera images to the browser as an MJPG stream. This works fine when connecting to the Pi from an outside device (and using that device’s browser), but the kiosk webkit variant will only load the first frame of the stream.
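For reference, my wireless setup under :nerves_network, :default looks roughly like this (placeholder SSID/psk, and the exact keys depend on your nerves_network version); the kiosk reads its interface from the separate :iface key, which is what tripped me up:

  # in config/config.exs - placeholder credentials; check the nerves_network
  # docs for the exact keys your version expects
  config :nerves_network, :default,
    wlan0: [
      networks: [
        [ssid: "my-phone-hotspot", psk: "supersecret", key_mgmt: :"WPA-PSK"]
      ]
    ]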

Currently I’m investigating other ways to load the camera feed as a ‘stream’. Right now I’m fetching individual images on an interval, although I think that sending JPEGs over a persistent websocket connection (a Phoenix channel) might be faster.
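The channel variant I have in mind would look roughly like this (untested sketch; module, topic and event names are made up):

  # placeholder module/topic names
  defmodule PhotoboothWeb.CameraChannel do
    use Phoenix.Channel

    def join("camera:preview", _payload, socket) do
      send(self(), :push_frame)
      {:ok, socket}
    end

    def handle_info(:push_frame, socket) do
      # channel payloads are JSON, so the JPEG still has to be Base64-encoded,
      # which is extra work per frame compared to a raw binary websocket
      push(socket, "frame", %{jpeg: Base.encode64(Picam.next_frame())})
      Process.send_after(self(), :push_frame, 100)
      {:noreply, socket}
    end
  end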

4 Likes

The current progress can be tracked over here on GitHub. Do note that this is rushed late-evening code, because it is for a party I will be hosting with a couple of friends less than a week from now; first make it work, then make it clean :slight_smile:.

New developments:

  • I bought the 7" ‘official’ Raspberry Pi screen, which works out of the box with the Kiosk interface. Really nice! :heart_eyes:
  • The system as it is now works. However, there is one thing I’d like to change:

Sending Picam images between the server and the kiosk browser is slow

Currently I get fewer than five frames per second, with a delay of one or two seconds. This is not strange, because the camera is read inside Phoenix (using the Picam library, which wraps a C library via a port), and each of these images is then sent to the front-end over a local socket connection.

At first I thought it was the rendering that was slow, but I actually think it is mostly the copying from the backend to the frontend, which of course scales with the product of width, height and frames per second (so roughly cubically once you try to increase all three).

A better approach would be to use the Picam as a ‘user webcam’ via modern browsers’ getUserMedia functionality. Qt WebEngine supports this. However, for user safety this is only allowed on an https domain or on localhost.

But I currently host the app on http://0.0.0.0 because even when entering http://localhost as the page to connect to, the kiosk browser shows an ERR_NETWORK_CHANGED error page. If anyone knows how to resolve this problem, I would be very grateful! :heart:

1 Like

fyi, the repo gives a 404 - so I can’t comment on the code… I assume you push the image, and have played with Picam.set_fps and https://hexdocs.pm/picam/Picam.html#set_sensor_mode/1

which rpi is it? the original is dogslow compared to the rpi3… also try pointing the kiosk to a static page and access the nerves/phoenix server from a desktop browser to check performance…

one idea would be to use http://jsmpeg.com - but I think you have to mux to ts, and I’m not sure how easy that is to do…

another idea would be to add a thermal printer, so it prints 4 monochrome images out like a classic photobooth…

3 Likes

Ah! Good catch; I still had the repository set to private. Fixed now :slight_smile:

I am using a Raspberry Pi 3, and visit a static Phoenix page using its kiosk. This page contains a canvas (at first I tried an image tag with an MJPG source like the Picam example uses, but the qt-webengine-kiosk will only show the first image of the stream), and it periodically loads a new image by calling another Phoenix endpoint.

2 Likes

random thoughts:
I think Picam.set_size is a resize op (so not totally free)… try Picam.set_sensor_mode(4) as well, to ensure the raw image isn’t 3k+ px wide (binning should also improve quality), and set set_size to full, half or quarter of the raw image size (depends on which camera, v1 or v2, you have) for optimal resize quality/speed. Also set Picam.set_fps to something reasonable like 5 to begin with… then see if that stream thing can be made to work…
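roughly what I mean, as a sketch (exact numbers depend on which camera module you have and what size you want to end up with):

  # sensor mode 4 is a binned, full field-of-view mode on both camera versions,
  # so the raw frame stays well under 3k px wide
  Picam.set_sensor_mode(4)
  # pick a clean fraction of the raw size so the resize stays cheap
  Picam.set_size(640, 480)
  # start low and work upwards once the pipeline keeps up
  Picam.set_fps(5)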

I would also try to slow down the stream example, but ymmv eg:

  # same recursive loop as the Picam streaming example, with a sleep between
  # frames to throttle the stream
  defp send_pictures(conn) do
    send_picture(conn)
    :timer.sleep(100)
    send_pictures(conn)
  end

another thing that caught my eye is that you are taking two images and IO.inspect()ing what I assume is a sizeable jpg in the take_picture function… so maybe stick to one image and don’t log out a big binary…

if you are adventurous you can try and bump the qt build from 5.6 to 5.10 and get a much more recent chrome/webengine with BR2_PACKAGE_QT5_VERSION_LATEST=y in nerves_defconfig - I do assume breakage though.

I did it with Drab, pushing the image base64-encoded… I don’t remember the specific latency (as it was turning on LEDs to illuminate first, waiting for the camera to adjust and then snapping the photo) - but if you haven’t tried Drab, it’s maybe a good time to do so; it should give you tighter control of the update loop, and you avoid writing client js…

3 Likes

Cool project! @mobileoverlord and @electricshaman have spent a lot more time on the Le Tote kiosk system than I have, but I’m surprised that it doesn’t load content from localhost. Is it possible that the Phoenix server you’re running is not listening on the localhost interface or is blocking the page due to other security constraints? If you’re not sure, I’ve found it really helpful to remotely attach to the Chrome Remote Debugger from my laptop, so I can see the embedded browser’s error console and such. We have an internal way that we do that at Le Tote, but I don’t fully understand how it works myself, so I’ll defer to the others to explain that process.

In my experience, using a Plug-based MJPEG video streamer with Picam is significantly more performant than passing frames individually over a Phoenix Channel connection. I suspect it’s mostly the overhead of re-encoding the frames as Base64 and wrapping them in the Channel payload, because it’s slow even when you’re viewing it on a modern laptop.
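For reference, the Plug-based approach boils down to something like this (a trimmed-down sketch in the spirit of the Picam streaming example; module and boundary names are arbitrary):

  # module and boundary names are placeholders
  defmodule Photobooth.MjpegStreamPlug do
    import Plug.Conn

    @boundary "FRAME"

    def init(opts), do: opts

    def call(conn, _opts) do
      conn
      |> put_resp_header("content-type", "multipart/x-mixed-replace; boundary=#{@boundary}")
      |> send_chunked(200)
      |> send_frames()
    end

    # each chunk is one JPEG wrapped in a multipart boundary; the browser keeps
    # replacing the picture in the img tag as new parts arrive
    defp send_frames(conn) do
      jpeg = Picam.next_frame()

      part =
        "--#{@boundary}\r\nContent-Type: image/jpeg\r\nContent-Length: #{byte_size(jpeg)}\r\n\r\n" <>
          jpeg <> "\r\n"

      case chunk(conn, part) do
        {:ok, conn} -> send_frames(conn)
        {:error, :closed} -> conn
      end
    end
  end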

When using MJPEG streaming to a laptop, I’m able to get smooth video performance even at 1080p, but I haven’t tried doing it locally in a kiosk the way you’re describing. I think the “right” way to do that is probably to avoid sending it through JavaScript at all, as you mentioned, but I think we’d need to add some more features to Picam to support that.

5 Likes

Yes, this is definitely true, and Picam has an example of how to set it up so that it streams directly into an <img /> tag. This works in Chrome and Firefox on my laptop. However, the QtWebkit that the Le Tote kiosk uses will, for some reason (an old version of Webkit?), only render the first frame of the MJPEG stream, which means that this option is not possible.

Attaching the remote debugger is a very smart idea :+1:.

1 Like

Hi,

Sorry for coming to this thread late.

In my experience, using a Plug-based MJPEG video streamer with Picam is significantly more performant than passing frames individually over a Phoenix Channel connection.

Yes, Base64-encoding and sending over Phoenix channels gets pretty juddery. I’ve had pretty good success with raw websockets, sending the individual JPEGs as binary messages.

I’ll look into opening the projects if anyone’s interested.

2 Likes

Yes, please!

1 Like

I’ve realised that we’ve a public version already. I recall it took a while to figure out but it’s actually super simple.

It uses a Cowboy (1.x) websocket handler to serve the images as binary frames. It could do with a bit more of a tidy, tbh, but anyway:

The server is set up here: https://github.com/CultivateHQ/marty-nerves/blob/master/apps/image_server/lib/image_server/application.ex

The websocket handler is here: https://github.com/CultivateHQ/marty-nerves/blob/master/apps/image_server/lib/image_server/images_from_camera_websocket_handler.ex . It sends an image once every 50 milliseconds.
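Condensed, the handler boils down to roughly this (names simplified, and Picam.next_frame/0 stands in for however the images are actually produced; the linked file is the real thing):

  defmodule ImageServer.CameraWebsocketHandler do
    @behaviour :cowboy_websocket_handler

    def init(_type, _req, _opts), do: {:upgrade, :protocol, :cowboy_websocket}

    def websocket_init(_transport, req, _opts) do
      send(self(), :send_frame)
      {:ok, req, nil}
    end

    def websocket_handle(_frame, req, state), do: {:ok, req, state}

    # every 50 ms, push the latest JPEG as a single binary websocket frame -
    # no Base64 and no channel envelope around it
    def websocket_info(:send_frame, req, state) do
      Process.send_after(self(), :send_frame, 50)
      {:reply, {:binary, Picam.next_frame()}, req, state}
    end

    def websocket_terminate(_reason, _req, _state), do: :ok
  end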

The JavaScript on the other end is here: https://github.com/CultivateHQ/marty-nerves/blob/master/apps/marty_web/assets/js/camera.js

4 Likes