Nerves + Raspberry Pi 5 + Camera Module 3 not working

Hi Nerves friends,

I’m running Nerves on Raspberry Pi 5 and want to record video with my Raspberry Pi Camera Module 3.

On both a brand new mix nerves.new firmware burn and with nerves_livebook_rpi5.fw, I get this:

iex(1)> cmd("libcamera-hello --list-cameras")

No cameras available!
0

On another SD card I’m running Raspbian, and the camera works just fine.

Has anybody tried Nerves + Raspberry Pi 5 + Camera Module 3, or have any ideas what I’m missing?

Thanks :slight_smile:


I added support for the new Global Shutter Camera to the rpi4 system; it should be a very similar update, and I’m sure the team would love a contribution: adds the imx296 overlays for the Raspberry Pi Global Shutter Camera by entone · Pull Request #221 · nerves-project/nerves_system_rpi4 · GitHub


Camera Module 3 seems to be using imx708, which is already present in nerves_system_rpi5 in the way you added imx296 in the linked PR.


I am also running into this on the master branch of the Nerves rpi5 system. Strangely, dmesg shows my imx708 is detected, but libcamera-vid does not show any cameras.

Has anyone gotten this setup to work?


I’m working on getting the IMX296 Global Shutter Camera working on Raspberry Pi 5. I think there are a few things I’ll need to do, so I’m documenting them here.

The plan: 1) update config.txt to NOT autodetect cameras and provide a custom overlay. 2) update fwup.conf to point to the new, modified config.txt.

Nerves seems to support editing the boot partition, where config.txt lives: Advanced Configuration — nerves v1.11.3

I’ve now:

  1. Copied the base config.txt to my config/ folder:
cp deps/nerves_system_rpi5/config.txt config/
  2. Copied the fwup.conf to my config/ folder as well:
cp deps/nerves_system_rpi5/fwup.conf config/
  3. Updated config.exs to point to the new fwup.conf:
# To use our own fwup.conf, which will pull in our modified /boot/firmware/config.txt
# instead of the default config.txt to support the global shutter camera
config :nerves, :firmware,
  rootfs_overlay: "rootfs_overlay",
  fwup_conf: "config/fwup.conf"
  4. In config/config.txt, found and disabled camera_auto_detect and added dtoverlay=imx296.

This Raspberry Pi camera documentation is where I learned that to use the IMX296 GS camera, I can’t have camera auto-detection on, and I also need to add this new overlay.

# Automatically load overlays for detected cameras
# This is disabled for this project which uses an IMX296 Global Shutter Camera
camera_auto_detect=0
# It will default to checking camera connector 1.
dtoverlay=imx296
  5. In my config/fwup.conf, found (around line 12 or 13) and modified where it expects to find config.txt so that it points to my new config/config.txt.

It now reads:

file-resource config.txt { host-path = "${NERVES_APP}/config/config.txt" }
  6. Ensured the camera ribbon cable is inserted correctly. The plastic stiffener faces the CPU on the Raspberry Pi 5, a reversal from previous boards. I put the cable into the connector marked “cam1” because “If your Raspberry Pi has two camera connectors (Raspberry Pi 5 or one of the Compute Modules, for example), then you can specify the use of camera connector 0 by adding ,cam0 to the dtoverlay that you used from the table above. If you do not add this, it will default to checking camera connector 1.” (Raspberry Pi Docs).
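As a quick sanity check from IEx on the device (using Toolshed’s cmd/1, as elsewhere in this thread), I can grep the kernel log to see whether the sensor driver probed at all:

```elixir
# In IEx on the device; Toolshed's cmd/1 runs a shell command and prints its output.
cmd("dmesg | grep -i imx296")
```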

Unfortunately, when I ssh nerves.local to start an iex session on the device, it doesn’t seem to have helped:

cmd("libcamera-still --list")
No cameras available!

or

cmd("libcamera-still -n -v -o /data/test.jpeg")
...elided...
ERROR: *** no cameras available ***

I’ll post a followup if I succeed, but if anyone has any ideas they want to see me try, I’m all ears.


I have a similar issue with imx708; you can find a recent conversation about this on the Nerves Slack. Frank Hunleth (along with the Nerves community) is working on it.

@alvises were you bringing the camera kit to NervesConf EU?


yeah :smiley:

We’re able to list cameras and take images now. Update to the nerves_system_rpi5 v0.6.3 release and you’ll get the fixes. The change notes have details. Please post if you have any more camera issues.
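For anyone hitting this later: pulling in the fix means bumping the system dependency in mix.exs, roughly like this (version from the release mentioned above; the rest of the deps list elided):

```elixir
# mix.exs deps — bump the rpi5 system to pick up the camera fixes
{:nerves_system_rpi5, "~> 0.6.3", runtime: false, targets: :rpi5}
```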


I am trying to capture video with libcamera-vid. The RPi Camera Module 3 is listed and I can also create JPEGs with libcamera-still, but running libcamera-vid results in the following error:

iex(17)> cmd "rpicam-vid -t 1000 -o /data/vid"
[0:45:19.682821444] [557]  INFO Camera camera_manager.cpp:326 libcamera v0.5.0
[0:45:19.683841672] [558]  INFO RPI pisp.cpp:720 libpisp version 000000000000-invalid 12-10-2025 (22:38:43)
[0:45:19.685148235] [558]  WARN CameraSensorProperties camera_sensor_properties.cpp:473 No static properties available for 'imx708_wide_noir'
[0:45:19.685167253] [558]  WARN CameraSensorProperties camera_sensor_properties.cpp:475 Please consider updating the camera sensor properties database
[0:45:19.787485708] [558]  WARN CameraSensor camera_sensor_legacy.cpp:501 'imx708_wide_noir': No sensor delays found in static properties. Assuming unverified defaults.
[0:45:19.788254527] [558]  INFO RPI pisp.cpp:1179 Registered camera /base/axi/pcie@1000120000/rp1/i2c@88000/imx708@1a to CFE device /dev/media0 and ISP device /dev/media2 using PiSP variant BCM2712_C0
Preview window unavailable
Mode selection for 640:480:12:P
      SRGGB10_CSI2P,1536x864/0 - Score: 1486.67
      SRGGB10_CSI2P,2304x1296/0 - Score: 1786.67
      SRGGB10_CSI2P,4608x2592/0 - Score: 2686.67
Stream configuration adjusted
[0:45:19.792776996] [557]  INFO Camera camera.cpp:1205 configuring streams: (0) 640x480-YUV420 (1) 1536x864-BGGR_PISP_COMP1
[0:45:19.792992720] [558]  INFO RPI pisp.cpp:1483 Sensor: /base/axi/pcie@1000120000/rp1/i2c@88000/imx708@1a - Selected sensor format: 1536x864-SBGGR10_1X10 - Selected CFE format: 1536x864-PC1B
ERROR: *** Unable to find an appropriate H.264 codec ***
255

libav is not compiled into rpicam-apps, and I can only select --codec yuv420, but the created file is not readable or recognized by ffmpeg.

I tried to compile rpicam-apps and/or libcamera in my custom nerves_system_rpi5 but had no success so far, as I could not find the libav options in any of the packages.

edit: looking at the source:

ifeq ($(BR2_PACKAGE_FFMPEG)$(BR2_PACKAGE_LIBDRM),yy)

it looks like I need to activate both ffmpeg and libdrm. I only tried ffmpeg :roll_eyes:

edit2: this seems to be the real source:

rpicam-apps is provided as an external package to Buildroot.

edit3: actually, I only need to rebuild rpicam-apps after adding ffmpeg, as libdrm was already installed as a dependency of rpi-libcamera? Still a little bit confused by now… :thinking:


Solved it, I needed to add

BR2_PACKAGE_FFMPEG=y
BR2_PACKAGE_FFMPEG_GPL=y
BR2_PACKAGE_X264=y

to have rpicam-apps compiled with libav and x264 support.


@alfredfriedrich Thanks for posting your solution!


I am glad I can give something back.

My current command call is

rpicam-vid -t 0 --framerate -1.0 --width 1536 --height 864 --codec libav --profile Main --libav-video-codec libx264 --libav-format h264 --low-latency true --nopreview -o -

And I am trying to use the Membrane framework to stream that to a browser with HLS, using hls.js in a LiveView, but I get a delay of around 15s.

My final goal is to recreate the nx_hailo setup with Membrane instead of evision, to have a custom plugin/filter that can be configured via a LiveView page somehow. Still a work in progress :wink:
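In case it helps anyone wiring this up: here is a minimal sketch of reading the H.264 bytes from rpicam-vid’s stdout with an Erlang port, which is roughly what a Membrane source would wrap. The flags are taken from the command above; this is an assumption-laden sketch, not my actual pipeline:

```elixir
# Spawn rpicam-vid and stream its stdout (-o -) into the BEAM as binaries.
port =
  Port.open(
    {:spawn_executable, System.find_executable("rpicam-vid")},
    [
      :binary,
      :exit_status,
      args: ~w(-t 0 --width 1536 --height 864 --codec libav
               --libav-video-codec libx264 --libav-format h264 --nopreview -o -)
    ]
  )

# Each message carries a chunk of the raw H.264 byte stream.
receive do
  {^port, {:data, chunk}} -> IO.inspect(byte_size(chunk), label: "H.264 bytes")
end
```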


The rpi5 doesn’t have hardware video encoders, and software encoding (on the CPU) can be slow, though nowhere near 15s. If you set up a WebRTC stream with Membrane or Boombox to your LiveView, you should get much better latency.

If I stream with --codec libav --libav-format mpegts from the command line and watch it with ffplay, the delay is under 1s. The same setup with --codec libav --libav-format h264 gets a delay of around 2-3s.

I guess the delay is not due to my Membrane setup per se, but to the combination of target_window_size and the buffer in the hls.js player.
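For reference, if this is Membrane’s HTTP adaptive stream plugin, the window is configured on the sink; something like the sketch below. The option and module names are my best recollection of membrane_http_adaptive_stream_plugin, so treat them as assumptions and check the docs:

```elixir
# Hypothetical sketch: shrink the HLS window to reduce player-side buffering.
child(:hls_sink, %Membrane.HTTPAdaptiveStream.SinkBin{
  manifest_module: Membrane.HTTPAdaptiveStream.HLS,
  # a smaller window keeps fewer segments, which lowers startup latency
  target_window_duration: Membrane.Time.seconds(10),
  storage: %Membrane.HTTPAdaptiveStream.Storages.FileStorage{directory: "output"}
})
```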

I tried WebRTC with Membrane but, IIRC, was not able to get a connection between player and sink without an ICE server (I passed an empty list in the JS and in the pipeline). But I need an offline-capable solution.

HLS (HTTP Live Streaming, for those following along at home) has a lot of inherent latency; I see reports of between 6 and 30s.

Like Damir said: WebRTC, WHIP and so on are much lower latency.