Beaglebone 1-Wire Help - trying to enable Linux driver support for one wire GPIO

I’ve been unsuccessfully trying to enable Linux driver support for 1-wire GPIO on my BeagleBone Green board and am looking for a couple of pointers. All of the Nerves-specific discussion around 1-wire that I have been able to find focuses on the Raspberry Pi and its “config.txt” file, which doesn’t apply to a BeagleBone. The BeagleBone configuration method also seems to have changed at some point (from capemanager to U-Boot parameters). My understanding is that the current method is to apply a device tree overlay (per the nerves_system_bbb docs), but I have not yet gotten that method to work.

To do this I have run cmd("fw_setenv uboot_overlay_addr7 /lib/firmware/BB-W1-P9.12.00a0.dtbo") inside the device’s IEx terminal and confirmed that the variable exists with cmd("fw_printenv"). I have a 1-wire temperature sensor wired to P9-12. After rebooting I don’t see anything in /sys/bus, /sys/devices, or /dev that would suggest 1-wire is loaded. Based on other general BeagleBone research, I also zeroed out the eMMC in case it was interfering with the Nerves SD card’s U-Boot, but that didn’t change anything. I’ve tried both uboot_overlay_addr7 and uboot_overlay_addr0 without seeing any change. cmd("modprobe wire") and cmd("modprobe w1-gpio") both fail, indicating the modules can’t be found.
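Concretely, the checks I ran boil down to something like this from the device’s IEx prompt (the /sys/bus/w1 path is what I’d expect to see if the overlay loaded; it’s an assumption on my part):

```elixir
# Run from the device's IEx prompt (cmd/1 is the Toolshed helper).
# Point a U-Boot overlay slot at the .dtbo shipped in the image:
cmd("fw_setenv uboot_overlay_addr7 /lib/firmware/BB-W1-P9.12.00a0.dtbo")

# Confirm the variable stuck:
cmd("fw_printenv uboot_overlay_addr7")

# After a reboot, I'd expect a w1 bus directory to appear, but on my
# board this returns {:error, :enoent}:
File.ls("/sys/bus/w1/devices")
```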

Any advice on where to go next would be much appreciated. A couple immediate questions I have:

  • Is there a way to tell whether 1-wire is actually enabled but my sensor isn’t hooked up correctly? I’ve double-checked and am pretty confident in my sensor’s wiring, and have assumed that at least a w1 directory would pop up somewhere if the overlay loaded but the sensor was miswired - is that accurate?
  • I wasn’t able to get a good grasp on which uboot_overlay_addr to use. I vaguely understand that 0-4 are for “default” or “autodetect” capes and 5-7 are for “user” or “manual” overlays, but I’ve seen tutorials that use both 0 and 7, which is why those are the ones I tried. Does the particular uboot_overlay_addr slot I load the overlay into matter, and if so, which should I use for enabling 1-wire on Nerves?
  • Do I need a custom Nerves image to enable 1-wire? My next step was to try this anyway, just to poke around in the Buildroot settings and see if there was something I could turn on to enable 1-wire support (maybe it’s not built in by default?). Unfortunately I hit a couple of difficulties getting my build environment set up, which I am still working through. In the meantime, if someone has input on whether this is likely to resolve my issue, that would be a big help!
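For context, the end goal is the usual sysfs read of a DS18B20 over 1-wire. A sketch of what I’m hoping to eventually run, assuming the w1 bus shows up (the 28-* family-code prefix is standard for DS18B20s, but everything here is untested on my board):

```elixir
# Sketch: read the first DS18B20 on the w1 bus, returning degrees C.
# DS18B20s appear under family code "28-"; the actual ID varies per sensor.
with {:ok, ids} <- File.ls("/sys/bus/w1/devices"),
     [id | _] <- Enum.filter(ids, &String.starts_with?(&1, "28-")),
     {:ok, data} <- File.read("/sys/bus/w1/devices/#{id}/w1_slave") do
  # The w1_slave file ends with "t=<temperature in millidegrees C>"
  [_, milli_c] = Regex.run(~r/t=(-?\d+)/, data)
  String.to_integer(milli_c) / 1000
end
```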

Any and all suggestions are welcome and appreciated!


You may want to try setting the overlay in fwup_include/provisioning.conf and see if that makes a difference. Here’s an example of that in one of my custom Nerves systems (line 23):
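In broad strokes, the entry is an fwup uboot_setenv line baked into the firmware image at provisioning time. A sketch (the slot and .dtbo path here are assumptions matching this thread’s case, not copied from my system):

```
# fwup_include/provisioning.conf (sketch -- adapt the slot number and
# overlay path to the overlay you actually want loaded)
uboot_setenv(uboot-env, "uboot_overlay_addr7", "/lib/firmware/BB-W1-P9.12.00a0.dtbo")
```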

It could also help to try another overlay, to determine whether loading any overlay is the problem or the problem is specific to the 1-wire overlay. If you use the ADC overlay BB-ADC-00A0.dtbo, it should show up at /sys/bus/iio/devices/iio:device0.
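A quick way to check that from IEx — a sketch; the in_voltage0_raw channel name is the conventional IIO sysfs name, but I haven’t verified it for this exact overlay:

```elixir
# If the ADC overlay loaded, the IIO device directory should exist:
File.dir?("/sys/bus/iio/devices/iio:device0")

# and a raw ADC channel should be readable, e.g.:
File.read("/sys/bus/iio/devices/iio:device0/in_voltage0_raw")
```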

Once you get the overlay loaded and it shows up in sysfs, the pins should be muxed correctly. If not, use the config_pin command on the device to set that.

Hopefully this gets you one step further in the right direction. :crossed_fingers:


Thanks for the suggestions @amclain.

I can’t do exactly what you did until I get Buildroot up and working, but I copied and edited the existing provisioning.conf and used the NERVES_PROVISIONING=/path/to/provisioning.conf environment variable to override it. I could see that my additions were in the output of fw_printenv as expected, but I could not get either the ADC or the 1-wire device tree overlay to actually show up in sysfs.

I guess it’s looking more and more like I need to just bite the bullet and get a full custom image up and running…


Ok, so I’ve got an environment which seems to almost build a custom Buildroot image, but I’m stuck on some confusing issues. In particular, I can run make menuconfig and poke around, but the actual build eventually fails because O_BINARY or O_SEARCH are not defined. This is really confusing to me because I thought Buildroot was supposed to bring its own toolchain and glibc and everything along with it, so the actual host only needed just enough to get that bootstrapped. Do these compilation errors mean anything to someone?


Buildroot brings in a toolchain. Can you post the build output so we can see the compilation errors?


Here is the result of running make. I’m using this development environment. On this computer it seems like there is also a timezone_t issue of some sort, but I suspect it’s all symptoms of the same underlying issue.

>>> host-tar 1.29 Building
PATH="/home/larry/projects/elixirNerves/nerves_system_bbb_cockpit/.nerves/artifacts/nerves_system_bbb_cockpit-portable-0.1.0/host/bin:/home/larry/projects/elixirNerves/nerves_system_bbb_cockpit/.nerves/artifacts/nerves_system_bbb_cockpit-portable-0.1.0/host/sbin:/run/wrappers/bin:/usr/bin:/usr/sbin:/nix/store/n1jgqr8xzjz9shn3ads5x07p8lqn5rqk-patchelf-0.14.5/bin:/nix/store/47n5hzqpahs7yv84ia6cxp3jg9ca8r86-coreutils-9.0/bin:/nix/store/6ib6hn9fq8mgkdq2nq5f7kz050p49rp2-findutils-4.9.0/bin:/nix/store/685c5dr4agkf7vx8ya7f1r9rd9qwg2ri-diffutils-3.8/bin:/nix/store/sppjn85p06m1il70kd05drg1j26cjxd3-gnused-4.8/bin:/nix/store/49vp3yp54fqliy7k8gvxsybd50l9a82f-gnugrep-3.7/bin:/nix/store/fr7vrxblkj327ypn3vhjwfhf19lddqqd-gawk-5.1.1/bin:/nix/store/5p3qyadsv163m7zvqssiw80zh6xfv2jv-gnutar-1.34/bin:/nix/store/2bwqikh67y1231ccb71gjfrggwjw066q-gzip-1.12/bin:/nix/store/wjf2554ffvap47vanabh9lk0dmj1q295-bzip2-" PKG_CONFIG="/home/larry/projects/elixirNerves/nerves_system_bbb_cockpit/.nerves/artifacts/nerves_system_bbb_cockpit-portable-0.1.0/host/bin/pkg-config" PKG_CONFIG_SYSROOT_DIR="/" PKG_CONFIG_ALLOW_SYSTEM_CFLAGS=1 PKG_CONFIG_ALLOW_SYSTEM_LIBS=1 PKG_CONFIG_LIBDIR="/home/larry/projects/elixirNerves/nerves_system_bbb_cockpit/.nerves/artifacts/nerves_system_bbb_cockpit-portable-0.1.0/host/lib/pkgconfig:/home/larry/projects/elixirNerves/nerves_system_bbb_cockpit/.nerves/artifacts/nerves_system_bbb_cockpit-portable-0.1.0/host/share/pkgconfig"  /usr/bin/make -j17  -C /home/larry/projects/elixirNerves/nerves_system_bbb_cockpit/.nerves/artifacts/nerves_system_bbb_cockpit-portable-0.1.0/build/host-tar-1.29/
/usr/bin/make  all-recursive
Making all in doc
make[4]: Nothing to be done for 'all'.
Making all in gnu
/usr/bin/make  all-recursive
  CC       fprintftime.o
  CC       modechange.o
  CC       priv-set.o
  CC       openat-die.o
  CC       quotearg.o
  CC       progname.o
  CC       parse-datetime.o
  CC       safe-read.o
  CC       safe-write.o
  CC       save-cwd.o
  CC       savedir.o
  CC       se-context.o
  CC       se-selinux.o
  CC       stat-time.o
  CC       statat.o
  CC       strftime.o
  CC       strnlen1.o
In file included from strftime.c:31:
strftime.h:29:19: error: unknown type name ‘timezone_t’; did you mean ‘timer_t’?
   29 |                   timezone_t __tz, int __ns);
      |                   ^~~~~~~~~~
      |                   timer_t
In file included from strftime.c:29,
                 from fprintftime.c:2:
fprintftime.h:29:21: error: unknown type name ‘timezone_t’; did you mean ‘timer_t’?
   29 |                     timezone_t zone, int nanoseconds);
      |                     ^~~~~~~~~~
      |                     timer_t
modechange.c: In function ‘mode_compile’:
modechange.c:161:63: error: ‘S_IRWXUGO’ undeclared (first use in this function); did you mean ‘S_IRWXO’?
  161 |                    ? (mode & (S_ISUID | S_ISGID)) | S_ISVTX | S_IRWXUGO
      |                                                               ^~~~~~~~~
      |                                                               S_IRWXO
modechange.c:161:63: note: each undeclared identifier is reported only once for each function it appears in
make[6]: *** [Makefile:1834: modechange.o] Error 1
make[6]: *** Waiting for unfinished jobs....
strftime.c:388:28: error: unknown type name ‘timezone_t’; did you mean ‘timer_t’?
  388 | # define extra_args_spec , timezone_t tz, int ns
      |                            ^~~~~~~~~~
strftime.c:411:37: note: in expansion of macro ‘extra_args_spec’
  411 |                 const struct tm *tp extra_args_spec LOCALE_PARAM_PROTO)
      |                                     ^~~~~~~~~~~~~~~
In file included from fprintftime.c:2:
strftime.c:388:28: error: unknown type name ‘timezone_t’; did you mean ‘timer_t’?
  388 | # define extra_args_spec , timezone_t tz, int ns
      |                            ^~~~~~~~~~
strftime.c:411:37: note: in expansion of macro ‘extra_args_spec’
  411 |                 const struct tm *tp extra_args_spec LOCALE_PARAM_PROTO)
      |                                     ^~~~~~~~~~~~~~~
strftime.c:388:28: error: unknown type name ‘timezone_t’; did you mean ‘timer_t’?
  388 | # define extra_args_spec , timezone_t tz, int ns
      |                            ^~~~~~~~~~
strftime.c:1457:34: note: in expansion of macro ‘extra_args_spec’
 1457 |              const struct tm *tp extra_args_spec LOCALE_PARAM_PROTO)
      |                                  ^~~~~~~~~~~~~~~
strftime.c:388:28: error: unknown type name ‘timezone_t’; did you mean ‘timer_t’?
  388 | # define extra_args_spec , timezone_t tz, int ns
      |                            ^~~~~~~~~~
strftime.c:1457:34: note: in expansion of macro ‘extra_args_spec’
 1457 |              const struct tm *tp extra_args_spec LOCALE_PARAM_PROTO)
      |                                  ^~~~~~~~~~~~~~~
make[6]: *** [Makefile:1834: fprintftime.o] Error 1
make[6]: *** [Makefile:1834: strftime.o] Error 1
save-cwd.c: In function ‘save_cwd’:
save-cwd.c:67:26: error: ‘O_SEARCH’ undeclared (first use in this function)
   67 |   cwd->desc = open (".", O_SEARCH);
      |                          ^~~~~~~~
save-cwd.c:67:26: note: each undeclared identifier is reported only once for each function it appears in
make[6]: *** [Makefile:1834: save-cwd.o] Error 1
make[5]: *** [Makefile:1854: all-recursive] Error 1
make[4]: *** [Makefile:1524: all] Error 2
make[3]: *** [Makefile:1338: all-recursive] Error 1
make[2]: *** [Makefile:1277: all] Error 2
make[1]: *** [package/ /home/larry/projects/elixirNerves/nerves_system_bbb_cockpit/.nerves/artifacts/nerves_system_bbb_cockpit-portable-0.1.0/build/host-tar-1.29/.stamp_built] Error 2
make: *** [Makefile:23: _all] Error 2

The build hasn’t gotten to the cross-compilation parts yet. Buildroot is building tar for your computer first. Why? I’m not sure, but I trust that the Buildroot maintainers had a good reason for not relying on whatever version you already have on your computer.

Since I don’t use NixOS, I don’t know why those header files aren’t being found on your system. There are a couple of NixOS users of Nerves, and these instructions were contributed some time ago. Could you open an issue if they’re not working? I’ll try to get hold of the NixOS users to help update them.


Chiming in to say that I’ve been down the road of trying to get a custom Nerves system to build within a nix-shell, but kept running into more and more path issues once I got into the compilation.

I’ve used nix-shell (with the config from the Nerves docs) successfully for compiling and burning normal Nerves projects that rely on a pre-built system artifact. However, I ended up reverting to a docker-compose method for compiling custom systems from within NixOS.

It’s been too long since I’ve updated it, but here’s (an old version of) the starter repo I use for docker projects:

If you do decide to keep trekking toward custom system compilation from within NixOS or nix-shell, I’d be interested to learn how you get there.


Thanks for the feedback everyone!

@fhunleth :
Is “host-tar” actually tar for my build machine in buildroot parlance? As I understand it the standard terminology is host build or target where build would be my PC, host would be the Beaglebone, and target gets kind of convoluted - the “target” of a program I’m cross compiling (e.g. if I build GCC on my x86 PC to run on the BB and compile for RISC-V the host would be the BB, the build would be my x86 PC and the target would RISC-V. If buildroot follows a different convention that is definitely good to know.

The good news is I don’t think there is anything incorrect about those NixOS instructions; they provide a perfectly workable environment for building a Nerves project based on a prebuilt Nerves artifact. Unfortunately, they aren’t sufficient for a from-scratch Buildroot build.


Appreciate the docker example - I may divert to that if I can’t get nix-shell to work. Based on @fhunleth’s input that I’m currently failing while producing the toolchain, I’m going to try reverting to a standard nix-shell, which sets up most of the build environment for me, and manually symlink /usr/bin/file. If I can get that working, then maybe I’ll try to figure out how to automate the /usr/bin/file symlink in the shell.


In this context, the word host refers to your x86 PC. The word target is the Beaglebone. If you had a different board that had a RISC-V processor on it, the target would be for that board. Luckily (?), you haven’t gotten far enough for Buildroot to do anything for the target. Buildroot is trying to compile tar(1) for your x86 PC. That’s why it is called host-tar.

Also, I tried not to use the word build. My intention with the word build previously was as a verb, meaning to compile and link the tar(1) program.

I hope this at least clears up the terminology.


Understood, I’ll interpret those terms differently going forward - thank you!

Not really the central issue here, but to defend my confusion a little: this usage by Buildroot does still seem to conflict with the prevailing usage I’m familiar with from GCC, Canadian cross builds, build systems I’ve used, etc.

Just to document where I currently am with this: I’ve still been struggling and have made only mixed progress. With a lot of effort and ugly hacks I was able to get NixOS to a state where it could build basic BBB images from upstream Buildroot. Unfortunately, when I went back to building through mix with the same environment, the extra packages enabled by Nerves opened up new issues. At that point I decided I was playing a game of whack-a-mole that I was not going to win and started looking at other options…

My approach tonight was to manually turn on the Nerves Docker infrastructure from WSL and macOS. Just to start from a working config, I compiled the cockpit project @amclain linked, modifying his mix.exs nerves_package config with build_runner: Nerves.Artifact.BuildRunners.Docker, which worked (after getting my host OTP version to match his target version, and a couple of other unimportant details). I like that this approach shares the upstream Nerves build infrastructure. It’s a little unfortunate that I have to reach into the dependency to make this change, but given that this is for custom images, that seems a reasonable tradeoff.
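For anyone following along, the change amounts to this in the custom system’s mix.exs — a sketch of the nerves_package keyword list with everything unrelated elided:

```elixir
# In the custom Nerves system's mix.exs (sketch; other keys unchanged)
defp nerves_package do
  [
    type: :system,
    # Build inside the upstream Nerves Docker container instead of locally:
    build_runner: Nerves.Artifact.BuildRunners.Docker
    # ...remaining nerves_package options as in the original project
  ]
end
```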

The other thing I noticed is that Nerves.Artifact.BuildRunners.Docker is incompatible with alias docker=podman because of a version-check mismatch. I know this is a little picky, but it would be nice if Nerves could support podman, because then the nix-shell would only have to provide the podman executable to supply the full environment. Docker proper also requires enabling the Docker daemon, which is a system-level change. Is podman support something Nerves would be open to? I could try to work on a PR.

I’ll work on cleaning up this current setup and putting it up on GitHub for people to take a look at if they are interested.