I’m using Elixir 1.18.4 with OTP 27 (I’ve tested this across multiple versions of both Elixir and Erlang), and surprisingly, the expression nil > 25 returns true. Is this correct?
Both ChatGPT and Claude stated that this should not be possible — that a comparison like this should raise an ArgumentError, as nil cannot be compared to numbers using > or <. They even suggested that something might be wrong with my machine.
Yes, comparison operators can be used across types. As for why nil > 25 returns true, see the good ol’ docs. Of note: nil is simply syntactic sugar for the atom :nil (you can prove it with iex> nil == :nil).
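A quick iex session makes it concrete (atoms compare greater than numbers in the term ordering, so any atom, nil included, is "greater than" any number):

iex> nil == :nil
true
iex> nil > 25
true
iex> :any_atom > 999_999
true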
I see. I’ve been programming in Elixir for about 10 years, and I had never run into this “behavior” before—perhaps because I always used is_nil/1 or is_number/1 in guards. But today, my tests started failing after I added a guard like when ver > 25.
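For completeness, here's a stripped-down sketch of the kind of clause that bit me (MyApp.Version is just a made-up module name for illustration); a nil version sails straight through the > guard instead of raising:

defmodule MyApp.Version do
  # nil > 25 is true, so a nil `ver` matches the first clause
  def check(ver) when ver > 25, do: :supported
  def check(_ver), do: :unsupported
end

iex> MyApp.Version.check(nil)
:supported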
What surprised me is that two AIs insisted it couldn’t be true. ChatGPT even gave me several links “proving” it would raise an error, but those links pointed to Elixir pages that didn’t actually mention anything related to this specific case.
You sound like one of those people who actually know how to use a dynamic programming language properly. Also, we’re in the BEAM world here, so I believe you meant “behaviour”.
As a grumpy old man jumping on the opportunity to rant: this has been fairly common recently, “common” meaning this is the ~7th time in a couple of months I’ve seen a question online where people said they just asked an AI, and not just for Elixir questions! I’m sure it’ll all be “fixed” soon enough, but I still enjoy reading docs.
Back on track, another little gotcha is that true and false are also sugar for the atoms :true and :false, so when you throw nil into the mix they all compare alphabetically. I know a couple of people who got tripped up by this because they assumed that, since true is greater than false, false would also be greater than nil, but it’s not: alphabetically, false < nil < true. Though again, if you aren’t abusing dynamic languages you’ll never run into this.
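A quick demo in iex:

iex> Enum.sort([true, false, nil])
[false, nil, true]
iex> false > nil
false
iex> true > nil
true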
The LLMs appear to have transposed behavior over from Ruby, FWIW:
irb(main):001:0> nil > 25
Traceback (most recent call last):
4: from /usr/bin/irb:23:in `<main>'
3: from /usr/bin/irb:23:in `load'
2: from /Library/Ruby/Gems/2.6.0/gems/irb-1.0.0/exe/irb:11:in `<top (required)>'
1: from (irb):1
NoMethodError (undefined method `>' for nil:NilClass)
As you’ve noticed, the BEAM instead defines a total order (TLDR “everything compares to everything”).
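You can see the whole chain by sorting a mixed list: terms group by type first (number < atom < reference < function < port < pid < tuple < map < list < bitstring) and only compare by value within the same type:

iex> Enum.sort(["bin", [1, 2], {:ok, 1}, %{a: 1}, :atom, 3.14, 42])
[3.14, 42, :atom, {:ok, 1}, %{a: 1}, [1, 2], "bin"]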
This is mostly useful for code that handles opaque terms, like ETS’s ordered_set. The rest of the time you usually want to know (either by guards or by correct code) that you’re comparing terms of the expected type.
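For example (the :demo table name below is just a throwaway), an ordered_set keeps heterogeneous keys in a well-defined order without caring what their types are:

iex> t = :ets.new(:demo, [:ordered_set])
iex> :ets.insert(t, [{"key", 1}, {:key, 2}, {42, 3}])
true
iex> :ets.tab2list(t)
[{42, 3}, {:key, 2}, {"key", 1}]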
There’s a great writeup in The BEAM Book, including the rules for all the other kinds of terms.
Software developers in particular are prone to being taken in by these hazards, and few in the field seem to have ever had that “oh my, I can’t always trust my own judgement and reasoning” moment.
I no longer trust LLMs with quantitative questions that have objectively true answers. They’re just not very good at that, nor will they ever be perfect. They will, though, be as good as or better than a human, which is a good goal.
What I’m seeing now is AI programmed to come up with a “correct” answer in tightly confined answer spaces, giving confidence scores along the way.
While there is no runtime warning for the reasons explained above, it’s worth noting that the new type system might be able to catch it and warn at compile time when it is sure types are disjoint:
comparison between distinct types found:
nil > 25
given types:
nil > integer()
While Elixir can compare across all types, you are comparing across types which are always disjoint, and the result is either always true or always false
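For reference, a minimal sketch of the kind of code that could produce that diagnostic (assuming an Elixir release recent enough to run the type checker on comparisons; the module name is made up):

defmodule VersionCheck do
  # nil is typed nil and 25 is typed integer(); the checker knows these
  # are disjoint, so it can flag the comparison at compile time
  def always_true?, do: nil > 25
end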
I don’t see why this would be surprising. But then again, there have recently been quite a few topics where people couldn’t believe an AI could be wrong and would sooner assume there was a serious bug deep inside the BEAM.
Meanwhile, my only experience with LLMs has been frustration and wasted time. They complain about problems that don’t exist, offer solutions that don’t work, and always assure me that I’m right, even when I present mutually exclusive options and later find out for myself that neither of them is correct. I can’t remember a single occasion where an AI helped me solve something that wasn’t already trivial.
Sorry for the off-topic rant, I’m just becoming more and more irritated by AI.
Look, I wasn’t using LLMs for anything — I actually wrote code with the behavior in question and simply didn’t remember seeing it before in Elixir (but after @sodapopcan’s reply, I remembered that nil is an atom — I had just forgotten).
So this wasn’t a case of “vibe coding.”
I asked two LLMs about it. And they were both very convincing in confidently asserting that the code should raise an error. Attached is ChatGPT’s response (it’s in Portuguese, sorry).
P.S.: I’ve been programming since 1999, and so far I haven’t been able to use Cursor for anything beyond <TAB> completion helping me type less. All of my attempts to use an LLM to build something more complex ended in disaster.