I’m learning Haskell at the moment and contrasting its semantics with Elixir’s as I go.

In Haskell, if I create the following naive & inefficient prime number checker:

```
factors n = [x | x <- [1..n], n `mod` x == 0]
prime n = factors n == [1, n]
```

and run it against a large number like `10^9`, it returns `False` almost immediately: thanks to lazy evaluation, the list comparison stops as soon as the first mismatch occurs.

```
> prime 1000000000
False
```
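To convince myself that this is what’s happening, I can ask GHCi for just a prefix of the factor list (as I understand it, `==` on lists compares element by element and stops at the first mismatch):

```
> take 2 (factors 1000000000)
[1,2]
```

Since the second factor is `2` rather than `1000000000`, the comparison in `prime` can return `False` without ever forcing the rest of the list.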

If I do the same in Elixir, eager evaluation computes all the factors of `10^9` first, which is very slow with this implementation.

```
factor = fn n -> for x <- 1..n, rem(n, x) == 0, do: x end
prime = fn n -> factor.(n) == [1, n] end
prime.(1_000_000_000)
# hangs
```
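For comparison, I can recover the short-circuiting by hand with the `Stream` module. This is just a rough sketch of my own workaround (`prime_lazy` is a name I made up; taking two elements is enough because a prime’s only divisors are `1` and `n`, and any composite yields a smaller second divisor first):

```
prime_lazy = fn n ->
  1..n
  |> Stream.filter(fn x -> rem(n, x) == 0 end)  # divisors, produced lazily
  |> Enum.take(2)                               # stop after the first two
  |> Kernel.==([1, n])
end

prime_lazy.(1_000_000_000)
# returns false quickly, since only the first two divisors are materialized
```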

The two evaluation strategies make sense to me, but I still find it surprising that the BEAM runtime can’t see that the comparison against a two-element list is decided by the first two results of `factor.(1_000_000_000)`, and instead carries the computation through the entire range. I guess that despite the eager evaluation, I was still expecting some sort of sneaky optimization from the BEAM.

Can someone enlighten me as to why this kind of constrained optimization can’t or doesn’t happen in languages with eager evaluation? Thanks!