Nx.LinAlg.solve basic case error?

Trying to implement a projective transform in Elixir. I ported an algorithm over from another project:

defmodule ProjectiveTransform do
  def make_transform(
    [[x1, y1], [x2, y2], [x3, y3], [x4, y4]],
    [[xx1, yy1], [xx2, yy2], [xx3, yy3], [xx4, yy4]]
  ) do
    make_transform(
      [x1, y1, x2, y2, x3, y3, x4, y4],
      [xx1, yy1, xx2, yy2, xx3, yy3, xx4, yy4])
  end
  def make_transform(
    [x1, y1, x2, y2, x3, y3, x4, y4],
    [xx1, yy1, xx2, yy2, xx3, yy3, xx4, yy4]
  ) do
    a = Nx.tensor([
      [x1, y1, 1, 0, 0, 0, 0, 0,   -xx1, 0, 0, 0],
      [x2, y2, 1, 0, 0, 0, 0, 0,   0 ,-xx2, 0, 0],
      [x3, y3, 1, 0, 0, 0, 0, 0,   0, 0 ,-xx3, 0],
      [x4, y4, 1, 0, 0, 0, 0, 0,   0, 0, 0 ,-xx4],
      [0, 0, 0, x1, y1, 1, 0, 0,   -yy1, 0, 0, 0],
      [0, 0, 0, x2, y2, 1, 0, 0,   0 ,-yy2, 0, 0],
      [0, 0, 0, x3, y3, 1, 0, 0,   0, 0 ,-yy3, 0],
      [0, 0, 0, x4, y4, 1, 0, 0,   0, 0, 0 ,-yy4],
      [0, 0, 0, 0, 0, 0, x1, y1,    -1, 0, 0, 0],
      [0, 0, 0, 0, 0, 0, x2, y2,   0  ,-1, 0, 0],
      [0, 0, 0, 0, 0, 0, x3, y3,   0, 0  ,-1, 0],
      [0, 0, 0, 0, 0, 0, x4, y4,   0, 0, 0  ,-1],
    ])
    v = Nx.tensor([0, 0, 0, 0, 0, 0, 0, 0, -1, -1, -1, -1])
    x = Nx.LinAlg.solve(a, v)
    Nx.reshape(Nx.concatenate([x[0..7], Nx.tensor([1])]), {3,3})
  end
end

Then I wrote a simple test:


defmodule ProjectiveTransformTest do
  use ExUnit.Case, async: true
  use ExUnitProperties

  alias StreamData
  import TestHelpers

  test "single hardcoded transformation case" do
    assert ProjectiveTransform.make_transform(
      [[0,0], [0,10], [10,10], [10,0]],
      [[0,0], [0,10], [10,10], [10,0]]
    ) == Nx.eye(3)
  end
end

And when I run mix test, I get the following error:


  2) test single hardcoded transformation case (ProjectiveTransformTest)
     test/object_tracking_test.exs:57
     Assertion with == failed
     code:  assert ProjectiveTransform.make_transform([[0, 0], [0, 10], ~c"\n\n", [10, 0]], [
              [0, 0],
              [0, 10],
              ~c"\n\n",
              [10, 0]
            ]) == Nx.eye(3)
     left:  #Nx.Tensor<
              f32[3][3]
              [
                [1.0000011920928955, -2.006071753157812e-7, 1.415989459019329e-6],
                [9.025153069330827e-9, 1.000001072883606, -2.960541394259053e-7],
                [8.9426798410841e-8, 5.948747983097746e-8, 1.0]
              ]
            >
     right: #Nx.Tensor<
              s64[3][3]
              [
                [1, 0, 0],
                [0, 1, 0],
                [0, 0, 1]
              ]
            >

For comparison, the exact same call in Julia:

julia> make_perspective(((0,0), (0,10), (10,10), (10,0)), ((0,0), (0,10), (10,10), (10,0)))
3×3 SMatrix{3, 3, Float64, 9} with indices SOneTo(3)×SOneTo(3):
 1.0  0.0  0.0
 0.0  1.0  0.0
 0.0  0.0  1.0

The differences look “close” to zero, but why are they on the order of 1e-7?

There are two issues at hand here. First, your equality check will always fail because you’re comparing an s64 tensor to an f32 tensor, and tensors with different types never compare equal with == in Elixir, regardless of their values.
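You can see the mismatch directly (the s64 type for Nx.eye/1 matches the right-hand side of the error output above):

```elixir
# Nx.eye/1 builds an integer tensor by default, while
# Nx.LinAlg.solve/2 always returns a float tensor, so the
# two structs can never be == even for matching values.
Nx.type(Nx.eye(3))
#=> {:s, 64}

# Casting to a common float type at least makes the types line up,
# though exact == is still fragile for floating-point results:
Nx.eye(3) |> Nx.as_type(:f32)
```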

Nx’s own unit tests use an assert_equal helper that you can copy, or, better yet, assert that the tensors are all-close.
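A minimal sketch of the all-close approach, using the public Nx.all_close/3 (the atol here is illustrative, and it assumes the ProjectiveTransform module above is compiled):

```elixir
defmodule ProjectiveTransformTest do
  use ExUnit.Case, async: true

  test "identity mapping yields (approximately) the identity matrix" do
    result =
      ProjectiveTransform.make_transform(
        [[0, 0], [0, 10], [10, 10], [10, 0]],
        [[0, 0], [0, 10], [10, 10], [10, 0]]
      )

    # Nx.all_close/3 returns a u8 scalar tensor: 1 if every element
    # is within the given tolerances, 0 otherwise.
    assert Nx.all_close(result, Nx.eye(3), atol: 1.0e-5) |> Nx.to_number() == 1
  end
end
```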

Now for the precision issue. You’re dealing with floating-point numbers, and the first difference I spot is that Julia is computing in f64, while Nx ended up in f32. That by itself is already a significant source of difference. Aside from that, Julia might be doing some rounding of the results, especially in the underlying mechanics of the linear-system solution (e.g., zero-rounding in the PLU or QR decomposition before the triangular solves).
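One way to narrow that gap is to force double precision on the Nx side. A sketch, where only the added type: :f64 options differ from the make_transform/2 code above (rows stands in for the same 12×12 coefficient list built there):

```elixir
# Passing type: :f64 when building both tensors makes
# Nx.LinAlg.solve/2 carry out the solve in 64-bit floats,
# matching the precision of the Julia version.
a = Nx.tensor(rows, type: :f64)  # rows: the 12x12 list of lists from make_transform/2
v = Nx.tensor([0, 0, 0, 0, 0, 0, 0, 0, -1, -1, -1, -1], type: :f64)
x = Nx.LinAlg.solve(a, v)
```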
