I work on a system where customers are allowed to dynamically query their own data, and we use Ecto to build these dynamic queries. One thing I noticed recently is that if I have a table with an integer column and the customer tries to filter it with something like my_integer > 14.5, I get an Ecto error:
** (DBConnection.EncodeError) Postgrex expected an integer in -9223372036854775808..9223372036854775807, got #Decimal<10.0>. Please make sure the value you are passing matches the definition in your table or in your query or convert the value accordingly.
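For context, here is a minimal sketch of the kind of dynamic query that triggers this (table, column, and repo names are hypothetical, not from my real system):

```elixir
import Ecto.Query

# A user-supplied decimal filter value against an integer column.
value = Decimal.new("14.5")

query = from(t in "test_table", where: t.my_integer > ^value)

# MyApp.Repo.all(query) raises DBConnection.EncodeError, because Postgrex
# must encode ^value as the integer type the prepared statement declared.
```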
This error makes sense to me in general, but PostgreSQL does allow these kinds of filters. For example, I made a test table with some rows that contain 1 or 2 in the integer column, and I can query it in psql like this:
postgres@[local]:5432/intdec# SELECT * FROM test_table WHERE integer_col <= 1.85;
┌─────────────┬─────────────┐
│ decimal_col │ integer_col │
├─────────────┼─────────────┤
│        1.00 │           1 │
│        1.10 │           1 │
│        1.50 │           1 │
└─────────────┴─────────────┘
(3 rows)
Postgres itself is not just casting to an integer; it's actually implementing some intuitive behavior where it handles > and < differently. So I'm wondering why Postgrex does not allow this kind of query to be executed?
Looking into this a bit further, it looks like this behavior isn't the same for prepared statements. The prepared statements seem to do a simple "round" to an integer value and then filter using that integer:
postgres@[local]:5432/intdec# prepare foo as SELECT * FROM test_table WHERE integer_col <= $1;
PREPARE
Time: 0.654 ms
postgres@[local]:5432/intdec# EXECUTE foo(1.45);
┌─────────────┬─────────────┐
│ decimal_col │ integer_col │
├─────────────┼─────────────┤
│        1.00 │           1 │
│        1.10 │           1 │
│        1.50 │           1 │
└─────────────┴─────────────┘
(3 rows)
Time: 0.688 ms
postgres@[local]:5432/intdec# EXECUTE foo(1.9);
┌─────────────┬─────────────┐
│ decimal_col │ integer_col │
├─────────────┼─────────────┤
│        1.00 │           1 │
│        1.10 │           1 │
│        1.50 │           1 │
│        1.90 │           2 │
│        2.10 │           2 │
└─────────────┴─────────────┘
(5 rows)
Time: 0.466 ms
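The two behaviors can be simulated in plain Elixir (the rows list mirrors the integer_col values above; this is just a sketch of the comparison semantics, not what Postgres literally executes):

```elixir
# integer_col values from the test table above
rows = [1, 1, 1, 2, 2]

# Direct SQL: the integer is promoted to numeric and compared exactly,
# so the fractional part of the literal is preserved.
direct = Enum.filter(rows, fn i -> i <= 1.85 end)
# => [1, 1, 1]  (3 rows, as in the first psql session)

# Prepared statement: the parameter is cast to the column's integer type
# first, which rounds it, and the filter then uses that integer.
prepared = Enum.filter(rows, fn i -> i <= round(1.9) end)
# => [1, 1, 1, 2, 2]  (5 rows, as in EXECUTE foo(1.9))
```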
That seems to match what I read on this issue suggesting a better error message: Feature Request: More descriptive error message when a value cannot be encoded into the database · Issue #562 · elixir-ecto/postgrex · GitHub. Prepared statements just define a type for each argument, and then the client must send that type for that argument.
You can work around that by typecasting the parameter. Instead of passing in just the value the user provided, you could pass in dynamic(type(^value, :integer)) vs. dynamic(type(^value, :float)) depending on what you got. It means you need to know up front what values you accept, but at least you don't need to depend on the db guessing.
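A sketch of that workaround (table and column names are hypothetical; it assumes you have already parsed the user's value into an integer or a float):

```elixir
import Ecto.Query

value = 14.5

# Pick the parameter type from the runtime value, so Postgrex
# knows exactly how to encode it.
filter =
  case value do
    v when is_integer(v) -> dynamic([t], t.my_integer > type(^v, :integer))
    v when is_float(v) -> dynamic([t], t.my_integer > type(^v, :float))
  end

query = from(t in "test_table", where: ^filter)
```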
Thanks @LostKobrakai for the suggestion. Typecasting the decimal into an integer would work, but I think it can create some surprising results. For instance, my_integer < type(1.4, :integer) would not match rows with my_integer = 1, because it typecasts the decimal to 1 and then ends up checking 1 < 1 and rejecting the row.
But I think I can do type(my_integer, :decimal) < Decimal.new("1.4"), and I think that will work intuitively for equals, less than, greater than, etc. comparisons.
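In Ecto that column-side cast would look roughly like this (names hypothetical; a sketch, not tested against a real schema):

```elixir
import Ecto.Query

bound = Decimal.new("1.4")

# Cast the column, not the parameter, so the comparison happens in
# decimal space and no precision is lost from the user's value.
query = from(t in "test_table", where: type(t.my_integer, :decimal) < ^bound)
```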
I wonder if it's because Postgrex is using the Postgres binary protocol for communication, and so it's going to need to be strict about how to encode types?
That's exactly it. EXECUTE (...) is automatically casting it. Postgrex tends to be more conservative.