Hello,
I’m curious about why Ecto needs explicit casting when passing arguments to a query. I faced the following situation:
amount = %Amount{value: 1, type: "spa"}
Iris.Repo
|> Ecto.Adapters.SQL.query!("insert into prospect_overview(amount) values ($1)", [amount])
The Amount struct implements the Ecto.Type behaviour.
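For context, a minimal sketch of what such a struct-backed Ecto.Type could look like (the underlying Postgres type name and the dump/load representation here are my assumptions, not the OP’s actual code):

```elixir
# Hypothetical sketch: an Amount struct that also implements Ecto.Type.
defmodule Amount do
  use Ecto.Type

  defstruct [:value, :type]

  # The custom Postgres type this maps to (name assumed)
  @impl true
  def type, do: :amount

  @impl true
  def cast(%Amount{} = amount), do: {:ok, amount}
  def cast(_), do: :error

  # Convert the struct into something the database driver understands
  @impl true
  def dump(%Amount{value: value, type: type}), do: {:ok, {value, type}}
  def dump(_), do: :error

  @impl true
  def load({value, type}), do: {:ok, %Amount{value: value, type: type}}
  def load(_), do: :error
end
```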
This example doesn’t work though; it requires me to explicitly state the type:
amount = %Amount{value: 1, type: "spa"}
Iris.Repo
|> Ecto.Adapters.SQL.query!("insert into prospect_overview(amount) values ($1)", [type(Amount, amount)])
If amount were one of the basic types (strings, integers, binaries, booleans, floats, and arrays of basic types) I wouldn’t need to cast, but for everything else I do.
While that makes total sense when using the Ecto.Query DSL, since some things need conversion (e.g. is_nil(something) becomes IS NULL in SQL), I can’t figure out why this is needed when passing params to a “string-based” query.
The struct could definitely implement some kind of protocol to request “auto casting” to the DB type; the basic types are already detectable and covered. Shouldn’t the conversion be possible automatically?
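To make the idea concrete, a hypothetical protocol along these lines could express it. Nothing like this exists in Ecto today; the protocol name and shape are purely illustrative:

```elixir
# Hypothetical: a protocol that a raw query! call could consult
# to auto-dump params. Purely illustrative, not part of Ecto.
defprotocol SqlEncodable do
  @doc "Convert a value into something the database driver understands."
  def to_db(value)
end

# Redefined here only so the snippet is self-contained
defmodule Amount do
  defstruct [:value, :type]
end

defimpl SqlEncodable, for: Amount do
  def to_db(%Amount{value: value, type: type}), do: {value, type}
end

# SqlEncodable.to_db(%Amount{value: 1, type: "spa"})
# => {1, "spa"}
```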
This isn’t actually answering your question, but why are you inserting in this manner?
You could just…
%Amount{value: 1, type: "spa"}
|> Iris.Repo.insert()
I think OP stated that %Amount{} is not necessarily an Ecto schema struct, so there is no information about which table to insert it into for Repo.insert to work.
I think this is the source of the confusion.
When I read the Ecto.Type documentation I see a way of defining a type converter for an existing type, whereas you are treating Ecto.Type as some kind of interface for the type itself to implement.
So given an existing %Amount{} type, Ecto.Type can be used to define an EctoAmount converter module. With this more general use case there is no inherent link between Amount and EctoAmount, so “autocasting” can’t happen without additional configuration facilities.
At this point type(^amount, EctoAmount) is an explicit way to specify which converter to use to convert the amount runtime data to an Ecto native type.
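In the Query DSL that explicit cast would look something like this (the table and column come from the OP’s example; the EctoAmount converter module is the assumed split described above):

```elixir
import Ecto.Query

amount = %Amount{value: 1, type: "spa"}

# type/2 tells Ecto which Ecto.Type module to use for dumping the
# interpolated runtime value before it reaches the database.
query =
  from p in "prospect_overview",
    where: p.amount == type(^amount, EctoAmount),
    select: p.id
```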
Having Amount and EctoAmount separated also means that Amount is no longer coupled to Ecto, which would generally be viewed as beneficial.
Thanks for the answers.
I don’t mind whether Amount requires Ecto or not, in my specific case at least (the type was created specifically to deal with a special Postgres type).
I’m surprised though that there is no protocol to implement which would prevent having to specify the casting at all.
Not to mention, you could easily provide the “Ecto support” at compile time based on the presence or absence of Ecto itself (or just behind a flag, really). That’s about convenience.
I think the argument is that Amount should be about the business rules surrounding the data, while EctoAmount is simply about converting to and from Ecto native types.
Yes, and it would also make your type feel less like a “second-class citizen”. It works for some native types (though not for maps) but it doesn’t work for your own types. That’s an inconsistency, from my point of view.
For what it’s worth, you could even have a package containing only the protocol, so you wouldn’t depend on all of Ecto and could still split Amount/EctoAmount.
For now, I’ll solve it by providing a function that automatically casts all the arguments in a given list.
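A sketch of such a helper, using Ecto.Type.dump/2 to convert each tagged param before handing the list to the raw query (the module name, the {value, type} tagging convention, and the pass-through for basic types are my assumptions):

```elixir
defmodule Iris.RawQuery do
  @doc """
  Dumps each `{value, ecto_type}` pair to its database representation
  and runs the string-based query with the converted params.
  """
  def query!(sql, params) do
    dumped =
      Enum.map(params, fn
        {value, type} ->
          {:ok, db_value} = Ecto.Type.dump(type, value)
          db_value

        plain ->
          # Basic types (integers, strings, ...) pass through untouched
          plain
      end)

    Ecto.Adapters.SQL.query!(Iris.Repo, sql, dumped)
  end
end
```

Usage would then look like:

```elixir
Iris.RawQuery.query!(
  "insert into prospect_overview(amount) values ($1)",
  [{%Amount{value: 1, type: "spa"}, Amount}]
)
```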
Transparently (de)serializing Ecto schemas is very possible since they are structs, which are basically maps. However, there’s the danger that Ecto could then (de)serialize a privacy-sensitive field like a password or a phone number, which you might not always want. So it opts for explicitness.
Also, your “entire Ecto” remark seems like chasing a micro-optimization (and a premature one at that). Your app definitely won’t bloat because of Ecto, and it offers a lot (like changesets and validation).
EDIT: If you need to work with arbitrary maps without promoting them to Ecto schemas you can always make a JSONB column in your PG database and add a field to an Ecto schema you have. Then you can unload pretty much anything in it.
I’m honestly not understanding what you are talking about:
- I’m not using Ecto Schemas in any of my examples
- I’m not the one who said that depending on Ecto is a downside (I said “I don’t mind if my struct requires Ecto to work”)
- I’m not talking about deserializing, if anything I’m serializing into something that an SQL query can understand
- I’m not sure what passwords have to do with this; I’m talking about a developer explicitly passing params to an SQL query (provided as a string) and Ecto being able to convert those values into things SQL can understand. So an Ecto schema wouldn’t fit at all in this context: it usually represents a table, while here we are talking about a column, if anything
Apologies if I’m getting it wrong, but I’m afraid there is a misunderstanding about what we are talking about.
Sorry for the misunderstanding. I was under the impression that you wanted Ecto to blindly accept arbitrary maps and inject them into SQL.