Hello,
I want to store structured raw input in a JSONB column. The raw input has already been validated, so it matches an embedded schema. However, it can be of several different types, and some of those types are quite complex, with deeply nested structures.
When I had only 4 different types, I had a column for each one and handled the polymorphism myself:
schema "table" do
...
embeds_one type_one, TypeOne
embeds_one type_two, TypeTwo
...
end
But now I want to add a lot more types, and I don't think it is efficient to have 30 different columns. So I would like to store the information in a single column and, when loading the record, use a type field to decide which structure to build. Something like the sketch below.
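Roughly what I have in mind (all names here are placeholders; `MyApp.PolymorphicData` is the custom `Ecto.Type` sketched at the end of this post). I've kept the discriminator inside the JSON payload itself, because a plain `Ecto.Type` only ever sees the value of its own column:

```elixir
defmodule MyApp.Record do
  use Ecto.Schema

  schema "records" do
    # A single JSONB column for every type; the payload carries a "type" key
    # so the custom type knows which struct to build.
    field :data, MyApp.PolymorphicData
  end
end
```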
I'm trying to use `Ecto.Type` to do this job, but I find myself doing a lot of work that was previously handled by Ecto:
First, I have to `@derive {Jason.Encoder, only: [...]}` in a lot of schemas. It feels wrong because I'm forced to use this protocol just to store the data one particular way; if I later want to use the same protocol for something else, for example encoding in a JSON API, I won't be able to.
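For example, with made-up fields just to show the pattern:

```elixir
defmodule MyApp.TypeOne do
  use Ecto.Schema

  # Needed only so the struct can be serialized into the JSONB column.
  @derive {Jason.Encoder, only: [:name, :options]}
  embedded_schema do
    field :name, :string
    field :options, :map
  end
end
```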
Also, I'm creating a lot of functions to load the raw JSON into the structures, basically using `cast` and `cast_embed` with all the stored fields.
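Roughly like this, for every type (fields are placeholders again):

```elixir
defmodule MyApp.TypeTwo.Details do
  use Ecto.Schema
  import Ecto.Changeset

  embedded_schema do
    field :value, :integer
  end

  def changeset(details, attrs), do: cast(details, attrs, [:value])
end

defmodule MyApp.TypeTwo do
  use Ecto.Schema
  import Ecto.Changeset

  embedded_schema do
    field :name, :string
    embeds_one :details, MyApp.TypeTwo.Details
  end

  # Hand-written loader that rebuilds the struct from the raw JSON map,
  # duplicating work Ecto already does when embeds_one has its own column.
  def load(raw) when is_map(raw) do
    %__MODULE__{}
    |> cast(raw, [:name])
    |> cast_embed(:details)
    |> apply_changes()
  end
end
```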
So, I don't know if I'm missing something, but I'd like my custom `Ecto.Type` to reuse the functionality that already exists in Ecto for `embeds_one` (`load`, `dump`, `embed_as`, etc.). Do you think that would be possible?
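For reference, this is the direction I'm exploring. It's only a sketch: it assumes `Ecto.embedded_load/3` and `Ecto.embedded_dump/2` (available in recent Ecto versions) can do the heavy lifting, and the `"type"` key inside the payload plus the module mapping are my own convention:

```elixir
defmodule MyApp.PolymorphicData do
  use Ecto.Type

  # Maps the discriminator stored inside the JSON payload to an embedded schema.
  @modules %{"type_one" => MyApp.TypeOne, "type_two" => MyApp.TypeTwo}
  @names Map.new(@modules, fn {name, mod} -> {mod, name} end)

  @impl true
  def type, do: :map

  # The input is already validated, so I just rebuild the struct from the map.
  @impl true
  def cast(%{"type" => name} = raw) when is_map_key(@modules, name) do
    {:ok, Ecto.embedded_load(@modules[name], raw, :json)}
  end

  def cast(%mod{} = struct) when is_map_key(@names, mod), do: {:ok, struct}
  def cast(_), do: :error

  # The database hands back the decoded JSONB map, so loading is the same as casting.
  @impl true
  def load(raw) when is_map(raw), do: cast(raw)

  # Dump back to a plain map and re-attach the discriminator.
  @impl true
  def dump(%mod{} = struct) when is_map_key(@names, mod) do
    {:ok, struct |> Ecto.embedded_dump(:json) |> Map.put("type", @names[mod])}
  end

  def dump(_), do: :error
end
```

I haven't battle-tested this, and I'm not sure whether `embedded_load`/`embedded_dump` are intended to be called from a custom type like this.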
Do you think this is a good solution in general terms? Any advice?