Well, not precisely: saving it to an external DB means you are serializing it every time you convert it from Erlang terms to DB data and vice versa. You might be better served by storing it in an ETS table, or, depending on the complexity of your use cases and the amount of actual data, even reading the file at compile time to generate code, as is done for Unicode handling in Elixir, for example: elixir/unicode.ex at master · elixir-lang/elixir · GitHub
EDIT: now if you know these 60K rows amount to a lot of data, and you are concerned about memory usage, then you have a stronger argument to go with a DB table for sure.
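For context, the compile-time approach could look something like this minimal sketch. The module name, file path, and CSV layout are all hypothetical, not from the original question; the technique of generating one function clause per row is the same one unicode.ex uses:

```elixir
defmodule MyApp.Countries do
  # Hypothetical data file; @external_resource tells the compiler
  # to recompile this module whenever the file changes.
  @external_resource path = "priv/data/countries.csv"

  # Read the file at compile time and emit one function clause per
  # row, so lookups are plain pattern matches with no runtime table
  # or DB round-trip.
  for line <- File.stream!(path) do
    [iso, name] = line |> String.trim() |> String.split(",")

    def name_for(unquote(iso)), do: unquote(name)
  end

  def name_for(_iso), do: nil
end
```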
I’m not experienced enough to chime in on which approach is better.
However, if you decide to go with Option A and get an error mentioning undefined_table or relation xyz does not exist, then try adding a call to flush() between your table setup code and your population code.
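If it helps, here is roughly what that looks like inside an Ecto migration. The table and columns are invented for illustration; the relevant part is that Ecto queues migration commands and Ecto.Migration.flush/0 executes everything queued so far, so the table actually exists before the insert runs (note that flush/0 only works in up/down, not in change/0):

```elixir
defmodule MyApp.Repo.Migrations.CreateAndSeedCountries do
  use Ecto.Migration

  def up do
    create table(:countries) do
      add :name, :string
      add :iso_code, :string
    end

    # Without this, the create table command is still queued when
    # the INSERT below runs, producing "relation does not exist".
    flush()

    execute "INSERT INTO countries (name, iso_code) VALUES ('Brazil', 'BR')"
  end

  def down do
    drop table(:countries)
  end
end
```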
For Glific (which we run as a SaaS), we keep the table creation and updates as part of the migrations.

Any data changes we do as part of seeds. We started off using philcolumns (a hex package), but that seemed a bit cumbersome for managing multiple organizations in the SaaS, so we now write our own seeder functions and call them directly from a remote console during deployment.
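As a rough illustration of that pattern (not Glific's actual code; the module name, table, and fields are invented), a seeder might be a plain module with an idempotent run function you call by hand from the remote console:

```elixir
defmodule MyApp.Seeds.Tags do
  @moduledoc """
  Idempotent seeder, invoked manually from a remote console during
  deployment, e.g.:

      MyApp.Seeds.Tags.run(org_id)
  """

  alias MyApp.Repo

  def run(organization_id) do
    now = DateTime.utc_now() |> DateTime.truncate(:second)

    rows =
      for name <- ["Greeting", "Thank You", "Complaint"] do
        %{
          name: name,
          organization_id: organization_id,
          inserted_at: now,
          updated_at: now
        }
      end

    # on_conflict: :nothing makes reruns safe, which matters when the
    # seeder is called once per organization in a multi-tenant SaaS.
    Repo.insert_all("tags", rows, on_conflict: :nothing)
  end
end
```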