Integrating Phoenix with existing database

I’m rather new to Elixir and Phoenix, so the answer to my question might be quite obvious. I’m learning Phoenix through a guide with a brand-new project, and I have no idea how to migrate my existing Node project to Elixir. The problem arises from an existing database with around 100 tables, all with the same schema. With a new project we would need to create a migration for each and every table, which is quite hectic with my approach. Since all 100 tables in my existing database have the same schema, is there any workaround to handle them with a single migration or something similar?
I’m planning to use URL parameters in the routes to select a table, depending on the “table” the end user picks in the UI, and then execute queries on that specific table using a single method/controller. What is the best practice to follow with this approach?

I’ve done something similar with a PHP project with 100+ tables. You will not use every table at the beginning; for example, you might want to add feature by feature until you have a good workable project to deploy.

What I’ve done is create a dump of the database (table structure only, for example) and use mix ecto.load as the initial step. This can be useful for local development and testing.
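As a sketch, assuming PostgreSQL, Ecto’s default paths, and a placeholder database name `legacy_db`:

```shell
# Dump only the table structure of the existing database into the file
# that mix ecto.load reads by default (priv/repo/structure.sql).
pg_dump --schema-only --no-owner --no-privileges legacy_db > priv/repo/structure.sql

# Create the Phoenix dev database and load the dumped structure into it.
mix ecto.create
mix ecto.load
```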

Then you can create new migrations on top. Please only create migrations on the Elixir side and not in Node; otherwise you would have to keep updating the structure.sql that is used to load the initial database.

Without any context, 100 separate tables that all have the same schema and are operated on in the same way sounds a lot like one table with that same schema plus a selector column.

It’s actually time-series data, and I want the data stored in different tables per the project requirements, for the sake of simplicity and feasibility. There are quite a few complex queries to handle in the project, and I don’t want to flood a single table with a lot of data.

I’m a little bit confused by this quote… can you please elaborate on what exactly you mean by “on top”?

Sure, so you will have 2+ files inside priv/repo:

- structure.sql: the dump from the database.
- 20220715_....exs: the migration you want to perform.
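A migration file on top of the loaded structure might look like this; the module, table, and column names are placeholders:

```elixir
# Hypothetical file: priv/repo/migrations/20220715120000_add_readings_index.exs
defmodule MyApp.Repo.Migrations.AddReadingsIndex do
  use Ecto.Migration

  def change do
    # Add an index to one of the existing tables loaded from structure.sql;
    # :readings_1 and :inserted_at are assumed names, not from the thread.
    create index(:readings_1, [:inserted_at])
  end
end
```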

When you test or use your local environment, you need to run mix ecto.load and then mix ecto.migrate to apply the migrations. Don’t execute mix ecto.load in production, as it might give you some issues (it should detect that data already exists there). When you deploy a new version to production, you only need to run the migrations.
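One way to bundle that local workflow into a single command is a mix alias; a sketch for mix.exs, assuming the standard Phoenix project layout:

```elixir
# In mix.exs, inside the project module:
defp aliases do
  [
    # Local setup: create the DB, load the dumped structure, apply migrations.
    "ecto.setup": ["ecto.create", "ecto.load", "ecto.migrate"],
    # Tear down and rebuild from scratch.
    "ecto.reset": ["ecto.drop", "ecto.setup"]
  ]
end
```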

But here I still need to generate migrations for each table, so I still don’t understand how this approach is beneficial. Is there anything I’m missing or not catching on to? The migrations will be done on a specific table, and I won’t be able to perform queries on the other tables unless I generate migrations for them as well.

I think you’re confusing migrations with schema definitions. You will need to define a schema for each table to be able to query it, but you don’t need the migrations.
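Since all 100 tables share the same columns, a single Ecto schema module can even be reused for all of them by overriding the table name per query; a sketch with hypothetical module, repo, and field names:

```elixir
defmodule MyApp.Reading do
  use Ecto.Schema

  # Default source table; any table with the same columns can reuse this
  # schema via the {"table_name", Schema} tuple in a query.
  schema "readings_1" do
    field :value, :float
    timestamps()
  end
end

# e.g. in IEx, query a different table through the same schema:
import Ecto.Query
MyApp.Repo.all(from r in {"readings_42", MyApp.Reading}, where: r.value > 0)
```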

EDIT: Looks like you don’t need to set up the schemas; check out this. You can use a string to reference the table and define the fields you want to return.
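That schemaless style looks roughly like this; the repo, table naming scheme, and fields are assumptions, and a user-supplied table name should always be checked against a whitelist before it reaches the query:

```elixir
defmodule MyApp.Readings do
  import Ecto.Query

  # Schemaless query: the table is given as a string, and the fields to
  # return must be listed explicitly in the select.
  def list_rows(table_name) do
    # Assumed naming scheme; never interpolate raw user input as a table name.
    allowed = for n <- 1..100, do: "readings_#{n}"

    if table_name in allowed do
      MyApp.Repo.all(
        from r in table_name,
          select: %{id: r.id, value: r.value, inserted_at: r.inserted_at}
      )
    else
      {:error, :unknown_table}
    end
  end
end
```

This pairs well with the URL-parameter routing idea from the original question: the controller passes the param straight to a function like this, and the whitelist check keeps the dynamic table selection safe.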

Thanks a lot, the “EDIT” part is what I was looking for