Is there any way to verify the SQL to be executed for a migration before it is actually executed? Looking at the Ecto.Migrate task documentation, --log-sql dumps the SQL that was executed for the migration, but I see no way to actually do a dry run and dump the SQL that would be executed, in order to manually verify the changes before the actual execution.
The reasoning for this would be to be able to dry-run the migration in the deployment pipeline, with a manual verification step for the SQL to be executed from the DBA before actually applying any changes and deploying the application. Something similar to what Doctrine’s --dry-run option provides.
I guess this is not available, because there are parts of the API which make this really difficult:
execute(fn -> repo().update_all("posts", set: [published: true]) end)
This code has no idea whether it runs as part of a dry run or not. And while there are means of getting the SQL for the executed code, those cannot be applied in an automated fashion.
Using the transaction of migrations is also only half a solution, because some changes to a DB cannot be done as part of a transaction (e.g. creating a new database in Postgres).
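To make that limitation concrete, here is a minimal sketch (table, column, and module names are made up) of an Ecto migration that has to opt out of the wrapping transaction, because Postgres does not allow `CREATE INDEX CONCURRENTLY` inside one. Any dry-run tool would have to special-case migrations like this, since they intentionally run outside any transaction:

```elixir
defmodule MyApp.Repo.Migrations.AddPostsUserIdIndex do
  use Ecto.Migration

  # CREATE INDEX CONCURRENTLY cannot run inside a transaction in
  # Postgres, so we disable Ecto's DDL transaction (and the
  # migration lock, which also runs in a transaction).
  @disable_ddl_transaction true
  @disable_migration_lock true

  def change do
    create index(:posts, [:user_id], concurrently: true)
  end
end
```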
What stops you from running the migration like this?
MIX_ENV=test mix ecto.migrate
And then inspect your test DB?
The intention is to be able to verify changes that would be applied to the database as a manually approved step in the pipeline, so the dry-run migration should be executed against the production database.
The manual approval is put in place to prevent lengthy or blocking migrations for a huge production DB, and have the feedback of the DBA directly in the deployment pipeline for any improvements that could be applied to the migration before actually executing it “blindly”.
I am not aware of any tool in any language I’ve worked with that offers a dry run on migrations (I have been around a lot but that’s no guarantee, of course). You’d likely have to resort to some of the DB diffing tools and try and automatically parse and/or approve the SQL diff. Not an easy feat, but not impossible either.
Would be happy to be proven wrong by somebody else around here though. A migration dry run could be very useful.
I already mentioned the one Doctrine has in the PHP world, and Flyway in the Java world also offers dry runs. I agree it is a very useful tool to have.
I was not clear enough, sorry. I used Doctrine’s tool a while ago and it struck me as only working on the DDL level. Am I remembering wrongly?
I believe @LostKobrakai’s point for batch updates as part of migrations still stands – this cannot be captured by a dry run migration tool, it seems.
Apparently, what Flyway does is set up a read-only connection to the database, assess the migrations that need to be applied, and generate a SQL file containing the statements it would have executed in a regular migration. I haven’t tried (yet) to run migrate with a read-only connection and the --log-sql option, though I wouldn’t expect this to work.
If there is no way in Ecto to do a dry-run migration, the only way I can think of to get something similar to what we expect is to run the migrate task with --log-sql against a DB replica and inspect the result, which is clearly suboptimal and might lead to sync issues between the replica and the actual production database. Any other alternatives?
Why is it suboptimal? Your desired solution seems wonky, namely this:
How is that verified?
I’d not sleep well at night knowing that my production deployment strategy includes an external tool’s assumption on how the real DB would execute a multi-step SQL script. Or am I misunderstanding, and Doctrine’s tool does in fact use the real DB in a dry-run mode somehow?
Furthermore, how would that put things out of sync? You are talking about a CI/CD step, and those checks are usually queued and don’t run in parallel. Even if somebody else merges a PR while the current check is running, that should wait until the previous check is completed, no?
Apologies for starting to question your approach but thinking of it – and with what I currently know – I’d never green-light it. I would execute the migrations on another DB entirely, make a full SQL schema diff with production and mark that diff as okay or not-okay. If it’s okay, then execute it against the real DB.
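For Postgres, that schema-diff approach could be sketched roughly like this (database names are made up, reading the connection from `DATABASE_URL` assumes your repo is configured that way, and this only captures DDL — data changes done via `execute` would not show up in a schema-only diff):

```shell
# Apply the pending migrations to a scratch database that starts
# from the same schema as production
MIX_ENV=prod DATABASE_URL=ecto://localhost/myapp_scratch mix ecto.migrate

# Dump schema-only DDL from both databases and compare
pg_dump --schema-only --no-owner myapp_scratch > scratch_schema.sql
pg_dump --schema-only --no-owner myapp_prod   > prod_schema.sql
diff -u prod_schema.sql scratch_schema.sql
```

The diff output would then be the artifact a human marks as okay or not-okay before the real migration runs.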
But maybe I am missing something.
No need to apologise, I just think you’re making many assumptions along the way, and I’m not sure if it is the reference to Doctrine that is misleading you here. Did you read the Flyway doc I linked?
It is in no way an external tool assuming how the database would treat the SQL script; it is the migration tool itself generating the SQL script it would send to the database. And an actual human, whose main job is to know the DB inside out, checking the generated SQL script to make sure a 4 TB production database is not locked for hours just because someone added a column in the wrong position or changed a column’s type without a proper non-locking strategy. You could even let the DB administrator run the SQL script manually if you have restricted access to production databases, which is not at all uncommon in big enterprises.
I guess you can understand why replicating a 4 TB production database just to test migrations is suboptimal, both from a time and a cost perspective. You are right that job execution in pipelines is usually sequential, and unless you are breaking DB isolation across multiple applications, in which case you have bigger problems, you should be safe in that regard. But maintaining a DB replica that runs and rolls back migrations, and keeping it in sync with a high-traffic, high-volume production DB, is the most suboptimal thing I can think of compared to just generating a SQL migration file from the migration tool, verifying its contents, and letting the pipeline continue and apply the migration if everything looks OK.
Nope, only read on the Doctrine tool, not Flyway. That might be the problem.
Well, you make solid points. Not sure what I can advise in that case.