In Programming Phoenix, Chris writes: “you could let the database manage all the code that touches data. Through the use of layers such as stored procedures, this extreme approach will lead to excellent database integrity but is difficult to maintain and harder to code. This approach was famous just before the turn of the century, but not many people advocate using it anymore.”
I actually like this approach: it’s very flexible, it decouples the schema from your application, and you can grant only execute permission to the web app user, making things much more secure. What do you guys think?
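To illustrate the execute-only permission model, here’s a minimal sketch in PostgreSQL syntax (the table, function, and role names are all made up for the example):

```sql
-- Hypothetical table the web app must never touch directly.
CREATE TABLE users (
    id    serial PRIMARY KEY,
    email text NOT NULL UNIQUE
);

-- All writes go through a function. SECURITY DEFINER makes it run
-- with its owner's privileges, not the caller's.
CREATE FUNCTION create_user(p_email text) RETURNS integer AS $$
    INSERT INTO users (email) VALUES (p_email) RETURNING id;
$$ LANGUAGE sql SECURITY DEFINER;

-- The web app's role gets no table privileges at all,
-- only permission to call the function.
CREATE ROLE web_app LOGIN;
REVOKE ALL ON users FROM web_app;
GRANT EXECUTE ON FUNCTION create_user(text) TO web_app;
```

Even if the web app user is compromised, the attacker can only do what the functions allow, not run arbitrary `SELECT`/`UPDATE`/`DELETE` against the tables.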
I personally think stored procs are better, but it is amazing how many places do not hire DBAs (such as mine), and without a DBA, writing stored procedures well is amazingly difficult. Otherwise it is too easy to introduce security issues, inefficiencies, etc…
“Before the turn of the century”!? Gee, that makes me feel old, but yeah, I’ve worked on a couple of projects like you describe. As @OvermindDL1 pointed out, good stored procedures require a separate skill set from good Elixir/OTP coding (or insert other application language of choice). While I agree that in theory it can result in a better decoupled and more performant system, in practice it doesn’t work out well: either the team needs broader-than-usual skills, or you have a separate team (and tickets) for stored procedures, creating an organizational mismatch and management pressure to work around the slowness of the other team, which results in a more brittle system overall.
My approach these days is to remain open to the occasional stored procedure if/when profiling shows a bottleneck that can be eased by one. Prepared statements are another, middle-ground option that was popular in Java. It would be interesting to look into Ecto’s support for them and/or how hard they would be to add.
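For anyone unfamiliar with the middle ground I mean, here is what a prepared statement looks like at the SQL level (PostgreSQL syntax; the statement name and query are just illustrative):

```sql
-- Parse and plan the statement once per session.
PREPARE find_user (text) AS
    SELECT id, email FROM users WHERE email = $1;

-- Execute it repeatedly; only the parameter values are sent each
-- time, and values are bound rather than spliced into the SQL text,
-- which also rules out injection for these queries.
EXECUTE find_user('alice@example.com');
EXECUTE find_user('bob@example.com');

-- Otherwise it lives for the rest of the session.
DEALLOCATE find_user;
```

You get the plan-once/run-many efficiency of a stored procedure for a single query, without moving logic into the database.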
There has always been a need to put something between raw data and the UI; the question is always “what”. You could rely on some sort of persistence framework or other O/R mapping solution, but frameworks have proven to be more transient than stored procedures, meaning custom solutions are more future-proof, whatever they may be.
The concept is still alive and well, and the most popular modern incarnation is microservices, which essentially mirror the model of stored procedures fronting the data. Stored procedures are a more efficient way to manipulate and aggregate data when working with multiple database tables in a single transaction, because the language is optimized for data and avoids network round trips, so it’s not unreasonable to use stored procedures as a back end to web services today. It does require another language in the stack, but the days of single-language applications are long gone. There is a risk that stored procedures get over-utilized, but those decisions are left up to seasoned application architects.
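To make the round-trip argument concrete, here is a sketch using a made-up order/inventory schema: one function call replaces three statements the application would otherwise issue over the network, and since a PostgreSQL function runs inside the caller’s transaction, the changes commit or roll back together:

```sql
-- Hypothetical tables: orders, order_items, inventory.
CREATE FUNCTION place_order(p_customer integer,
                            p_product  integer,
                            p_qty      integer)
RETURNS integer AS $$
DECLARE
    v_order_id integer;
BEGIN
    -- One network round trip instead of three.
    INSERT INTO orders (customer_id) VALUES (p_customer)
        RETURNING id INTO v_order_id;

    INSERT INTO order_items (order_id, product_id, quantity)
        VALUES (v_order_id, p_product, p_qty);

    UPDATE inventory SET on_hand = on_hand - p_qty
        WHERE product_id = p_product;

    RETURN v_order_id;
END;
$$ LANGUAGE plpgsql;
```

The application just calls `SELECT place_order(42, 7, 3);` and gets the new order id back.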
That’s the long way of saying yes, it should still be a valid choice, one that is often overlooked. But the other trend today is purposely decoupling the data layer from the application, for the flexibility to swap in the next latest-and-greatest database technology. Such future-proofing automatically rules out significant stored procedure use, so it all becomes a tradeoff. Keep in mind that popular databases tend to be around for decades.
As a side note, DBAs are traditionally the equivalent of system admins who have little software development background, so I’d be a little careful assuming a DBA is needed for stored procedures. On the other hand I do see a common trend to not hire anyone with significant database experience. When no one on staff is any good at database design and optimization, you’re bound to run into tuning issues at a minimum, or significant refactoring at worst (and I’ve seen application refactoring costs in the 10 figure range).
The SQL adapters use prepared statements in all versions of Ecto to take advantage of parametrisation. In Ecto 2.0 the SQL adapters will use named prepared statements and cache information about many queries so they can be executed again as efficiently as possible. This is an implementation detail and is not exposed to the user of Ecto.
For what it’s worth, there are four major SQL databases, and most advocacy for being able to swap them out comes down to:
Oracle - Cost
SQL Server - Cost
MySQL - Data integrity, lack of features, issues
Postgres - …
There’s really no incentive or need to migrate away from PG for anything but the most niche use cases. Based on that, I’d advocate using your DB’s features when they make sense. There’s little need to keep your options open when you’ve chosen the right tool.
Hmm… sometimes choosing the right tool is difficult, and how would you know ahead of time that you chose the wrong one? You wouldn’t have chosen it!
I’m not advocating for or against Postgres. I’m currently using it (and Ecto) to reduce the barrier to entry into the Elixir ecosystem, and it’s doing a great job. Also, I love Ecto’s pipe syntax, much as I do LINQ… but I digress. I came across this article a while back about why Uber switched away from Postgres, and it was an interesting read.
In RDBMSs you do not have that much choice, and for a new option to become viable takes at least a decade, likely more. To be affected by that particular issue (write amplification on certain updates) you pretty much have to be at Uber scale. Uber switched to what is basically a custom NoSQL solution running on top of MySQL, which just reiterates the point made above:

[quote=“andre1sk, post:12, topic:1629”]
if one outgrows PG the only options (aside from sharding PG) are Cassandra and the like
[/quote]
Great discussion link! I’m not a DB expert, far from it. I was in the world of DB optimization a couple of years ago, but there was never really a good solution (they were using Oracle). It looks like that thread has mixed responses, and the issues mentioned in the Uber article are indeed known issues, so I would not see it as a black-and-white “bad design” on their part. (Though the migration thing was a bug that could have happened to any system; when first reading the article, I thought it was a bit cheeky of the Uber people to have mentioned it in the first place.)
But even if it was a bad design, that still lends credence to the point I was trying to make: you wouldn’t have known it was a bad design! I try my best, but of course years later, when I look at previous code, I go… “ugh, did I do that?!”
I’m just saying I think keeping options open is usually a good thing, and that, like most things, the verdict on SPs vs. whatever else depends on your requirements and assets: what do you have to do, what/who do you have to work with, and what is the expected timeframe/lifespan of the code.
Keeping your options open is one of those things that sounds good in theory but usually means you are committing to something else. Not committing to your DB tends to mean committing to your server code.
That’s usually how monoliths are created.
The Uber article generated a lot of discussion and their problem case is one I’d qualify as a niche case. It’s also one that could have been solved with PG with an adjustment to the design.
A database like Postgres can fundamentally alter the way you build applications because it can do so many things well. Keeping your options open with Postgres is on par with using a Ferrari just to get groceries.