I am considering using Amazon Aurora PostgreSQL Limitless Database in my next project. There will be very large amounts of data, and this solution seems to handle most of the scaling and sharding concerns out of the box.
My questions:
- Any experiences doing this in a Phoenix / Ecto based application?
- Can regular migration files be used?
- Any general thoughts on this?
Thank you.
I have not used Aurora, but I believe it is quite compatible with Postgres (last I remember it was a fork with a new storage backend - maybe different now), so you should be able to use Ecto with no issues.
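For the Ecto side, the configuration should look like any other Postgres database, since Aurora speaks the standard Postgres wire protocol. A minimal sketch - the app name, hostname, and env vars below are made-up placeholders, not anything Aurora mandates:

```elixir
# lib/my_app/repo.ex - a stock Ecto repo with the standard Postgres adapter.
defmodule MyApp.Repo do
  use Ecto.Repo,
    otp_app: :my_app,
    adapter: Ecto.Adapters.Postgres
end
```

```elixir
# config/runtime.exs - the hostname is a hypothetical Aurora cluster endpoint.
import Config

config :my_app, MyApp.Repo,
  hostname: "my-cluster.cluster-abc123.us-east-1.rds.amazonaws.com",
  database: "my_app_prod",
  username: System.fetch_env!("DB_USER"),
  password: System.fetch_env!("DB_PASS"),
  ssl: true,
  pool_size: 10
```

Regular migration files and `mix ecto.migrate` should run unchanged against a vanilla Aurora endpoint. Limitless does add its own table-creation semantics for declaring sharded vs. reference tables, though, so it is worth verifying whether those steps fit inside plain migration files.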
I do remember seeing a post on here about weird latency characteristics with Oban using Aurora - I don’t remember if they sorted it out or how big of a deal it was, but you could search for it.
In general, I personally would avoid going with a fully managed “customized” solution, as it could result in vendor lock-in, which is kind of AWS’s whole business model.
It usually costs very little when you are starting out (it might actually cost them money to provide you the service), but it ends up costing a whole lot more once you scale past a certain point, which is where they make their money back.
That being said, if you don’t plan to scale past a certain point and are not worried about possibly having to migrate to another service when they inevitably increase their pricing, you should be good.
I agree about AWS/cloud in general, but this is managed Postgres, so you can always take your backups and run. Postgres is quite possibly the only database left that we can say for certain won’t be rugpulled, which is why we all use it!
@VictorGaiva … I agree, it seems better to implement and manage sharding at the application level so I can avoid vendor lock-in. That way, I can still use regular Ecto migration commands.
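For anyone curious, here is a minimal sketch of what application-level sharding could look like with plain Ecto dynamic repositories - the shard count, hostnames, and module names are all hypothetical, and real-world routing, rebalancing, and cross-shard queries are considerably more involved:

```elixir
defmodule MyApp.Sharding do
  # Hash-based routing across a fixed set of Postgres shards using
  # Ecto's dynamic repositories. All names and hosts are placeholders.
  @num_shards 4

  # One MyApp.Repo instance per shard, each under its own name and id;
  # put these under the application supervisor.
  def child_specs do
    for n <- 0..(@num_shards - 1) do
      Supervisor.child_spec(
        {MyApp.Repo,
         name: shard_name(n),
         hostname: "shard-#{n}.db.internal",
         database: "my_app_shard_#{n}"},
        id: shard_name(n)
      )
    end
  end

  # Run a function against the shard that owns `key` (e.g. a user id).
  # :erlang.phash2/2 gives a stable bucket in 0..(@num_shards - 1).
  def on_shard(key, fun) do
    previous = MyApp.Repo.get_dynamic_repo()
    MyApp.Repo.put_dynamic_repo(shard_name(:erlang.phash2(key, @num_shards)))

    try do
      fun.()
    after
      MyApp.Repo.put_dynamic_repo(previous)
    end
  end

  defp shard_name(n), do: :"my_app_repo_shard_#{n}"
end
```

Usage would be along the lines of `MyApp.Sharding.on_shard(user_id, fn -> MyApp.Repo.get(User, user_id) end)`, and migration files stay regular Ecto migrations - you just run them once per shard.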
Besides that, there is a minimum requirement of 16 ACUs to even start using the Amazon Aurora PostgreSQL Limitless Database, which comes to around 2,200 USD / month - just to get managed sharding!
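(If my arithmetic is right, that is 16 ACU × ~730 hours ≈ 11,680 ACU-hours per month, so the quoted figure implies roughly 0.19 USD per ACU-hour, before storage and I/O charges.)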