What's the cost of DB as a service?

Some background: I’m a solopreneur paying rent with what’s currently a Rails app on Heroku, and I’ve been about two months away from deploying a rewrite in Elixir/Phoenix for the past year. :+1:t2:

I’ve got Edeliver deploying to some DigitalOcean droplets, and I’ve been toying with some different options for where/how to host my Postgres.

Options I’ve considered:

  1. Rolling my own Postgres on another DO droplet
  2. Continuing to use Heroku Postgres (or some other Postgres as a service, Compose looks like a thing?)
  3. Just Heroku-ing the whole thing, even though I’d lose some BEAM-y goodness by doing so

I specifically wanted to test #2 (continuing to use Heroku Postgres) to see how much latency “cost” I’d be incurring by having my app servers and DBs not in the same “house”.
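Wiring that up is just pointing Ecto at the external database URL. A minimal sketch (my_app, MyApp.Repo, and the pool size are placeholders; Heroku requires SSL for connections from outside its network):

```elixir
# config/prod.exs: sketch of a DO-hosted app talking to Heroku Postgres.
use Mix.Config

config :my_app, MyApp.Repo,
  url: System.get_env("DATABASE_URL"),
  # Heroku Postgres rejects non-SSL connections from outside Heroku.
  ssl: true,
  pool_size: 10
```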

My test was a seed script that adds about 60,000 rows of test data (roughly representative of one year’s worth of data for a customer of my company). Here are the results:

DigitalOcean droplet (NYC-3) to a DB on localhost
4 minutes and 43 seconds

DigitalOcean droplet to another droplet in the same region
6 minutes and 40 seconds

DigitalOcean droplet to Heroku Postgres
51 minutes and 57 seconds :man_facepalming:t2:

I wasn’t quite prepared for the magnitude of the difference. If the cost of DBaaS convenience is a tenfold increase in latency for my users, my inclination is to invest my time in running my own database. Would love to hear what choices others have made…
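For the curious: 51 minutes 57 seconds over ~60,000 rows works out to roughly 50 ms per row, which smells like one network round trip per INSERT. A minimal sketch of a row-at-a-time seed script of that shape (the Event schema and test_rows are hypothetical), plus the insert_all batching that would presumably claw much of that time back:

```elixir
# test_rows is assumed to be a list of plain maps of column values.

# Row-at-a-time: one INSERT, and one network round trip, per row.
for attrs <- test_rows do
  MyApp.Repo.insert!(struct(MyApp.Event, attrs))
end

# Batched: ~60 round trips for 60,000 rows instead of 60,000.
test_rows
|> Enum.chunk_every(1_000)
|> Enum.each(fn chunk -> MyApp.Repo.insert_all(MyApp.Event, chunk) end)
```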

3 Likes

DigitalOcean is coming out with their own DBaaS pretty soon. Might be worth going all Heroku for now, then switching for the cost savings when it comes out. Unless you really need multiple servers from launch; in that case, have you tried AWS and their hosted DBs, or Heroku again (which runs on AWS)?

5 Likes

Very interesting! I didn’t know that DO had a DBaaS on the way! Link for anyone else interested…

7 Likes

I don’t really get DBaaS, but why not just get a dedicated server and run your own Postgres? I do that with my Rails apps, and hope to do it with my Phoenix apps too :smiley:

2 Likes

The time spent optimising your Postgres config for the host it’s on, setting up a backup system and regularly testing your recoveries, monitoring the Postgres lists for security announcements, and so on: to me that is too much hassle to save a few bucks a month. At least until you get up into the $XXXX level on a DBaaS, which most people will never hit.

8 Likes

Never had a problem with any of that, touch wood.

If you use something like CentOS (so, not bleeding edge), you’ll most likely be fine, as any security updates will be covered by yum update, and it’s known for being a solid, stable distro.

There’s a great Ruby gem called backup that makes backing up to things like AWS easy, too.

I wish more people would use dedicated servers. It’s not just the savings (dedicated servers were actually the most expensive option at one point) but the freedom, not just to grow your app but to grow your whole family of apps: adding new domains, email, and apps all on the same server :slight_smile:

4 Likes

I think for me (perhaps especially, since I’m solo), having someone ensuring best practices for me is reassuring. I’m thinking specifically about security and backups…

4 Likes

Most Linux distros are ‘safe’ out of the box :slight_smile:

A quick Google for “securing Postgres” and “securing a Linux server” should put your mind at rest (there’s not that much to do).

Of course it all depends on what you’re currently spending and whether you want the freedom of a dedicated server - balanced with the extra work.

1 Like

Yeah, I’m thinking mostly fear of borking up pg_hba.conf. :slight_smile:
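For reference, the whole file usually boils down to a handful of lines like these (purely illustrative; the database/user names and auth methods are made up for the example), and yes, one wrong line can lock you or your app out:

```
# pg_hba.conf (illustrative)
# TYPE  DATABASE  USER      ADDRESS       METHOD
local   all       postgres                peer    # local admin via Unix socket
host    myapp     myapp     10.0.0.5/32   md5     # the app server only, password auth
host    all       all       0.0.0.0/0     reject  # nothing else over the network
```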

Why not use a control panel like Webmin (or, for multiple virtual hosts, Virtualmin)? They’ll configure most things for you and make changes easy :smiley:

1 Like

Yeah, these are things that aren’t problems until they are, and when they are, I’m really happy they’re someone else’s problems. Also, if you aren’t spending time optimising your config, you’re probably leaving a lot of performance on the table. Admittedly, for most of our projects that probably doesn’t matter, but it’s nice to have.

The backup gem only does pg_dump-style backups, so if you have a crash or someone makes a mistake, you’ll almost certainly still lose data. Possibly a lot, depending on how often you take snapshots.

I used to run all my stuff on dedicated servers, but having each system split off into its own virtual server is just so much nicer for me: I don’t have to worry about a single point of failure, I can easily compartmentalise tools on the server, and so on. Now, with stuff like PaaS and DBaaS, my life is even easier. I’m not paid to be a full-time admin, so the less of it I can do, the better.

8 Likes

Just my two cents: when there’s a problem, your clients won’t care whether it’s your direct responsibility or not, so it’s going to be your problem. So the real question is: “will that subcontractor be quicker to repair failures than me, especially during the night, on a weekend, or on a national holiday?”

I don’t know about the cheaper options, but assuming they do their work fast and well, it’s great for the peace of mind indeed.

2 Likes

I would do that until DO’s DBaaS is available. I’ve handled DBs (in a devops role) and don’t want to roll my own for the reasons mentioned. I’m also waiting for DO’s DB offering, as I don’t seem to be among the lucky few to get early access :frowning:

2 Likes

My problem is dealing with my clients. Dealing with a screwed-up DB server (which was the topic) isn’t my problem.

1 Like

I’ve applied across three of my DO accounts and still no early access. Though I only got k8s early access about a week before it went GA :disappointed:

1 Like

Is it safe to say you’d only really need to start optimising once your database gets relatively large? Even MySQL has performed fine out of the box until hitting millions of records and lots of requests, in my experience.

Do hourly backups?

What do DBaaS offer here that you can’t do yourself?

That’s probably the main factor here: whether you want to be doing all that or not :lol:

I feel lucky in a sense, because years ago, if you had a site that grew relatively large, dedicated servers (and/or colocating) were the only real options. Having said that, I started off with managed servers (where the hosting provider manages the server), then moved on to unmanaged servers… and I’d recommend that path for anyone wanting to explore dedicated servers :smiley:

2 Likes

> Is it safe to say you’d only really need to start optimising once your database gets relatively large? Even MySQL has performed fine out of the box until hitting millions of records and lots of requests, in my experience.

It depends entirely on your application, but for many, yes, definitely: even when they’re small, the difference can be noticeable.
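The stock config assumes a tiny machine, so there’s low-hanging fruit from day one. The usual first-pass knobs in postgresql.conf look something like this (values purely illustrative; the right numbers depend entirely on the host):

```
# postgresql.conf (illustrative values for a small dedicated DB host)
shared_buffers = 1GB          # default is 128MB, far too small for a dedicated box
effective_cache_size = 3GB    # planner hint: roughly the RAM left to the OS cache
work_mem = 16MB               # per sort/hash operation, so it multiplies under load
```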

> Do hourly backups?
> What do DBaaS offer here that you can’t do yourself?

If you’re OK with losing up to an hour of your customers’ data, then they’re fine. If that isn’t OK, then no. DBaaS don’t do anything I can’t do myself, but they do it so I don’t have to. Setting up a separate barman server at another location so I can have point-in-time backups, for example, makes running my own just as expensive, if not more (up to a point; at the higher end, DBaaS get really pricey). Again, this is just for me and my use cases. If some data loss isn’t critical (like for a blog where updates are infrequent, or a forum where losing an hour’s posts isn’t that big a deal), then it doesn’t really matter what you do, and in those cases I tend to just stick a Postgres server on the same box.
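For anyone wondering what point-in-time backup means in practice: instead of periodic dumps, you take a base backup and then continuously archive the write-ahead log, so you can replay up to any moment before the failure. The shape of it in postgresql.conf looks like this (the cp command is the placeholder example from the Postgres docs; with barman you’d ship the WAL files to the backup host instead):

```
# postgresql.conf (sketch: enable continuous WAL archiving)
wal_level = replica     # keep enough detail in the WAL for archiving/recovery
archive_mode = on
archive_command = 'test ! -f /mnt/wal_archive/%f && cp %p /mnt/wal_archive/%f'
```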

> I feel lucky in a sense, because years ago, if you had a site that grew relatively large, dedicated servers (and/or colocating) were the only real options.

I remember those days. Having to go and swap tapes every afternoon in a freezing cold DC lol.

Even as recently as 10 years ago, I was head of tech at an ad company where I had to manage a few dozen dedicated servers. Honestly, I don’t think I’ll ever want to go back, but I do admin out of necessity, not enjoyment. If you enjoy the process then I envy you, because it would certainly make parts of my day a lot more fun if I did too. :smile:

4 Likes

For larger companies running dozens of instances across multiple teams, it gets pretty attractive. I know the backups will work. I know someone hasn’t opted our company into some PostgreSQL extension that’s cool but not ready for production-level loads. I know I can get pretty consistent monitoring across the teams. And minor upgrades are actually being done.

But if I go the DBaaS route, I’m losing flexibility and paying more. Definitely not necessary in every situation, but I’m happy it’s available.

1 Like

I actually rather enjoy the admin work. ^.^;

3 Likes

It takes all kinds! :smile:

Honestly, I really loved playing with servers, and I used to have quite the home lab: full of Linux servers (yes, I ran a Beowulf cluster on them), some Sun servers, and a few random weird bits I found on eBay. Then I took a “programming” job with Morgan Stanley, which ended up mostly being an admin job for their ETL systems and an in-house Linux IaaS project to make use of the 4,000-odd Linux servers they’d bought during peak-2000 Linux mania, in a move from mainframes to distributed (they kept the mainframes and had one or two DCs full of Linux servers going unused). Since then I’ve really not looked forward to it.

1 Like