Gigalixir: Platform-as-a-Service designed just for Elixir/Phoenix

Thanks for the thoughtful feedback, @cgraham! I get what you’re saying about using AWS, but since Gigalixir and Heroku are almost identical in price, is there a reason to choose Heroku over Gigalixir for an Elixir project?

The biggest reasons are:

a) it is reputable and has a long history/track record, so I know it is going to be there tomorrow. (It is owned by Salesforce, too, so I know it won't go bankrupt.)
b) it already has a bunch more functionality in other areas (databases, Redis, analytics, etc…) and a bunch of partners for integrations
c) it has a free and hobby tier for me to get started.
d) I am used to it. I have an account. I know how to manage it, etc…

What I'd gain from switching to Gigalixir is nice-to-haves, but not so critical that I would switch. Maybe for a new project? But then I would want to start with something smaller than $25/mo.

Hope that helps!

3 Likes

Thanks, @cgraham! Very helpful.

There is an interesting post on HN today, The Cost of Doing Data Science on Laptops, about the benefits of the cloud.

Sometimes it is not practical for developers and other users to do stuff on their local machines. @OvermindDL1, you've acquired a lot of experience managing things yourself, but some companies would rather depend on a PaaS than on scripts and systems set up by their own staff, whom they are not ready to manage or even keep for the long term. Unless the costs warrant managing their own rented or colo'd servers, they prefer a PaaS.

Having someone else to blame when things go wrong is also an advantage if you are not so sure about managing stuff yourself.

I am surprised at how many sysadmin and developer jobs currently advertised require Amazon, Azure, etc. skills, given that I was an early beta tester of Amazon back in 2006 and didn't think much of it. It shows how much things have changed.

1 Like

I get that the price is identical to Heroku, but I'm not sure that this is the right market. Elixirists don't want to pay $25 a month for 0.5GB of RAM and only half a CPU. Our apps are made for multiple cores. Offering half a core is like a slap in the face.

Right now I can spin up a DigitalOcean or AWS instance with 2GB RAM, 2 CPU cores, and a 40GB SSD for $20 a month. I can get the same PaaS security benefits by spinning up a predefined secure environment using Terraform or something like it. At those specs on Gigalixir it'd cost me $100 a month, and I wouldn't get the freedom to make changes or update the VM my services are running on. So for $100 a month, I can have 5 AWS/DigitalOcean instances running and have 5 times the availability.

Oops, I think I said services there. I may be wrong, but with Heroku and Gigalixir aren't you tied to one service at those specs?

Edit: I also don’t know of a single developer that uses Heroku for anything other than a test bed. It’s just too expensive for the little bit that you get.

1 Like

*raises hand*

I'm only one guy running the tech side of a startup; I can get as technical as I need to, but I really don't like spending my time on the devops side. Yeah, Heroku's overpriced if you heavily discount the value of your time or you're really strapped for cash, but otherwise it's a great compromise.

I'm going to be giving this a shot in my staging environment, but one thing about the pricing is that Heroku is actually cheaper once you get into the Performance-L dynos ($500 for 14.5GB + 46x compute share on a dedicated machine, vs 10GB + 10x CPU share). I do question whether the compute share is apples to apples, but more importantly, is there an option at which you get a dedicated machine to run on with Gigalixir? Just anecdotally, I've seen pretty significant performance differences between shared and dedicated.

@jesse Have you considered a model where you provide the configuration around the customer’s own google cloud account instead of billing through your own?

1 Like

Your points are well received, @jeramyRR. Heroku and Gigalixir are definitely more expensive, that is for sure, but then again, so are DigitalOcean and AWS compared to bare metal. The question is whether the extra cost is worth the time and effort you save, and that'll depend on each person's situation. I've used Heroku, AWS, and bare metal in the past and can say for sure that there are situations where each makes sense even outside of test beds.

Gigalixir offers less than 1 core at the low end, which may be enough for some apps, but you can have more if you like. I'd hate to force people onto multiple cores when they don't need them.

There seems to be a lot of confusion: a vCPU, or whatever term AWS uses, is not a real CPU core; it's a hyper-thread. So AWS can sell 4 "cores" backed by only 2 real CPU cores.

1 Like

It can be hit or miss with AWS. Sometimes when you get a 2 vCPU VM you'll get 1 core with 2 threads, and other times 2 cores with 1 thread each. It's very odd how they determine what you're going to get.

1 Like

@karmajunkie, so far we’ve tried to keep things simple and homogeneous, including pricing, but we’re definitely working on 1) adding dedicated machines and 2) different tiers of pricing.

In the meantime, I can probably get you what you need manually. I can put a permanent percentage discount on your account if you plan on using a lot of resources. Say 30%, to bring a 14G instance down from $700/mo to $490/mo. I can also manually add some Kubernetes taints to dedicate nodes for your pods. Email me at jesse@gigalixir.com and we can coordinate a little. This also applies to anyone else who might be interested during the beta.

I’m also working on some benchmarking since it’s so hard to decode the difference between CPUs, cores, vCPUs, CPU shares, etc.

We haven’t considered an “enterprise” version of Gigalixir which runs on the customer’s own infrastructure, but if there is demand for it, we’ll definitely consider it. Is that something you’d be keen on buying?

Okay, so I am using Heroku for a lot of Elixir/Phoenix stuff. We also use it for Ruby; it is the default platform for both at the moment in my company.

The reason behind using Heroku is that deployment is simple, and we do not have to care much about maintenance. We are developers; we don't want to install and configure security updates or fiddle around with firewalls.

We also do deployment configuration with Ansible/Capistrano/EC2, etc. for some clients. Usually, at a certain point in a project's life, we choose to migrate from Heroku because it either gets too expensive or too limiting (e.g. no filesystem access). Then we choose EC2 or other services and set up the deployment architecture, which is quite a lot of work, to be fair.

Heroku allows us to skip all the work we'd have to do with, say, Capistrano or Ansible, and focus on building the application. We don't have to write special code to compile assets, run migrations, etc.; this happens automatically, triggered by our CI server.

So, I am having a look at your solution, and it would indeed sort out some problems we have. Heroku shuts down/restarts dynos as it pleases, the storage gets cleared out, etc., and we can't do clustering at all (unless on the Enterprise solution, I think).

But if we do have to mess around with distillery/exrm, write special code to run migrations, etc., that is actually more work than we would want to do. Ideally, our CI server would deliver code with a git push, and your service would take care of building a release, including assets, updating the system, and running migrations. I don't care much about hot code swapping either.
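For readers wondering what that "special code to run migrations" looks like: inside a compiled distillery/exrm release, `mix` is not available, so the common workaround is a small task module invoked from the release. This is a hedged sketch of that well-known pattern; `MyApp`, `:my_app`, and `MyApp.Repo` are placeholder names, and it assumes an Ecto repo:

```elixir
# Sketch of the custom migration task a distillery/exrm release needs,
# because `mix ecto.migrate` cannot be run inside a release.
# Module, app, and repo names below are placeholders.
defmodule MyApp.ReleaseTasks do
  @app :my_app

  def migrate do
    # Load the app config and start just enough for Ecto to connect.
    :ok = Application.load(@app)
    {:ok, _} = Application.ensure_all_started(:ecto)
    {:ok, _} = MyApp.Repo.start_link(pool_size: 1)

    # Run any pending migrations bundled with the release.
    path = Application.app_dir(@app, "priv/repo/migrations")
    Ecto.Migrator.run(MyApp.Repo, path, :up, all: true)

    :init.stop()
  end
end
```

It would typically be wired to a release command so that CI can invoke it after deploy, which is exactly the boilerplate Heroku's build pipeline lets you avoid.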

So, if I think about the ideal platform for hosting Elixir/Phoenix, it is Heroku with dynos that we can cluster, name, and assign IPs to, plus persistent storage we can use.

3 Likes

Thanks for the detailed feedback, @hubertlepicki! I’ll start working on a deployment option for Gigalixir that doesn’t require distillery and allows access to mix commands like mix ecto.migrate.

I’ll also investigate adding a persistent storage option. Would it be sufficient if all your replicas shared the same storage (say over NFS) or do you need every replica to have its own separate storage? The latter makes things significantly more complicated, but is possible.

Just to be clear, it sounds like Gigalixir’s current clustering solution works for you? Or do you need control over the node name and node ip address?
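If you do need that control, one option (an assumption on my part: this uses the third-party libcluster library, not Gigalixir's built-in clustering) is to make node discovery and naming explicit in config; the selector and basename values below are placeholders:

```elixir
# config/prod.exs — hedged example using the `libcluster` library.
# "app=myapp" and "myapp" are placeholder values.
config :libcluster,
  topologies: [
    k8s: [
      # Discover peer pods via the Kubernetes API using a label selector;
      # node names take the form myapp@<pod-ip>.
      strategy: Cluster.Strategy.Kubernetes,
      config: [
        kubernetes_selector: "app=myapp",
        kubernetes_node_basename: "myapp"
      ]
    ]
  ]
```

With this approach, the node basename and the discovery mechanism are under your control rather than the platform's.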

@hubertlepicki, one other thing I should mention is that if you're starting a new project, the gigalixir-getting-started repo already has distillery, migrations, clustering, etc. set up for you, so you won't have to do any messing around. Hopefully you'll consider Gigalixir for your next new project?

I don't think you should be making decisions or adding features based on one person's needs, and I'm just one person. I think you should figure out from the feedback what's best for your business. I am just sharing my point of view; I may well be alone ;).

Thank you for the link, it looks pretty cool :)

1 Like

The real value prop for this at the moment would be the ability to deploy it within a known ecosystem, IMO. One of the perks of Heroku is that it's in AWS's east region, so I can easily provision AWS services and use them with my Heroku instance in-network.

Offer a service like this that works in Google Cloud, AWS, etc., and suddenly your value proposition goes from "your services minus everything that those guys have" to "everything those guys have plus your services".

I've got a couple thousand dollars' worth of DigitalOcean credit, and I'm mostly sitting on it because I don't want to take the time to manage everything else right now, just as an example.

1 Like

@brightball, great point. Gigalixir is actually running in Google Cloud's us-central1 region in the same way that Heroku is running in AWS's east region, so if you provision anything in Google's us-central1 region, it'll be on the same low-latency network.

1 Like

Good to know. You should definitely make that clear in your advertising.

2 Likes

That is as opposite to a perk as one can imagine :)

2 Likes

Fair point

I was looking through your documentation and found a reference to staging environments. Do you have any advice/tutorials on setting up a multinode development environment? I have started working on a project generator that combines Docker and Elixir (in fact, it is not necessary to even have Elixir installed on the host machine), and I would like to make sure that it is as close to a production setup as possible. See the getting-started guide here: https://hexdocs.pm/tokumei/
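As a minimal sketch of the multinode idea (independent of Docker, and assuming you've started two shells yourself, e.g. `iex --sname a -S mix` and `iex --sname b`; the hostname part of the node name depends on your machine), two named nodes can be wired together by hand:

```elixir
# Hedged sketch of manual local clustering, run inside node "b".
# "a@myhost" is a placeholder: use node a's actual short name,
# which `Node.self/0` prints inside node a's IEx session.
Node.connect(:"a@myhost")

# Once connected, each node sees the other; this is the step that
# platform clustering (or a library like libcluster) automates.
IO.inspect(Node.list())
```

A dev environment that scripts this for several containers would mirror a clustered production setup fairly closely.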

Perhaps in the longer term the generator might even include a Gigalixir option, so that I could set up a project with a local dev environment all ready to deploy to production there. Possibly getting ahead of myself :)