I hesitated to post this here because I don’t want you to think I’m spamming, but I’ve been working on a Platform-as-a-Service designed just for Elixir/Phoenix and I’m looking for some beta testers and some honest feedback, and I thought this was the best place to look.
I call it Gigalixir, and it’s very much like Heroku except it supports hot upgrades, node clustering, and remote observer. It also does not restart instances every 24 hours or limit you to 50 concurrent connections per instance.
If you’re willing to give me honest feedback, I’ll happily throw $75 on your account. You can find more information about it at www.gigalixir.com and the quick start is at gigalixir.readthedocs.io.
Please let me know if this post is considered inappropriate and I’ll remove it. If you know of a better place to look for feedback, I’d really appreciate it!
Adding locations other than the US is on our long-term roadmap, but exactly where and when we go will depend on what customers are asking for. If you need a ton of instances, let me know and I’ll see what I can do to help you.
Providing databases-as-a-service is also on our roadmap, but I’m not sure when it’ll happen. For more information about how to use a third party database-as-a-service, take a look at the last paragraph of How to Connect a Database.
In the meantime, however, I can manually set up a Postgres instance for you on Google Cloud SQL and include the cost on your Gigalixir invoice. Just let me know once you’ve created an account and what disk size, vCPU, and memory you’d like. I won’t charge you more than what Google charges. Keep in mind, Google Cloud SQL Postgres is still in beta, but then, so is Gigalixir =)
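To give a concrete idea, connecting your app to an external Postgres like that usually just means pointing your Ecto repo at a connection URL. Here’s a rough sketch; the repo and app names are placeholders, and the exact options depend on your Ecto version:

```elixir
# config/prod.exs — a minimal sketch; MyApp / :my_app are placeholder names.
use Mix.Config

config :my_app, MyApp.Repo,
  adapter: Ecto.Adapters.Postgres,
  # e.g. "ecto://user:pass@<cloud-sql-ip>/my_app_prod", supplied via the environment
  url: System.get_env("DATABASE_URL"),
  ssl: true,
  pool_size: 10
```

The connection string itself would live in your app’s configuration/secrets rather than in your repo.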
Thanks. I’m not keen on the idea of a remote database as I can’t imagine response times are great, even when the database is in the same country/area. But I’d be interested to hear others’ views on that.
It’s very unlikely I’ll be needing many instances of anything anytime soon, and my current project’s requirements are extremely modest. But I’ll keep an eye out for updates. All the best with the service!
I’m curious, my newest server (got it about 5 years ago now, I need to get yet another new one someday, I have like 4… >.>) is 64 gigs of RAM, 24 physical cores, 1 terabit unmetered connection with multiple dedicated IPs on both IPv4 and IPv6, for about $280/month (I have a few extras like backup mirroring and such that I’ve never, thankfully, needed to use, in addition to my local routine backups). Am I sorely misreading the price on the website, or is it more expensive by orders of magnitude?
And what’s the price for your server if you want to start another instance for 10 minutes for a one off batch job?
I haven’t read enough about Gigalixir’s offering to have an opinion on whether their pricing is fair, but this seems to be a completely different product than just a dedicated server.
Not a thing, I can host as many containers as I want on that. Admittedly most of the time I’m using <8 gigs of RAM with the CPU barely flickering… All my containers are well containerized as well, so I can easily split them off and spool them up on one of my other servers if I wished (Docker). I think my next big server will probably run some variant of illumos and take down my 2-3 old servers at the same time (my oldest is 17 years old… >.>).
I’m still not sure what this devops cost is that so many allude to, managing a server is pretty simple.
As for operational risk, well my servers have better uptime than Amazon’s services so…
Really though, my containers are easy to swap around my servers if something starts to happen: I just ssh in, run a couple of commands that I have scripted up, a new container spools up with redirects set up until DNS propagates, then done.
Huh?
Overall I’ve always run ‘baremetal’: FreeBSD with lots of jails in the old days, Docker on Debian and Red Hat (work) nowadays, and the effort involved has always been minimal. If hardware breaks, my contracts get that fixed (image rollover to a new machine in about a minute, then back when the hardware is repaired, though everything is hot-swap now so that hasn’t been tested in probably a decade…). If software breaks, well, I made it 90% of the time, so I know what happened and can generally fix it in seconds. My big server has had no downtime in its 5 years of life except for purposeful, announced-ahead-of-time kernel updates during dead hours, which take all of a minute to complete (clients get 500 maintenance errors telling them to retry in a minute); otherwise software updates are done via rolling restarts where the new system comes up while the old is running, then I swap the socket over. It is all quite simple to do, and considering my income from this is very little (not enough to live on at all, hence the day job; I host a lot of open source and community sites for free…), something like Heroku or Amazon seems like outright extortion to me (especially considering that Amazon’s engineers have stated that their IT services like PaaS bring in more profit than their shipping business, so it does not sound cheap).
@OvermindDL1 it is more expensive than bare metal and more expensive than an infrastructure-as-a-service like AWS, but we think the amount of time it saves you is worth much more than what you pay. You just git push gigalixir and forget about:
Installing Kubernetes in high availability mode with multiple masters in a multi-zone configuration with autoscaling.
Configuring ingresses, services, deployments, replica sets, pods, volumes, secrets, etc.
Writing Dockerfiles to compile your code, build assets, build a release, and run your app.
Managing app configuration and secrets.
Tracking releases.
Rollbacks.
How to cluster nodes together: ensuring containers have network access to one another, can discover each other, and reconnect when networks partition (see the sketch after this list).
How to perform hot upgrades as easily as doing a rolling restart.
How to connect a remote observer to a production node with SSH tunnels and iptables.
Running one-off or batch jobs like database migrations.
TLS certificates
Aggregating logs and tailing them.
Administration/Maintenance
Stability/Reliability
Capacity/Elasticity: ensuring you always have another machine when you need it
Security
Websockets: they don’t always work out of the box e.g. behind a load balancer
Usability: making all the above easy to use
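To give a flavor of what just one of those bullets means if you do it yourself, here is a minimal sketch of node clustering with the libcluster library; the app and module names are placeholders, the exact wiring depends on the libcluster version, and on Kubernetes you would typically swap the Gossip strategy for the Kubernetes one:

```elixir
# A rough sketch of DIY node clustering with the :libcluster dependency.
# MyApp, :my_app, and MyApp.ClusterSupervisor are placeholder names.

# config/prod.exs
use Mix.Config

config :libcluster,
  topologies: [
    example: [
      # Gossip-based discovery on a flat network; Cluster.Strategy.Kubernetes
      # is the usual choice when pods discover each other via the API server.
      strategy: Cluster.Strategy.Gossip
    ]
  ]

# lib/my_app/application.ex
defmodule MyApp.Application do
  use Application

  def start(_type, _args) do
    topologies = Application.get_env(:libcluster, :topologies, [])

    children = [
      # Supervises the strategy that discovers, connects, and reconnects nodes.
      {Cluster.Supervisor, [topologies, [name: MyApp.ClusterSupervisor]]}
      # ... your Repo, Endpoint, etc.
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```

And that only covers discovery and reconnection; you still have to make sure the nodes can actually reach each other’s distribution ports on the network.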
It took us 6 months to put all the above together and we have a ton of experience running apps in production. At my last company, we went from Heroku to AWS to bare metal to try and save money, but the amount of effort it took to rebuild everything we took for granted at Heroku was enormous.
If you are running 100,000 machines, then it’s probably worth the developer time to migrate off PaaS, but for most smaller companies, it makes sense to outsource the devops so you can focus on building your app and providing unique value to your customers.
@outlog every 1GB of memory comes with 1 CPU share. 1 CPU share is “200m” as defined using Kubernetes CPU requests, or roughly 20% of a Google Compute Engine core guaranteed. If you are on a machine with other containers that don’t use much CPU, you can use as much CPU as you like. For more information see Replica Sizing.
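To make that concrete: a replica sized at 4GB of memory gets 4 CPU shares, i.e. 800m, or roughly 80% of a Compute Engine core guaranteed, and potentially more when neighboring containers are idle.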
Eh, wrote them once a long time ago, haven’t touched them since, and heck, Elixir’s distillery/exrm can generate them for you if you want fresh ones.
Yeeeah I don’t trust most things to do this right, since almost none of them do, I’m anal about going over every single option… >.>
Already versioned by OTP’s release system.
Ditto, you can start up an older one just as easily as the latest except with an extra argument.
For my erlang servers that is all I do, single command.
I do run a lot of non-erlang stuff though, I just roll them.
Also a single ssh command (and I have a lot of aliases to reduce typing as well).
Automatically managed. ^.^
I use a linux package for that, forgot what it is called, it works well; I haven’t touched it in a decade except to add new log file locations as I install new things (since everything likes to do log files differently from everything else… >.>).
I’ve never hated doing that, it is fun! I constantly run out of tasks to do on my servers. ^.^
Well my uptime is better than Amazon’s so far, so… ^.^
I way way overbudget my hardware (hardware’s cheap, services are not), never ever needed this, and it ends up much cheaper this way too.
I try to only use software I trust; for the few that I do not, I provision those stupid things elsewhere and let them run remotely in a read-only setup (*cough*Jenkins*cough*, I really really hate that thing, plus the PHP forums I host for people that refuse to move over to Discourse).
I’ve never had issues, and I use a lot of websockets on some of the sites.
All simple commands over ssh.
I very, very much hate web GUIs for all that; they just make it harder to get stuff done, and even doing something as simple as restarting a service involves extra clicks to get to the right area, moving the mouse, hitting a button, praying it is working since you can’t see a real-time log, etc… Whereas I just type something like sshcmd blahserver 'cservice roll frontend-nginx' or fully log in, which takes 0-2 seconds to type and hit enter, then I watch the output log streaming by to verify it works. ^.^
I don’t even know what is gained with Heroku. At my current job and my last job they are both crazy VM-heavy via VMware. They have old servers and new servers, hard drive swaps don’t take anything down, and there are generator/battery backups for 48 hours with on-site fuel for both. If a hardware server goes down, VMware auto-migrates its VMs until they can fix it (snoopy needs to be replaced already…). My last job had the 200k+ machines, but my current one has probably 2000 machines at most; the IT department is 3 people (with one retiring next month) with a server room that is just a very cold back room of 6 racks with a couple dozen servers in them blinking away. Something like Heroku would bankrupt this place (they’ve checked).
@OvermindDL1 my point isn’t that PaaSes are the only right way, but that for some, it makes more sense than doing things yourself. As a sort of silly example, I’d rather buy my coffee at Starbucks than try and save money by growing my own beans, roasting them, grinding them, etc. But for some, especially those with an interest in it, growing your own beans might indeed make sense.
All good, I’m genuinely curious though (else I’d have never poked up in this thread) what the cases are that someone would pick something like Amazon for.
Every way I’ve been able to calculate it, it has always come out substantially more expensive than the alternatives, not just a little more expensive but orders of magnitude more. I am really quite curious what they’d be used for other than by people who do not know how to manage the stuff they are running. I could see, say, a restaurant hosting its stuff (address, menu, etc…) this way, since it is tiny traffic and they have no knowledge of the problem domain whatsoever and so would not even know to look for better alternatives (though one of the cheaper hosts at, say, $5/month is all they need), but if someone actually builds the software they are running, then it seems odd to me…
Great to see this happening. Good luck with your beta; I’ll definitely check it out when I’ve a bit of free time.
@OvermindDL1 There’s an opportunity cost for someone who doesn’t know how to manage this stuff as well as you do with your years of experience. Is it better to spend time trying to get to your level, or pay someone else to handle that and spend your time doing what you already know? If you can make your app generate e.g. 2x profit in that time and the numbers work out, then paying someone else to manage the hosting is an overall win.
(There is a lot of tacit knowledge that it’s easy to forget you know re ops IMO.)
You make many valid points, but 100,000 machines is a very poetic exaggeration, as is the necessity for the majority of projects to run k8s. I think the major gripe of many people is that for many projects, with “cloud services”:
a) the cost is huge
b) the performance is abysmal
c) the uptime is bad
This is nothing specific to your service, although by running on top of GCP, clients are exposed to the combination of GCP f#$ups and your team’s mistakes on the risk side.
The major difference is whether that someone else has a name and you can call that person(s); that’s one type of situation. If it’s AWS or the like and you are not a client that moves the needle for them the way Netflix does, have fun trying to resolve issues when s##t hits the fan.
@OvermindDL1 in addition to what @chrismcg said, AWS is useful if you need a lot of elasticity. At one company I worked for, we ran a continuous integration farm on AWS which would scale from thousands of machines during the day down to 0 at night.
Another possibility: some companies that experience hyper-growth, like, say, Instagram, might have a hard time negotiating datacenter contracts, buying machines, installing them, validating them, etc. quickly enough to keep up with product and feature growth.
There is also the ecosystem of services to consider. Even if you don’t find value in running EC2 instances, you might find value in spinning up a global load balancer in front of your instance or attaching an elastic block storage device to your instance with a click of a button.
So I spent some time looking at the service as a potential user. (I currently use Heroku for an Elixir/Phoenix project.)
First I love the fact you are looking into this - even though my feedback below may not be what you want to hear. Keep plugging away and trying to find a solution! The Elixir community could use more ecosystem additions!
Now here’s my feedback:
I might have considered it if the price were significantly lower, but I think the things added (hot upgrades, clustering) are nice-to-haves at the small scale I’m running at.
If/when my projects scale up (which would be where I really want the additional functionality), I am going to want to use AWS (or Google Cloud) to keep costs down and have easy access to their additional functionality (like analytics, multiple dbs, cloudfront, extra configuration capabilities, elasticity, etc…). In that scenario, if I did not do the dev ops myself, I probably would prefer a part-time dev ops who was familiar with our code base and who could monitor everything and just “take care of problems” as they occurred. (and yes, if I wanted to keep costs even lower I would buy my own racks/hardware but that is a totally different discussion).
I get that your solution may be less expensive than hiring a part-time dev ops person, but at this point at least it does not give me peace of mind that things will be handled seamlessly without me needing to monitor it if something goes wrong. If I have to monitor and sleuth anyway (say because my code sucks), I’d prefer to set it up myself so I at least know what is happening, and certainly so I can save the monthly $. And if Erlang/OTP is smart enough to fix most problems, it feels like I should save the $ and set it up myself vs paying monthly for a service to do those initial configurations.
Of course, I could have also totally missed the value proposition, and if so, my apologies.
Btw, what might be interesting to me is an Amazon Lambda-style service for Elixir that is consistently fast (Lambda itself is inconsistently slow). But honestly that is just me thinking it would be cool. I don’t exactly have a strong immediate need for it.