What do you need from a deployment tool?

Hm… For Phoenix, a default Ansible playbook that uses Distillery, with support for PostgreSQL and Let’s Encrypt, on top of raw Cowboy or Cowboy behind Nginx as a reverse proxy. Ansible is not the most portable thing, but it gets the job done. For other applications I don’t know enough to suggest any ideas.

2 Likes

I actually do like the direction that Michał suggested for libraries!

When working on my own deployments and researching my recent blogpost, I really learned to appreciate this approach. I think it’s a great idea to have this kind of configuration where it’s being used.

With Ecto and Phoenix now both offering callback configuration, I really hope this will be adopted as the new standard by more and more applications. I will certainly implement it for the next major release of Belt.
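
For anyone who hasn’t seen the pattern yet, here is a minimal sketch of what it looks like on the Ecto side, assuming a made-up app :my_app, a repo MyApp.Repo, and DATABASE_URL as the example variable:

```elixir
# Sketch only: a repo for a hypothetical app named :my_app (depending on
# your Ecto version, the adapter is passed here or set in config.exs).
defmodule MyApp.Repo do
  use Ecto.Repo, otp_app: :my_app

  # Runs when the repo starts; whatever is returned here is the config that
  # is actually used, so env vars are read at boot rather than compile time.
  def init(_type, config) do
    {:ok, Keyword.put(config, :url, System.get_env("DATABASE_URL"))}
  end
end
```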

1 Like

I have my own personal library of macros that call into a Cthulhu dimension at compile time for some computationally intensive tasks, which makes them useful to detect race conditions and solve the halting problem on my code. The BEAM philosophy of “let it crash” is a boon for those like me who work with unstable and flaky inter-dimensional connections, and the power of Elixir macros allows for easy coding of these zero cost abstractions to check for correctness at compile time. Actually, I had to sign an EULA with Cthulhu, which I didn’t actually read, but it’s probably ok: “Something, something debt something something countless primeval aeons something something repaid in blood”, didn’t really pay attention.

Non-determinism is a problem, though: when working with dimensions that don’t respect the linear flow of time, it’s hard to guarantee a stable order of events, which is a challenge on the BEAM. I’ve heard Javascript would probably shine in this space, but I really don’t want to depend on node.

I’d put them up on GitHub, but then I’d have to sacrifice some goats to Azatoth, or Facebook owns my code and possibly my soul (don’t know, didn’t really read the license). Whatever.

6 Likes

Call-backs are certainly flexible, but they just shift the problem onto the library user: I still have to work out how to load the config from the environment, and now I have to implement the particular call-back for every library I’m using, rather than do it centrally. It’s ‘I surrender: sort it out yourself’ :wink:

When I go to an unfamiliar application, one of the things I ask is “where’s the config?”. If it’s in a config.exs file, even with funny tuples, I can immediately see where it’s at; if it’s spread across several call-back modules I now need to hunt for those, and there’s no particular pattern to search for.

On the other hand, if I can find the call-back, at least I can be sure what config it is actually using; with config.exs there can be a disconnect between what’s configured and what’s actually used, which leaves the configuration potentially bloated and fragile.

Thus I don’t think we’re there yet with either solution.

Here’s a thought: in a Phoenix project you can do mix phx.routes and see all the routes, and the modules they are implemented in; imagine if you could do mix config and see what modules needed what config?
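
To make that idea concrete, here is a purely hypothetical sketch (the task name and output format are made up, and it only surfaces the application environment, not which module actually reads each key):

```elixir
# Hypothetical `mix config` task: prints the application env that each
# loaded application would see.
defmodule Mix.Tasks.Config do
  use Mix.Task

  @shortdoc "Prints the configuration each application sees"

  def run(_args) do
    # Compile the project, evaluate config.exs, and start the app and its
    # deps so that every application environment is populated.
    Mix.Task.run("app.start")

    for {app, _desc, _vsn} <- Application.loaded_applications(),
        env = Application.get_all_env(app),
        env != [] do
      Mix.shell().info("#{app}:\n  #{inspect(env, pretty: true)}\n")
    end
  end
end
```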

2 Likes

This is good for static configuration. And I’m totally with you that visibility is key in that regard. But there’s also dynamic configuration, like if you want to start a custom Ecto repo based on user input (imagine a DB management tool). That’s where the runtime configuration aspect comes into play. In such a case, callbacks are not shifting the problem to the library user; they actually enable this type of application.
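
A rough sketch of what that enables, with made-up module names and assuming MyTool.Repo is defined with use Ecto.Repo but not started in the application’s supervision tree:

```elixir
# Hypothetical: start an Ecto repo at runtime with connection details
# supplied by the user (e.g. from a form in a DB management tool).
defmodule MyTool.Connections do
  def connect(params) do
    # Options passed here are merged over whatever is in config.exs,
    # so the interesting parts can come entirely from user input.
    MyTool.Repo.start_link(
      hostname: params["host"],
      username: params["user"],
      password: params["password"],
      database: params["database"],
      pool_size: 2
    )
  end
end
```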

2 Likes

I fall into the category that deploy services to single machines using Ansible. If there is an official “deploy” tool I would hope/expect that it doesn’t impose any extra burden and just expose primitives that I can choose to use or not.

Certainly no dependencies, certainly no assumptions about Docker or any other VM being available, certainly no bash scripts (Windows!). It should depend only on pure Elixir/Erlang.

So far, Distillery works fine for me: it creates a release which I can then copy to the remote server. I still have to write a systemd script, but there is documentation for that and it’s simple to do. It also tries to support Windows (which I don’t care about, but good to know).
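
For reference, the unit file really is short - something along these lines, assuming the release is unpacked at /opt/my_app with an executable named my_app (paths, user, and names are placeholders):

```
[Unit]
Description=my_app release
After=network.target

[Service]
Type=simple
User=deploy
WorkingDirectory=/opt/my_app
# `foreground` keeps the VM attached so systemd can supervise and restart it
ExecStart=/opt/my_app/bin/my_app foreground
Restart=on-failure
Environment=REPLACE_OS_VARS=true

[Install]
WantedBy=multi-user.target
```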

3 Likes

Reminded me of this :smiley:

On the topic, I mostly want a configuration convention that everyone uses.

2 Likes

The Elixir build tool problem? Or something with a broader scope?

1 Like

I’m in agreement with @DianaOlympos, @talentdeficit, @wmnnd, and @bitwalker here. Creating a new “deployment tool” specific to the Elixir ecosystem seems like it’s a solution in need of a problem.

When I first started using Elixir, I often heard that deployment was an issue, and I started repeating that as if it was a mantra. The company I worked with at the time started by deploying to Heroku, and that worked well for our purposes. I moved companies, and my current company was also deploying Elixir to Heroku. As our needs grew, I learned how to build releases with Distillery. I put together a deployment pipeline using non-Elixir tools and started shipping releases from our build servers through to staging and production.

And now I question what this “deployment is an issue” comment really means.

Right now my team uses AWS CodeDeploy for deployment purposes. A release artifact (built using Distillery) is bundled with rules for CodeDeploy’s engine. The bundle is pushed to S3 from our build server and registered with CodeDeploy. Our current setup requires manual triggering of a deploy, and we currently use the CodeDeploy UI for that. We utilize the same bundle for both staging and production. Our system loads as much of its configuration as possible at runtime via environment variables.

I don’t solve deployment using Elixir focused tooling. I use generic tooling because it’s a generic problem. I could use the same tools now with Go, Java, C#, etc.

I could also build it into an AMI using Packer, seal it into a Docker image, scp it to a fleet of servers using Ansible, bundle it into a .deb…there’s a list of generic tools that solve these issues.

What I think we need is more documentation around best practices for building releases, managing runtime configuration, integrating with existing infrastructure tooling, and incorporating operating concerns into our codebases and executables. That’s what I’m picking up from this conversation and other conversations I’ve had in the community.

From what I can tell, people want/need:

  • A clearer picture of why to use releases and why running in production via mix is not best practice
  • Guidance on runtime configuration management (and particularly helping package authors write configuration-system agnostic packages)
  • More information about taking a release and running it on Ubuntu/Amazon Linux/Docker/K8s/the back of a turtle; this includes how to publish the release to the unit, write startup scripts, redirect logs to files or services, manage starting/stopping the release
  • Expanded guides on building operations logic into the codebase and releases (e.g., performing Ecto migrations without mix, and other things one normally would have made a rake task or the like for)

If someone asks you, “Now that I’ve built it, how do I launch it?” they’re looking for instructions. That doesn’t mean another tool is necessary.

12 Likes

I’m not sure that it’s a solution in need of a problem.

As an example, I have no desire to do any of what you just said to deploy my applications, hence I use Heroku and put up with the cost and other down-sides.

If there was a tool that improved either the costs or the down-sides of Heroku, I’d definitely be interested in that vs rolling my own deployment system. I can’t help but feel that there are others like me with no devops experience or desire to learn more devops who would be interested in something like this.

3 Likes

@benwalks But how would you improve the costs or down-sides that $PaaS might have with a different deploy tool? An Elixir-specific deploy tool wouldn’t administer your Linux server and manage your database services for you.

1 Like

I guess I was referring to the fact that I can git push heroku master my application up to Heroku on a whim.

2 Likes

i don’t think anyone is arguing that deployment is easy or trivial. i just don’t think an elixir native tool is going to be any easier to use than tools like ansible, terraform, chef, etc. these tools aren’t bad because they are written in non-elixir languages, they are bad because deployment is a particularly hard problem to solve in a general way

3 Likes

But in this case, git is your deploy tool. Following what @wmnnd was saying, you’re looking for a Platform-as-a-Service that is tailored to the needs of Elixir applications. Something like that won’t necessarily be helped by a new open-source Elixir deploy tool. It sounds like you want @jesse’s Gigalixir or something similar.

2 Likes

A tool that makes a basic case easy would not be a bad thing to have.

1 Like

What are the specifications of the basic case you are referring to?

1 Like

Something like deploying a Phoenix app release to a single server.

1 Like

For the git push deploy there is also Gatling: https://github.com/hashrocket/gatling - it works great on a single server. (Or the PaaS providers: Heroku, Gigalixir, or even Nanobox.)

On the overall architecture, I would say the configuration story is what threw me off and was confusing/frustrating; a single solution for the different deploy styles (and the dev env!) is where I see the biggest ROI.

Also keep an eye on something like AlloyCI - a Continuous Integration, Deployment, and Delivery coordinator written in Elixir.

2 Likes

I’m in agreement with @DavidAntaramian and @bitwalker, et al so I won’t repeat them, but I just wanted to share the most common pain points I’ve seen from customers while running gigalixir.com (I’m the founder).

We use git push gigalixir master to send code to a build server, Docker + Distillery to build a release, and Kubernetes to orchestrate the containers.

By far the most common problem I’ve seen is forgetting to put server: true in prod.exs. If there is any way to set this as the default for releases, that would save a bunch of time for a lot of newcomers.
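
For reference, the line in question sits in prod.exs and looks roughly like this (app and endpoint names are placeholders):

```elixir
# prod.exs (sketch): without `server: true`, a release boots the endpoint's
# supervision tree but never starts the HTTP server, so the site looks down.
config :my_app, MyAppWeb.Endpoint,
  url: [host: "example.com", port: 443],
  server: true
```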

The second most common pain point I’ve seen is migrations. If there is a way to build the ReleaseTasks module shown here into Distillery and add a built-in bin/app migrate command or something like it, that might save newcomers from spending a lot of time on production-specific setup.
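
For anyone following along, the pattern is roughly this - a sketch only, with :my_app and MyApp.Repo as placeholders, invoked from the release (e.g. via a Distillery custom command or eval); details vary with Ecto and Distillery versions:

```elixir
# Sketch of a release task that runs Ecto migrations without mix.
defmodule MyApp.ReleaseTasks do
  @start_apps [:crypto, :ssl, :postgrex, :ecto]

  def migrate do
    # Load the app so its config and priv dir are available, then start
    # only what Ecto needs rather than the whole supervision tree.
    Application.load(:my_app)
    Enum.each(@start_apps, &Application.ensure_all_started/1)

    {:ok, _pid} = MyApp.Repo.start_link(pool_size: 2)

    migrations = Application.app_dir(:my_app, "priv/repo/migrations")
    Ecto.Migrator.run(MyApp.Repo, migrations, :up, all: true)

    :init.stop()
  end
end
```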

Once they get migrations working, they usually want to run them automatically on each deploy. I usually send them to the Distillery boot hooks documentation, but it’s again another thing to spend time on. If this could be “built-in” somehow, that might help.

Another common problem is using System.get_env/1 in prod.exs instead of Distillery’s ${VAR_NAME} and setting REPLACE_OS_VARS=true.
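
In prod.exs terms, the difference looks roughly like this (DATABASE_URL and the app/repo names are only examples):

```elixir
# Evaluated when the release is *built*, so it captures whatever happened
# to be set on the build machine - usually not what you want:
config :my_app, MyApp.Repo, url: System.get_env("DATABASE_URL")

# Kept as a literal string in the generated sys.config and substituted at
# *boot* by Distillery when REPLACE_OS_VARS=true is set:
config :my_app, MyApp.Repo, url: "${DATABASE_URL}"
```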

Anyway, this sort of echoes what @DavidAntaramian and @bitwalker mentioned about the difference between mix and releases, but it’s based on my experience with customer issues.

6 Likes

I really, really think this should be in the default Phoenix prod.exs template. I’m not sure why it isn’t, actually…

4 Likes