Releases, of course, are not a new thing at all, but I’ve only used them in one prior project, which used distillery and edeliver to deploy to a VPS. At work we use Heroku, and the buildpack and guides for it have always focused on mix-based deployment. I’m actually contemplating writing a new buildpack that would use releases, and migrating to that. The other strategy available on Heroku is Docker images, which would have the advantage of being more portable, but still doesn’t support features we really like, such as PR apps.
As far as I know you can pass all VM options via ERL_FLAGS, ERL_AFLAGS, or ERL_ZFLAGS.
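As a sketch (the flag values here are illustrative, not recommendations):

```shell
# ERL_AFLAGS is prepended to the emulator's arguments, ERL_FLAGS goes in the
# middle, and ERL_ZFLAGS is appended last (so it wins on conflicting flags).
export ERL_ZFLAGS="+sbwt none +sbwtdcpu none"
mix phx.server
```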
I see the benefit in traditional server deployments, but in the context of a dockerized application we achieve the same thing (as do traditional Heroku buildpack slugs): at the end of the build you have a single artifact to deploy and manage. Yes, your source code is included, and that may be a factor for ISVs doing on-premises deployments, but this is not relevant to me or my projects. In any case, BEAM bytecode is not obfuscated and can be decompiled to Erlang quite easily.
I may be missing something of significance here because I don’t use umbrella projects, but I think I can supply different configuration to each application in an umbrella project. Maybe someone can provide an example that sheds some light on this?
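For what it’s worth, this is the kind of per-app configuration I mean — a sketch with made-up app names, assuming a standard umbrella layout where config lives at the root:

```elixir
# config/config.exs at the umbrella root. :app_a and :app_b are hypothetical
# OTP apps under apps/; each one reads only its own keys at runtime via
# Application.fetch_env!/2.
import Config

config :app_a, http_port: 4000
config :app_b, worker_pool_size: 10
```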
Again this is very relevant in traditional server deployments, but in containers it works fine to just use mix commands (and they are the same commands you use in dev!); the container itself is the entry point that will be started/stopped/restarted.
Does this actually work? It just crashes for me. It was my understanding that embedded mode requires a list of the beam files the application actually needs, so that it can eagerly load them. Part of what a release does is compile this list.
Sort of. We build and run in containers. The build container is enormous, since it requires a working Erlang / Elixir installation plus all the dependencies you need to build those. The run container is an incredibly minimal Linux image with just the release on it. This difference is even more stark for applications that contain various NIFs and need the build tools for those (e.g. a Rust toolchain).
If you use releases, you have a much smaller image overall, and you’re basically guaranteed that the only layer to change per version is just the release itself.
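A sketch of what that two-stage setup can look like — base images, packages, and the my_app release name are all illustrative, and the runtime image must provide the shared libraries (openssl, ncurses) that the ERTS bundled in the release was built against:

```dockerfile
# Build stage: needs the full Erlang/Elixir toolchain.
FROM elixir:1.14 AS build
WORKDIR /app
ENV MIX_ENV=prod
RUN mix local.hex --force && mix local.rebar --force
COPY mix.exs mix.lock ./
RUN mix deps.get --only prod
COPY . .
RUN mix release

# Run stage: a minimal image carrying nothing but the release.
FROM debian:bullseye-slim
RUN apt-get update \
    && apt-get install -y --no-install-recommends openssl libncurses6 \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --from=build /app/_build/prod/rel/my_app ./
CMD ["bin/my_app", "start"]
```

Because only the last COPY layer changes between versions, pushes and pulls of new versions stay small.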
Really, though, it’s a cost-benefit question. If you don’t use releases you still need to manually re-implement (without mistakes!) embedded mode, set up a remsh, optimize your Docker images for building as well as execution, etc. You can get rid of all that headache by just calling mix release.
Yes, it seems to work fine locally for me in MIX_ENV=prod mode, but I admit I don’t use it in production. It does indeed crash in MIX_ENV=dev. Perhaps it’s not actually running in embedded mode when there is no manifest? If that’s the case it’s definitely an advantage of using releases, though while I’ve never observed latency issues on restarts, I haven’t tried to measure them either.
This is a really good point and is very compelling.
At Aircloak, we have containerized deployments, and we still use OTP releases. Besides what @benwilson512 said, there are some other benefits, such as getting a remote iex shell, or the ability to execute custom commands inside the running system. Polite system termination is also supported out of the box.
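Concretely, for a mix release those conveniences come from the generated start script — a sketch assuming a release named my_app (MyApp.Release.migrate/0 is a hypothetical helper module):

```shell
bin/my_app remote                           # attach a remote IEx shell to the running node
bin/my_app rpc "IO.puts(:hello)"            # evaluate an expression inside the running system
bin/my_app stop                             # polite shutdown: applications stop in reverse order
bin/my_app eval "MyApp.Release.migrate()"   # run code in a fresh VM without starting the app
```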
That’s what I planned to do when I deploy my first Phoenix app, but how do you deal with testing that released image? If it’s a super minimal image with nothing but the release, that means you can’t run any test suites against it, right?
In other words, if you build that release image on a CI server (based off the bulky dev image having its tests pass), aren’t you effectively shipping an untested release to production?
Tests are run in MIX_ENV=test, while the release is built in MIX_ENV=prod, so you’re essentially never testing the production build to begin with. There will always be some differences between what you ship to production and what you run tests against (except maybe for external integration tests).
As @LostKobrakai notes, your Elixir test suite is already a different compilation artifact from the production beam files.
This isn’t actually a huge concern, though, for either MIX_ENV=prod mix phx.server or releases. The only thing the test suite can validate is your application logic. It can’t validate that your production config is set up such that your container can actually start. This is what liveness and readiness checks are for. Your deployment process should be such that no traffic is routed to the new container until it passes both liveness (your application is running) and readiness (it has finished the boot process and can handle load) checks anyway. This is the final kind of validation, and it works perfectly well regardless of how you start your app.
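For example, a deploy script can gate the traffic cutover on the readiness route — a sketch where the port and /health path are illustrative:

```shell
# Block until the new container's health endpoint returns 2xx, then cut over.
until curl --silent --fail http://localhost:4000/health > /dev/null; do
  sleep 1
done
echo "container ready; safe to route traffic"
```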
Yeah, that makes sense. Fire off a health check to a route to ensure it reports a 200 (the app booted and is ready). That’s something I already do in other tech stacks, but Elixir is slightly different in the sense that with a Flask or Rails app, the exact image I built and pushed to Docker Hub in CI is what ends up running in production, which is also the exact image I was running locally.
Ehhhh, not as much as you might think. Rails, for example, metaprograms so much based on configuration that you aren’t really running the same thing. A classic example would be Active Record, where whole methods are metaprogrammed in by reflecting on the database tables. You’re running the same code, sure, but when it comes to the actual bytecode being run it’s all over the map.
If “use the same image” is a hard requirement, by the way, there’s nothing stopping you from doing mix release in the same image you do mix test in. All this really gives you, though, in any of these languages, is the guarantee that you’re using the same code, but that’s something git already gives you; you don’t need Docker for it.
The release-with-two-images approach has the same code-oriented guarantees. You use git to make sure you’re running the same code, but that code, just like Flask / Rails code, is parameterized. You insert one set of parameters and run tests; you insert another set of parameters, get a new artifact, and run it in production. It just so happens that in Rails a different ratio of that parameterization happens at runtime vs compile time.
I think the main point I’m trying to make is that mix test and rspec are not tools for testing production artifacts, and the fact that your production artifact lives in the same Docker image where you ran mix test doesn’t change that. Validation in production looks more like staging environments, monitoring, and status checks. For example, we run identical images in both staging and production for precisely that purpose.
I worked on a Java system where the readiness checks were the integration test suite. Once integration tests had passed it was safe to add to the load balancer. This strategy also had the benefit of warming up the JVMs. That might be overkill for a lot of people but these ideas don’t need to be mutually exclusive.
You get a smaller size, which is nice not only for transport but also for shrinking the attack surface. It’s the same reason container images leave out most of the base OS. Why bring in more than you need?
The “multiple releases” section is referring to building a release with varying components. You might have a project with apps that are only necessary in particular environments. In order to have those included and started, you build a release that contains them (maybe you have separate database apps, or separate releases for a server and a client).
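In mix.exs terms, that can look like the following sketch — the release names and app names are made up:

```elixir
# mix.exs at the umbrella root. Each release picks which OTP apps it
# includes and the start mode for each of them.
def project do
  [
    apps_path: "apps",
    releases: [
      server: [
        applications: [app_core: :permanent, app_web: :permanent]
      ],
      client: [
        applications: [app_core: :permanent, app_client: :permanent]
      ]
    ]
  ]
end
```

Then `mix release server` and `mix release client` produce two different artifacts from the same codebase.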
Which also brings up the issue of a consistent startup process. You don’t necessarily have a single application you can start that will start all the other applications required. Now you must write your own startup code that ensures all top-level applications start.
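A hand-rolled version of that startup code might look like this sketch, using :crypto and :logger as stand-ins for your own top-level applications:

```elixir
# Each call also starts the app's transitive dependencies, but you still
# have to know and order the top-level list yourself; a release derives
# that list (and the boot order) for you.
top_level_apps = [:crypto, :logger]

for app <- top_level_apps do
  {:ok, _started} = Application.ensure_all_started(app)
end
```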
Why do this and include libraries you don’t need when you can just build a release? Plus it means you aren’t tied to a container.
-mode embedded just disables dynamic code loading; it isn’t preloading anything. For this reason, I would actually expect simply setting embedded mode to fail at some point.
Releases allow you to dynamically configure the kernel, stdlib, and elixir applications. This is useful for configuring distribution, Erlang’s built-in logger, and other services. To do this using the flags above, you would have to implement this logic in the shell or in other scripts and convert it to command-line flags/env vars when starting the VM (if at all possible).
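With a release, for instance, the distribution setup can live in the generated rel/env.sh.eex, which the start script sources before boot — a sketch where the node name is illustrative:

```shell
# rel/env.sh.eex — sourced by the release start script before the VM boots.
export RELEASE_DISTRIBUTION=name
export RELEASE_NODE="my_app@$(hostname -f)"
```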
Removing source code and other artefacts also reduces the size for deployment. Also note the bytecode can be encrypted if you don’t want folks to decompile it.
I agree this one is pretty much the same. With releases, you can also change the mode in which applications are started, but I think this would be used rarely in practice.
The management scripts provide more, such as running as a daemon or installing as a Windows service. It can be done with Mix, but those are quite annoying to set up. Similarly to the above, though, I don’t think they will be used frequently.
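For reference, the generated release script exposes these directly — release name illustrative:

```shell
bin/my_app daemon      # start detached in the background (via run_erl)
bin/my_app daemon_iex  # same, but attachable later with to_erl
bin/my_app install     # on Windows: install as a service
```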
But the BEAM will need the key used to encrypt the bytecode in order to execute it, so you will need to ship that key to wherever you deploy; thus it only protects the bytecode in transit to the deployment target. Or am I missing something?
You can ship the release to a customer without providing them the source code, but still get to debug with the source if you log onto their machine and input the key. The original use case of Erlang was telephony switches, where this probably makes sense.