Usefulness of a library to manage building and deploying to remotes?

I’ve been working on a lib to facilitate building and deploying releases. The idea is that it will evolve into something also able to do/support devops work, but the first phase is just to build releases (normally or using a docker flow) and deploy them to one or many target hosts. It’s similar in its interface to edeliver, but uses Elixir (shelling out to commands locally) and Erlang’s ssh module to connect to the remote. It’s working fine for this first step, and I would like to know whether you think this would be useful and what kinds of things you would want to see in it.

It needs to be initialised before being used, with:
mix deployer init

Then the configuration has to be filled in:

# Use this file to config the options for the Deployer lib.

import Config

config :targets, [
  prod: [
    # host to connect to by ssh
    host: "host_address",
    # user to connect under
    user: "user_for_the_host",
    # name of the app to be released
    name: "your_app_release_name",
    # path where deployer places its configuration and releases on the target host
    path: "~/some/path/on/the/host",
    # absolute or relative path to the key to use for ssh
    ssh_key: "~/.ssh/your_key",
    # you can apply tags to each release - when you build a release the `:latest` tag is always added to it and removed from any other release with the same name in the store
    tags: ["some_tag", "another_one"],
    # after deploy, the release package prepared during build is untarred on the server; you can execute additional steps after deploying. It can be omitted or set to nil
    after_mfa: nil # {Module, :function_name, ["arg1", 2, :three]}
  ]
  # ,
  # another_target: [ .... ]
]

# config :groups, [
  # name_of_group: [:prod, :another_target]
  # can_be_many: [:one_host, :another_one]
# ]

config :builders, [
  prod: {Deployer.Builder.Docker, :build, []}
]

Then one can run it like edeliver:

mix deployer build_and_deploy target=prod
# this builds a release according to the project definition - in this case in a docker
# container with a specified Dockerfile (by default the one at the root of the project) -
# extracts a tar of the release, copies it to the local store, then connects to the host,
# uploads the tar, unpacks it and symlinks it. The same in individual steps would be:
mix deployer build target=prod
mix deployer deploy target=prod
# as well as
mix deployer manage
mix deployer manage.remote target=prod

The manage tasks will be the basis for the “devops” things.

I intend to make each step configurable so flows can be built on top of it, and to expose some helpers & modules so users can use them in their own flows (like the SSH server and file uploads), with the build & deploy parts swappable by any MFA to which the “build”/“deploy” context is passed, before & after hooks, etc.
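
To give a rough picture of what I mean by swappable (only the docker builder exists today; MyApp.CITrigger below is made up), the idea is that any target could point its build step at any MFA:

config :builders, [
  # the built-in docker flow
  prod: {Deployer.Builder.Docker, :build, []},
  # a made-up custom step: "building" for staging just triggers a CI pipeline
  staging: {MyApp.CITrigger, :run, ["staging"]}
]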

Would this be useful to make as an open source lib? Any suggestions as to what it should be able to do?

2 Likes

Can you make a compelling argument for why your approach is better than, or at least meaningfully different from, either the general-purpose Ansible (for example, HashNuke has published a Phoenix deployment project built upon it) or the Elixir-specific edeliver, which is also centered around releases?

I’m philosophically opposed to rewriting existing solutions in terms of Mix tasks unless they can also be improved upon along the way. That’s because from my POV, Elixir itself is not unusually well- or poorly-suited for system administration tasks, but there’s lots of prior art in other languages that are in relatively widespread use. Idempotence and cross-OS compatibility are high-effort to get right in greenfield work.

1 Like

Thanks @shanesveller

Not sure I can, but I’ll try - before that, just a disclaimer, I’m building it for my own use (and as a learning experiment) but wanted to see if others would find it useful - there’s also a whole lot that isn’t yet implemented.

I’ve used edeliver with Distillery and found it great at doing what it is supposed to do. It sometimes fails in cryptic ways - although the issue is usually not with edeliver itself but with something else (permissions, git, etc.). I haven’t found clear ways to extend it or tweak its behaviour, but that might just be my ineptitude. I wanted something written in Elixir for legibility and with more of a “plugin” interface, where for instance you could define a target that does one type of build, with some hooks, and one type of deployment, while another target builds differently and also deploys differently. I also wanted it to build locally through docker as the default, but to support other plugins (perhaps a deploy for a test target is a push of the current code to a CI, or who knows what). I also wanted an interface that lets you clean up the remotes and do that housekeeping from the command line.

For instance, the build_and_deploy task is just this:

defmodule Mix.Tasks.Deployer.BuildAndDeploy do
  use Mix.Task

  alias Deployer.Env

  alias Deployer.Helpers, as: DH
  alias DH.ANSI, as: AH

  @required_args [:target]

  @shortdoc "Builds & Deploys a Release"
  def run(args \\ []) do
    try do
      with(
        {_, %Env{} = ctx} <- {:load_ctx, DH.load_config(args)},
        {_, :ok} <- {:enforce_args, DH.enforce_args(ctx, @required_args)},
        {_, {:ok, n_ctx}} <- {:run_builder, Mix.Task.run(:"deployer.builder", ctx)},
        {_, {:ok, n_ctx_2}} <- {:run_deployer, Mix.Task.run(:"deployer.deploy", n_ctx)}
      ) do
        AH.success("Finished Build and Deploy")
        {:ok, n_ctx_2}
      else
        error ->
          AH.error("Deployer Error: #{inspect error}")
      end
    after
      DH.DETS.close()
    end
  end
end

And the builder task is:

defmodule Mix.Tasks.Deployer.Builder do
  use Mix.Task

  alias Deployer.Release, as: Rel
  alias Deployer.Env
  
  alias Deployer.Helpers, as: DH
  alias DH.ANSI, as: AH

  @type valid_build :: {%Env{}, %Rel{}} | :halt | {:halt, atom, atom, list}

  @required_args [:target]

  def run(args \\ []) do
    try do
      with(
        {_, %Env{} = ctx} <- {:load_ctx, DH.load_config(args)},
        {_, :ok} <- {:enforce_args, DH.enforce_args(ctx, @required_args)},
        {_, {:ok, builder}} <- {:check_if_has_builder, DH.decide_builder(ctx)},
        {_, {%Env{} = ctx_2, %Rel{path: path} = res}} <- {:run_build, run_build(builder, ctx)},
        {_, :ok} <- {:ensure_gzip_exists, ensure_gzipped_file_exists(path)}
      ) do
        DH.DETS.add_release(res, ctx_2)
        AH.success("Builder completed")
        {:ok, ctx_2}
      else
        return_value -> check_return_value(return_value)
      end
    after
      DH.DETS.close()
    end
  end

  @spec check_return_value({:run_build, valid_build} | {:run_build, any()}) :: {:ok, %Env{}} | any()
  defp check_return_value(value) do
    case value do
      {:run_build, :halt} ->
        AH.success("Builder completed")
        :ok
        
      {:run_build, {:halt, m, f, a}} ->
        case apply(m, f, a) do
          {%Env{} = ctx_2, %Deployer.Release{path: path} = res} ->
            case ensure_gzipped_file_exists(path) do
              :ok ->
                DH.DETS.add_release(res, ctx_2)
                AH.success("Builder completed")
                {:ok, ctx_2}
              error ->
                AH.error("Couldn't find a finished file on path: #{inspect path}")
                error
            end
          {:ok, %Env{} = ctx_2} ->
            AH.success("Builder completed")
            {:ok, ctx_2}
          error -> {:error, error}
        end
      error ->
        AH.error(error)
        {:error, error}
    end
  end

  @spec run_build({atom, atom, list()}, %Env{}) :: valid_build | any()
  defp run_build({m, f, a}, ctx), do: apply(m, f, [ctx | a])

  @spec ensure_gzipped_file_exists(String.t) :: :ok | {:file_doesnt_exist, String.t}
  defp ensure_gzipped_file_exists(path) do
    case File.exists?(path) do
      true -> :ok
      _ -> {:file_doesnt_exist, path}
    end
  end
end

So it works similarly to a pipeline and you can build your own. Basically that would be the main difference (it’s still unpolished code…).
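
For example, a user-defined build step could look something like this (hypothetical module, not part of the lib; it just follows the contract the builder task above expects - receive the %Deployer.Env{} context plus any extra args from the configured MFA, and return {%Deployer.Env{}, %Deployer.Release{}} with the path to a tarred release):

defmodule MyApp.LocalBuilder do
  # hypothetical custom builder that skips docker and builds on the local machine
  alias Deployer.Env
  alias Deployer.Release

  def build(%Env{} = ctx) do
    # build the release locally
    {_, 0} = System.cmd("mix", ["release", "--overwrite"], env: [{"MIX_ENV", "prod"}])

    # tar it up so the deploy step has a single file to upload (paths are illustrative)
    tar = Path.expand("my_app.tar.gz")
    {_, 0} = System.cmd("tar", ["-czf", tar, "-C", "_build/prod/rel/my_app", "."])

    {ctx, %Release{path: tar}}
  end
end

# and in the config:
# config :builders, [prod: {MyApp.LocalBuilder, :build, []}]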

Regarding Ansible, I haven’t used it, only Chef (in a mode similar to Ansible, where you deploy the Chef SDK and the recipes/templates and then run the recipes). It does that pretty well, so the only reason is to have a uniform way to do it, in the same language, and to leverage some tools that are part of Erlang, like the ssh module (in my tests, for an upload taking around 40 seconds in total, I found it to be 3 to 6 seconds faster than scp when streaming the file and using the ssh write functions directly instead of write_file). And because you’ll be deploying releases, it would be easier in the future to add additional capabilities to it if it’s written in Elixir - even build some sort of “app” that can run on the remote as a control point.
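
To show what I mean by streaming with the ssh write functions, here’s a rough standalone sketch using Erlang’s :ssh and :ssh_sftp directly (host, user and paths are placeholders - this isn’t the lib’s actual interface):

_ = :ssh.start()

# connect and open an sftp channel on the remote
{:ok, conn} = :ssh.connect(String.to_charlist("host_address"), 22,
  user: String.to_charlist("deploy"),
  user_dir: String.to_charlist(Path.expand("~/.ssh")),
  silently_accept_hosts: true
)
{:ok, chan} = :ssh_sftp.start_channel(conn)

# open the remote file and stream the local tar in 1MB chunks,
# instead of reading it all into memory and calling :ssh_sftp.write_file/3
{:ok, handle} = :ssh_sftp.open(chan, String.to_charlist("releases/release.tar.gz"), [:write, :binary])

"local/store/release.tar.gz"
|> File.stream!([], 1_048_576)
|> Enum.each(fn chunk -> :ok = :ssh_sftp.write(chan, handle, chunk) end)

:ok = :ssh_sftp.close(chan, handle)
:ok = :ssh_sftp.stop_channel(chan)
:ok = :ssh.close(conn)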

But honestly - I’m not aiming to do all the functionality of something like Ansible or Chef right now at all. Just the basic stuff - replace systemd files, load environments, etc. - so there is a solution that can do that and be extendable. I also understand that the common “standard” is to build docker containers, push to a CI and then spin up an instance with that - but if it’s pluggable there’s no reason why it couldn’t do those things, or launch a docker compose test at the end of the build before deploying, etc.
Just some ramblings - does any of it sound like good reasons?

I’ve had consistent trouble with edeliver over the course of the years – the same cryptic failures that you mention – and would be curious to know if you have released your tool?

Hey,

Not really yet, as there are still plenty of things to implement.

What is working/done:

  • Tasks to:

    • Init the deployer config folder - creates a DETS file to store the local releases, a folder for the tars, and a config file at the root of the mix project
    • Build and extract the release (right now the only builder available is the docker builder)
    • Deploy the tar onto a server
    • Build & deploy in one go
    • Manage the “release store”, both locally and on the remote (allows cleaning releases by tag, either locally or remotely)
  • Builders

    • Docker builder <- the only one currently available; it accepts a Dockerfile (defaults to the Dockerfile at the root of the mix project folder), builds the image, extracts the release from inside the container and tars it
  • Miscellaneous

    • Parsing of the arguments from the command line, and allowing tasks to be called successively, inheriting the settings from the original task. It creates a “context” schema with a number of fields - info, config, and the “env” field holding the command-line arguments - that can be accessed/changed throughout the tasks’ run
    • SSH interface - uses the Erlang ssh utilities to connect to the remote host. It’s a GenServer and can be used to request different SSH connections; it supports executing remote commands on the host and sending files

Things that are still missing that I would like to have done before making it publicly available:

  • Allow building releases regularly (i.e. without docker - this is easy)
  • Allow building releases remotely (this is harder)
  • Deciding on the symlink strategy - right now, once it deploys correctly, it automatically symlinks the /remote/namespace/current folder to the folder that was just untarred. Not sure this is good behaviour, or if there should be a separate command (“activate” or something) that actually does the symlinking and then stops the running node so the new one can come up…
  • Making sure the config and subsequent schemas for targets, groups, and builders are ok and extendable, same for the context that gets passed around when you chain tasks
  • A way to define a set of scripts that get placed on the remote once you init the remote “store” there, so that you can then run them?
  • Parse the PEM/ssh keys in memory - I haven’t yet been able to decode Erlang’s docs on how to do it - so basically I copy whatever file you give as the path into your local “deployer” folder, rename it to “id_rsa”, give that path to Erlang’s ssh, and remove that temporary folder after the connection is made. This works, but it’s so hacky I don’t even know what to say about my skills with ssh (there’s a rough sketch of the decoding direction after this list)
  • Allowing to start/control the released app - I use systemd, so basically I just deploy, have the systemd service pointing to the symlink and that’s it: I stop the service and the new one comes live. But the lib should allow some basic functionality if you’re not using that, because on a successful deploy it changes the symlink to the new folder that was just untarred, so it needs some way of knowing which version was running previously so you can shut it down from the command line easily. Maybe this warrants two symlinks, one “current” (or “last”) and one “running” pointing to the release that is running; the future-not-yet-written “activate” command would then parse the symlink info from “current”, parse the info for “running”, switch the “running” symlink, and with the previously parsed info stop the old release and start the new one…
  • Implement hooks…
  • Versioning - right now it uses the “app_name” you set in the config and an md5 hash calculated after tarring the release with everything (plus the “latest” tag added to it), but it would probably be decent to also support the mix project details or the release ones…
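
On the PEM point above, I think the decoding part itself would be something like this with :public_key (assuming an unencrypted key) - the decoded key would then still need to be handed to :ssh through a key_cb module implementing the :ssh_client_key_api behaviour, which is the part I haven’t figured out:

# read and decode the PEM in memory, instead of copying the key file around
pem = File.read!(Path.expand("~/.ssh/some.pem"))
# pem_decode/1 gives a list of PEM entries; a key file normally has a single entry
[entry | _] = :public_key.pem_decode(pem)
# pem_entry_decode/1 turns it into the decoded key record (e.g. an :RSAPrivateKey)
key = :public_key.pem_entry_decode(entry)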

Things I would like to add at some point but don’t consider crucial:

  • Allow multiple deployments simultaneously (a bit hard - the ssh interface can be started for as many different hosts as wanted, concurrently, but there are a lot of things I need to consider in terms of how it actually flows once you do mix deployer deploy group=some_group: should it halt if some fail, should it clean up what has been uploaded, etc.)
  • Some form of server provisioning - since this is quite a lot to do, I’ll probably just postpone it and instead focus on having a decent way to use scripts or whatever
  • Polishing the interface for “store” management - right now if you want to delete a specific release you need to type/copy the md5 hash that is generated; it would be great to have simple aliasing by letters/numbers (although pruning by name works fine)

And all the remaining thousand small papercut things of course, plus the things I’m forgetting and/or not seeing right now…

# Use this file to config the options for the Deployer lib.

import Config

config :targets, [
  prod: [
    host: "35.x.x.x",
    user: "deploy",
    name: "name_for_namespace",
    path: "path/on_remote",
    ssh_key: "~/.ssh/some.pem",
    tags: ["some_name"]
  ],
  aggregator: [
    host: "35.x.x.x",
    user: "deploy",
    name: "aggregator",
    path: "aggregator",
    ssh_key: "~/.ssh/some.pem",
    tags: ["other name"]
  ]
]

config :builders, [
  prod: {Deployer.Builder.Docker, :build, [%{dockerfile: "Dockerfile"}]},
  aggregator: {Deployer.Builder.Docker, :build, [%{dockerfile: "Dockerfile_aggregator"}]}
]

This works and I’ve been using it for my own small deployments, and it works quite ok within this limited functionality.

mix deployer init

mix deployer build target=prod
mix deployer deploy target=prod

And, having written all this - I’m not sure the “tasks” approach is the best way to go about it.

The thing is, I’ve mostly been working on it as I need it. If you are interested I can try to start polishing it a bit and perhaps put it online to see if I can get some feedback, but it’s quite far from “finished”… What kind of deployment would you be doing?

Perhaps I should just start looking into moving everything into containers and kubernetes.

1 Like

Sounds pretty cool! I ended up finding a library called Bootleg (https://github.com/labzero/bootleg) and was surprised at how straightforward it was to figure out and get working, compared to edeliver, which I found to be a frequent problem and always very difficult to debug when it wasn’t working.