Usefulness of a library to manage build and deploying to remotes?

I’ve been working on a lib to facilitate building and deploying releases. The idea is that it will evolve into something also able to do/support devops work, but the first phase is just to build releases (normally or through a docker flow) and deploy them to one or many target hosts. Its interface is similar to edeliver’s, but it uses Elixir (shelling out to commands locally) and Erlang’s ssh module to connect to the remote. It’s working fine for this first step, and I’d like to know whether you think this would be useful and what kinds of things you would want to see in it?

It needs to be initialized before use with
mix deployer init

Then the configuration has to be filled in:

# Use this file to configure the options for the Deployer lib.

import Config

config :targets, [
  prod: [
    # host to connect to by ssh
    host: "host_address",
    # user to connect under
    user: "user_for_the_host",
    # name of the app release to be built
    name: "your_app_release_name",
    # path where deployer places its configuration and releases on the target host
    path: "~/some/path/on/the/host",
    # absolute or relative path to the key to use for ssh
    ssh_key: "~/.ssh/your_key",
    # you can apply tags to each release - when you build a release the `:latest` tag is always added to it and removed from any other release with the same name in the store
    tags: ["some_tag", "another_one"],
    # after deploy, the release tarball prepared during build is untarred on the server; you can execute additional steps after deploying. It can be omitted or set to nil
    after_mfa: nil # {Module, :function_name, ["arg1", 2, :three]}
  ]
  # ,
  # another_target: [ .... ]
]

# config :groups, [
  # name_of_group: [:prod, :another_target]
  # can_be_many: [:one_host, :another_one]
# ]

config :builders, [
  prod: {Deployer.Builder.Docker, :build, []}
]
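As a sketch of what an `after_mfa` hook could look like. The exact context Deployer passes to the hook is an assumption here (the builder MFAs are invoked with the `%Env{}` context prepended to the configured args, so presumably the deploy hook works the same way); the module name and the systemctl command are placeholders:

```elixir
# Hypothetical after-deploy hook module, configured as
# after_mfa: {MyApp.DeployHooks, :after_deploy, ["my_app"]}
defmodule MyApp.DeployHooks do
  # Builds the shell command that would restart the release on the host.
  # Kept pure so it is easy to test; actually running it over SSH would
  # use whatever helper the lib exposes.
  def restart_command(service_name) do
    "sudo systemctl restart #{service_name}"
  end

  # Assumed calling convention: the deploy context is prepended to the
  # configured args, as with the builder MFAs.
  def after_deploy(ctx, service_name) do
    cmd = restart_command(service_name)
    IO.puts("would run against #{inspect(ctx)}: #{cmd}")
    :ok
  end
end
```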

Then, as with edeliver, one can run

mix deployer build_and_deploy target=prod
# this builds a release according to the project definition, in this case inside
# a docker container with a specified Dockerfile (by default the one at the root
# of the project), extracts a tar of the release, copies it to the local store,
# then connects to the host, uploads the tar, unpacks it, and symlinks it.
# The same in individual steps would be:
mix deployer build target=prod
mix deployer deploy target=prod
# as well as
mix deployer manage
mix deployer manage.remote target=prod

Which will be the basis for the “devops” things.

I intend to make each step configurable so flows can be built on top of it, and to expose some helpers & modules (like the ssh server and file upload) so users can reuse them in their own flows, with the build & deploy parts being swappable for any MFA that receives the “build”/“deploy” context, before & after hooks, etc.
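A minimal sketch of what a swappable builder could look like, assuming the `apply(m, f, [ctx | args])` calling convention used by the builder task further down, with plain maps standing in for the real `%Env{}`/`%Release{}` structs; the module name and the `:skip` option are made up for illustration:

```elixir
# Hypothetical pluggable builder, configured as
# config :builders, prod: {MyApp.TarballBuilder, :build, []}
defmodule MyApp.TarballBuilder do
  # Invoked with the build context prepended to the configured args.
  # Returns {ctx, release} on success, or :halt to stop the pipeline.
  def build(ctx, opts \\ []) do
    if Keyword.get(opts, :skip, false) do
      :halt
    else
      # a real builder would shell out to `mix release` or docker here;
      # this sketch just returns the shape the pipeline expects
      {ctx, %{path: "_build/prod/my_app.tar.gz"}}
    end
  end
end
```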

Would this be useful to make as an open source lib? Any suggestions as to what it should be able to do?

Can you make a compelling argument for why your approach is better than, or at least meaningfully different from, either the general-purpose Ansible (for example, HashNuke has published a Phoenix deployment project built upon it) or the Elixir-specific edeliver, which is also centered around releases?

I’m philosophically opposed to rewriting existing solutions in terms of Mix tasks unless they can also be improved upon along the way. That’s because from my POV, Elixir itself is not unusually well- or poorly-suited for system administration tasks, and there’s lots of prior art in other languages that is in relatively widespread use. Idempotence and cross-OS compatibility are high-effort to get right in greenfield work.


Thanks @shanesveller

Not sure I can, but I’ll try - before that, just a disclaimer, I’m building it for my own use (and as a learning experiment) but wanted to see if others would find it useful - there’s also a whole lot that isn’t yet implemented.

I’ve used edeliver with Distillery and found it great at doing what it’s supposed to do. It sometimes fails in cryptic ways, although the issue is usually not with edeliver itself but with something else (permissions, git, etc). I haven’t found clear ways to extend it or tweak its behaviour, but that might just be my ineptitude.

I wanted something written in Elixir for legibility, with more of a “plugin” interface, where for instance one target does one type of build, with some hooks, and one type of deployment, while another target builds and deploys differently. I also wanted it to build locally through docker by default while supporting other plugins (perhaps a deploy for a test target is a push of the current build to a CI, or who knows what). And I wanted an interface that allows cleaning up the remotes and doing that kind of housekeeping from the command line.

For instance, the build_and_deploy task is just this:

defmodule Mix.Tasks.Deployer.BuildAndDeploy do
  use Mix.Task

  alias Deployer.Env

  alias Deployer.Helpers, as: DH
  alias DH.ANSI, as: AH

  @required_args [:target]

  @shortdoc "Builds & Deploys a Release"
  def run(args \\ []) do
    try do
      with(
        {_, %Env{} = ctx} <- {:load_ctx, DH.load_config(args)},
        {_, :ok} <- {:enforce_args, DH.enforce_args(ctx, @required_args)},
        {_, {:ok, n_ctx}} <- {:run_builder, Mix.Task.run(:"deployer.builder", ctx)},
        {_, {:ok, n_ctx_2}} <- {:run_deployer, Mix.Task.run(:"deployer.deploy", n_ctx)}
      ) do
        AH.success("Finished Build and Deploy")
        {:ok, n_ctx_2}
      else
        error ->
          AH.error("Deployer Error: #{inspect error}")
      end
    after
      DH.DETS.close()
    end
  end
end

And the builder task is:

defmodule Mix.Tasks.Deployer.Builder do
  use Mix.Task

  alias Deployer.Release, as: Rel
  alias Deployer.Env
  
  alias Deployer.Helpers, as: DH
  alias DH.ANSI, as: AH

  @type valid_build :: {%Env{}, %Rel{}} | :halt | {:halt, atom, atom, list}

  @required_args [:target]

  def run(args \\ []) do
    try do
      with(
        {_, %Env{} = ctx} <- {:load_ctx, DH.load_config(args)},
        {_, :ok} <- {:enforce_args, DH.enforce_args(ctx, @required_args)},
        {_, {:ok, builder}} <- {:check_if_has_builder, DH.decide_builder(ctx)},
        {_, {%Env{} = ctx_2, %Rel{path: path} = res}} <- {:run_build, run_build(builder, ctx)},
        {_, :ok} <- {:ensure_gzip_exists, ensure_gzipped_file_exists(path)}
      ) do
        DH.DETS.add_release(res, ctx_2)
        AH.success("Builder completed")
        {:ok, ctx_2}
      else
        return_value -> check_return_value(return_value)
      end
    after
      DH.DETS.close()
    end
  end

  @spec check_return_value({:run_build, valid_build} | {:run_build, any()}) :: {:ok, %Env{}} | any()
  defp check_return_value(value) do
    case value do
      {:run_build, :halt} ->
        AH.success("Builder completed")
        :ok
        
      {:run_build, {:halt, m, f, a}} ->
        case apply(m, f, a) do
          {%Env{} = ctx_2, %Deployer.Release{path: path} = res} ->
            case ensure_gzipped_file_exists(path) do
              :ok ->
                DH.DETS.add_release(res, ctx_2)
                AH.success("Builder completed")
                {:ok, ctx_2}
              error ->
                AH.error("Couldn't find a finished file on path: #{inspect path}")
                error
            end
          {:ok, %Env{} = ctx_2} ->
            AH.success("Builder completed")
            {:ok, ctx_2}
          error -> {:error, error}
        end
      error ->
        AH.error(error)
        {:error, error}
    end
  end

  @spec run_build({atom, atom, list()}, %Env{}) :: valid_build | any()
  defp run_build({m, f, a}, ctx), do: apply(m, f, [ctx | a])

  @spec ensure_gzipped_file_exists(String.t) :: :ok | {:file_doesnt_exist, String.t}
  defp ensure_gzipped_file_exists(path) do
    case File.exists?(path) do
      true -> :ok
      _ -> {:file_doesnt_exist, path}
    end
  end
end

So it works like a pipeline and you can build your own. Basically that would be the main difference (and that’s still unpolished code…)
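The tagged `with` pattern in the tasks above is what makes the failing step identifiable: each step’s result is wrapped in a `{:step_name, result}` tuple, so the `else` clause receives the tag of whichever step failed. A standalone illustration of just that idiom (the module and step names here are invented for the example):

```elixir
# Minimal demo of the tagged `with` pipeline style: on failure, the
# else clause sees both the step's tag and the error it produced.
defmodule PipelineDemo do
  def run(input) do
    with(
      {_, {:ok, a}} <- {:parse, parse(input)},
      {_, {:ok, b}} <- {:double, double(a)}
    ) do
      {:ok, b}
    else
      {step, error} -> {:error, step, error}
    end
  end

  defp parse(s) do
    case Integer.parse(s) do
      {n, ""} -> {:ok, n}
      _ -> {:error, :not_an_integer}
    end
  end

  defp double(n), do: {:ok, n * 2}
end
```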

Regarding Ansible, I haven’t used it, only Chef (in the Ansible-lookalike mode, where you deploy the Chef SDK plus the recipes/templates and then run the recipes). It does that pretty well, so the main reason is to have a uniform way of doing it, in the same language, and to leverage tools that ship with Erlang, like the ssh module (in my tests, on an upload taking around 40 seconds in total, it was 3 to 6 seconds faster than scp when streaming the file and calling the ssh write functions directly instead of write_file). And since you’ll be deploying releases, having it written in Elixir would make it easier to add further capabilities in the future, even to build some sort of “app” that can run on the remote as a control point.
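For reference, streaming an upload with Erlang’s built-in sftp support looks roughly like this. This is a sketch, not the lib’s actual code: the module name, chunk size, and all connection details are placeholders, and it is untested against a real host. The point is writing the file in chunks with `:ssh_sftp.write/3` instead of a single `:ssh_sftp.write_file/3` call:

```elixir
# Sketch of a chunked SFTP upload over Erlang's :ssh / :ssh_sftp modules.
defmodule UploadSketch do
  @chunk_bytes 65_536

  def upload(host, user, key_dir, local_path, remote_path) do
    {:ok, _} = Application.ensure_all_started(:ssh)

    {:ok, conn} =
      :ssh.connect(String.to_charlist(host), 22,
        user: String.to_charlist(user),
        user_dir: String.to_charlist(key_dir),
        silently_accept_hosts: true
      )

    {:ok, chan} = :ssh_sftp.start_channel(conn)
    {:ok, handle} = :ssh_sftp.open(chan, String.to_charlist(remote_path), [:write, :binary])

    # stream the local file in fixed-size binary chunks and write each
    # chunk through the already-open SFTP handle
    local_path
    |> File.stream!([], @chunk_bytes)
    |> Enum.each(fn bin -> :ok = :ssh_sftp.write(chan, handle, bin) end)

    :ok = :ssh_sftp.close(chan, handle)
    :ok = :ssh_sftp.stop_channel(chan)
    :ssh.close(conn)
  end
end
```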

But sincerely, I’m not aiming to replicate all the functionality of something like Ansible or Chef right now at all. Just the basic stuff (replacing systemd files, loading environments, etc), so as to have a solution that can do those things and be extendable. I also understand that the common “standard” is to build docker containers, push to a CI and then spin up an instance from that, but if it’s pluggable there’s no reason it couldn’t do those things too, or launch a docker compose test at the end of the build before deploying, etc.
Just some ramblings; does any of it sound like good reasons?