Preparing for Production

Hey all,

I’m reaching the point of deploying our application to production machines and I’m looking at how I inject secrets into my application.

1st: I see there’s the {:system} syntax, but its use is scattered.

What’s the best way to inject variables to a build?

2nd: I’m using Docker Swarm, so secrets are provided through files: /run/secrets/database_password

At the moment, I’m using a fork (https://github.com/Nebo15/confex/pull/6). Is there a better way?

Hopefully someone has deployed to Swarm before or has some idea on how best to handle this.

Thanks :smiley:

2 Likes

I don’t use Swarm, but your problem seems like a more general case where secrets must be provided when starting the app (as opposed to building the release), and they might need to come from various sources, not necessarily OS env. In my case, I used etcd and custom JSON files to fetch things, so the various OS-env-based tricks wouldn’t have worked for me anyway.

The way I deal with this (which doesn’t work for everything, but does for most things, including Phoenix and Ecto), is:

  1. In application start callback, prior to starting the top-level supervisor, I fetch secrets from wherever.
  2. I merge secrets into proper places in app env (e.g. repo or endpoint config).
  3. I start the supervision tree.
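A minimal sketch of those three steps, assuming the secret is a Docker-style file and the consumer is an Ecto repo (all module and file names here are illustrative, not from any particular library):

```elixir
defmodule MyApp.Application do
  use Application

  def start(_type, _args) do
    # 1. Fetch secrets before any child process starts. The file path is a
    #    stand-in; this could equally be etcd, Vault, a JSON file, etc.
    password =
      case File.read("/run/secrets/database_password") do
        {:ok, contents} -> String.trim(contents)
        {:error, _} -> nil
      end

    # 2. Merge them into the app env that the library will read.
    repo_config = Application.get_env(:my_app, MyApp.Repo, [])
    Application.put_env(:my_app, MyApp.Repo, Keyword.put(repo_config, :password, password))

    # 3. Only now start the supervision tree.
    children = [MyApp.Repo]
    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```

Because the env is written before `Supervisor.start_link/2` runs, every child sees the completed config when it boots.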
4 Likes

Thanks, @sasajuric. This seems like a good idea.

Would you just leave this blank, or not even defined?

config :app, database_password: "" 

Then in start, I could do File.ls |> Enum.each |> merge_config with a function for each parameter I expect?

Useful suggestion, thanks!

Would you just leave this blank, or not even defined?

I leave these settings undefined. If you fetch them at runtime only in prod, I suggest defining them statically in dev and test only.

Well, in start you fetch those parameters from wherever they are, and then merge them, yes. AFAIK there’s no “merge env” function, so I do it by reading the top-level setting (say MyApp.Repo), adding additional info (e.g. password) into it, and putting it back with Application.put_env.

Awesome. Thank you very much for your help, it’s appreciated muchly :thumbsup:

Hi @sasajuric, do you have any examples of doing this? I’ve been trying for a couple of hours now.

get_all_env(app) returns a list of tuples whose values are in turn keyword lists, and I have no idea how to “merge” them.

get_env(app, key) also returns a list, which stumps me too.

A simple approach:

def merge(a, b), do: Keyword.merge(a, b, &resolve_conflict/3)

defp resolve_conflict(_key, v1, v2) do
  cond do
    Keyword.keyword?(v1) && Keyword.keyword?(v2) ->
      Keyword.merge(v1, v2, &resolve_conflict/3)

    :else ->
      v2
  end
end

This will recursively merge two keyword lists, taking the newer value when a conflict occurs, except when both values are keyword lists, in which case they are merged recursively. This is obviously a bit of an oversimplification, but improving it is an exercise left to the reader :wink:
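Wrapping that snippet in a module (the module name is arbitrary), the recursive behaviour looks like this on sample config:

```elixir
defmodule ConfigMerge do
  def merge(a, b), do: Keyword.merge(a, b, &resolve_conflict/3)

  defp resolve_conflict(_key, v1, v2) do
    if Keyword.keyword?(v1) and Keyword.keyword?(v2) do
      Keyword.merge(v1, v2, &resolve_conflict/3)
    else
      v2
    end
  end
end

defaults = [url: [host: "localhost", port: 4000], debug: true]
secrets = [url: [host: "example.com"], secret_key_base: "s3cr3t"]

ConfigMerge.merge(defaults, secrets)
# :host is overridden, :port and :debug survive, :secret_key_base is added.
```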

2 Likes

Let’s say I want to set a repo password:

repo_settings = Application.get_env(:my_app, MyApp.Repo)
modified_repo_settings = Keyword.merge(repo_settings, password: super_secret_password)
Application.put_env(:my_app, MyApp.Repo, modified_repo_settings)

If you need to do this for a couple of different top-level settings, it’s worth making a helper fun which could be used as:

update_app_env(:my_app, MyApp.Repo, &Keyword.merge(&1, password: super_secret_password))

The implementation is left as an exercise :slight_smile:
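For what it’s worth, one possible implementation of that helper (the name and shape are just the suggestion above, not a library function):

```elixir
defmodule AppEnv do
  # Read the current top-level value (defaulting to []), transform it,
  # and write it back into the app env.
  def update_app_env(app, key, update_fun) do
    updated = app |> Application.get_env(key, []) |> update_fun.()
    Application.put_env(app, key, updated)
  end
end

Application.put_env(:my_app, MyApp.Repo, database: "my_app_prod")
AppEnv.update_app_env(:my_app, MyApp.Repo, &Keyword.merge(&1, password: "s3cr3t"))
Application.get_env(:my_app, MyApp.Repo)
# => [database: "my_app_prod", password: "s3cr3t"]
```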

Thanks again for everyone’s help! I’m getting close.

My “last” problem, I hope, is that this loading happens after KafkaEx starts, which is now failing due to missing configuration.

So far I’ve got the following:

def start(_type, _args) do
  App.Config.Loader.load_configuration()
  # ... supervisor stuff
end
defmodule App.Config.Loader do
  require Logger

  def load_configuration() do
    with {:ok, files} <- File.ls("/run/secrets") do
      Enum.each(files, fn file_name ->
        value =
          "/run/secrets"
          |> Path.join(file_name)
          |> File.read!()
          |> String.trim()

        apply_configuration(load_configuration(file_name, value))
      end)
    end
  end

  # Merge keyword values into any existing config (shallow merge); otherwise overwrite.
  defp apply_configuration({app, key, value}) when is_list(value) do
    merged = app |> Application.get_env(key, []) |> Keyword.merge(value)
    Application.put_env(app, key, merged)
  end

  defp apply_configuration({app, key, value}), do: Application.put_env(app, key, value)
  defp apply_configuration(_other), do: :ok

  defp load_configuration("api_hostname", hostname) do
    {:app_api, App.Api.Endpoint, [url: [host: hostname]]}
  end

  defp load_configuration("secret_key_base", secret_key_base) do
    {:app_api, App.Api.Endpoint, [secret_key_base: secret_key_base]}
  end

  defp load_configuration("json_web_token_secret", json_web_token_secret) do
    {:guardian, Guardian, [secret_key: json_web_token_secret]}
  end

  defp load_configuration("elasticsearch_uri", uri) do
    {:tirexs, :uri, uri}
  end

  defp load_configuration("elasticsearch_index", index) do
    {:app_api, :elasticsearch_index, index}
  end

  defp load_configuration("kafka_hosts", host_string) do
    brokers =
      host_string
      |> String.split(",")
      |> Enum.map(fn uri ->
        [host, port] = String.split(uri, ":")
        {host, String.to_integer(port)}
      end)

    {:kafka_ex, :brokers, brokers}
  end

  defp load_configuration("youtube_api_key", youtube_api_key) do
    {:tubex, Tubex, [api_key: youtube_api_key]}
  end

  defp load_configuration(key, _value) do
    Logger.warn("External configuration (#{key}) provided, but not loaded")
  end
end

Is it normal for applications to be started before my “primary”?

Applications are started in dependency order, so anything you depend on will be up before yours. Libraries that are designed to start after yours should give you a supervisor to place in your own supervision tree, so you control when and how they start, just like Phoenix’s Endpoint works. :slight_smile:

But aren’t applications from my deps() handled automatically? I don’t specify anything.

OK, trying runtime: false in deps, and I’ll add it to the supervision tree manually …

If you do that it will not be included in releases.

The library itself really should already offer a way, I’d think… though I haven’t used that library.

Unfortunately, this is a quite nasty problem. Your runtime dependencies are started before your app, so the workaround I mentioned won’t work for an app which requires some config value during startup (in my defence, I did say it won’t work for everything :slight_smile:).

This is IMO a bad decision on behalf of library authors, but that conclusion doesn’t solve your problem.
The way I’d approach this myself: I’d likely create a bash script which fetches the secret from wherever, then either performs a search/replace in sys.config or sets an OS env var, and see if I can make that work with distillery (which I think can be done, but I’m not sure).
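A self-contained demo of the search/replace idea. The __DB_PASSWORD__ placeholder and all paths are assumptions; in a real deployment the secret would come from /run/secrets/ and the target would be the release’s actual sys.config, with the release exec’d at the end:

```shell
#!/bin/sh
set -eu

workdir="$(mktemp -d)"

# Stand-ins for the Docker secret file and the release's sys.config.
printf '%s' 's3cr3t' > "$workdir/database_password"
printf '%s\n' '[{my_app, [{password, <<"__DB_PASSWORD__">>}]}].' > "$workdir/sys.config"

# Pre-start hook: splice the secret into sys.config before booting the release.
secret="$(cat "$workdir/database_password")"
sed -i "s/__DB_PASSWORD__/${secret}/" "$workdir/sys.config"

cat "$workdir/sys.config"
# A real script would now start the release, e.g.: exec bin/my_app foreground
```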

2 Likes

This is IMO a bad decision on behalf of library authors, but that conclusion doesn’t solve your problem.

The bad decision being to require configuration to start? I’d like to understand how I can prevent this with my own libraries, and how I can perhaps contribute to KafkaEx and avoid this problem for others.

KafkaEx is a client that speaks to a Kafka server; if the server isn’t defined when it’s started, then what would it do until it is?

Can’t I tell Elixir/mix that I’ll start the application myself? This does feel like a problem Elixir should be coping with, rather than looking to library authors; but perhaps this is due to my misunderstanding of it all.

It should just not start then; it should supply a supervisor that you add to your own supervision tree with whatever options you want to give it.

It should not be an ‘application’ type library unless it is truly standalone, otherwise it should be a supervised type library.
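In other words, the library ships a supervisor and the host app decides when to start it. A sketch of that shape (MyLib is hypothetical, and the Agent is a stand-in for a real connection process):

```elixir
defmodule MyLib.Supervisor do
  use Supervisor

  # The host application calls this from its own supervision tree,
  # passing whatever runtime-resolved options it wants.
  def start_link(opts) do
    Supervisor.start_link(__MODULE__, opts, name: __MODULE__)
  end

  @impl true
  def init(opts) do
    brokers = Keyword.fetch!(opts, :brokers)

    children = [
      # Stand-in for a connection worker configured at start time.
      {Agent, fn -> brokers end}
    ]

    Supervisor.init(children, strategy: :one_for_one)
  end
end

# The host app starts it when *it* is ready, e.g. after loading secrets:
{:ok, _pid} = MyLib.Supervisor.start_link(brokers: [{"kafka1", 9092}])
```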

3 Likes

Ah, I understand. That makes perfect sense. Thank you, @OvermindDL1 :thumbsup:

Should anyone find this thread through similar problems, I found the following:

# OTP API
def start(_type, _args) do
  max_restarts = Application.get_env(:kafka_ex, :max_restarts, 10)
  max_seconds = Application.get_env(:kafka_ex, :max_seconds, 60)
  {:ok, pid} = KafkaEx.Supervisor.start_link(Config.server_impl, max_restarts, max_seconds)

  if Application.get_env(:kafka_ex, :disable_default_worker) == true do
    {:ok, pid}
  else
    case KafkaEx.create_worker(Config.default_worker, []) do
      {:error, reason} -> {:error, reason}
      {:ok, _} -> {:ok, pid}
    end
  end
end

I’m testing to see if disable_default_worker solves this :thumbsup:
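For later readers: going by the snippet above, the flag is plain app config, so something like the following should keep the default worker from starting at boot, after which you can create one yourself once the brokers are loaded (untested against current KafkaEx versions; `import Config` is the modern form — older projects used `use Mix.Config`):

```elixir
# config/prod.exs — keep KafkaEx's application from starting its default
# worker before our start callback has loaded the :brokers config
import Config

config :kafka_ex, disable_default_worker: true
```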

1 Like

What @OvermindDL1 said. A good example of this is Ecto. It requires app env (which I dislike), but it doesn’t connect to the repo when it starts. Instead, you insert repo in your own supervision tree where/when you want to. Therefore, the trick I mentioned initially will work with Ecto, but not with KafkaEx.

The decision to start the connection in the :kafka_ex application, as opposed to your application, is IMO a bad decision in general (i.e. regardless of the configuration issue). It would be better if the connection process lived in your supervision tree, so you have complete control of its lifecycle.

There is a thing called included applications. Try setting runtime: false, add the app to included_applications, and then start the app manually from your start callback.
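A mix.exs sketch of that route (the version requirement is a placeholder, and note the caveats about included applications mentioned below):

```elixir
# mix.exs (sketch)
def application do
  [
    mod: {MyApp.Application, []},
    # Loaded with the release but NOT auto-started; we then start it
    # ourselves from our start callback, as suggested above.
    included_applications: [:kafka_ex]
  ]
end

defp deps do
  [{:kafka_ex, "~> 0.6", runtime: false}]
end
```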

I remember reading some criticism of included apps (though I can’t remember what it was about), and my takeaway was that there are some non-obvious pitfalls there, so included apps are better avoided. I didn’t really research this a lot, so maybe I’m wrong (perhaps someone else can chime in).

A clean solution IMO would be if KafkaEx would give you the start_link function in some module, which would allow you to start the connection when you want to. Since that would happen after your app start is invoked, you’d be able to set the app env as I’ve explained.

2 Likes