I’m developing an application that’s going to be (for the foreseeable future) installed and managed in a pretty boring, standard way — not using bespoke containerization, clustering, or anything like that.
I’ve currently got this code to manage SECRET_KEY_BASE, and it seems to work, but it also seems like a really messy hack:
```elixir
# /config/runtime.exs
# …
if config_env() == :prod do
  # …
  secret_key_base =
    System.get_env("SECRET_KEY_BASE")
    || Foo.Application.Util.app_secret_key_auto()
  # …
end
```
```elixir
# /lib/foo/util.ex
defmodule Foo.Application.Util do
  # …
  def app_secret_key_auto do
    p = Path.expand("secret_key_base.txt", :filename.basedir(:user_config, "foo-app"))

    # 1. Generate the key if it doesn't exist
    :ok = File.mkdir_p(Path.expand("..", p))

    :ok =
      case File.open(p, [:write, :exclusive]) do # FIXME surely there must be some built-in tooling for this???
        {:error, :eexist} ->
          :ok # happy path

        {:ok, h} ->
          case File.chmod(p, 0o600) do
            :ok ->
              # https://github.com/phoenixframework/phoenix/blob/v1.7.17/lib/mix/tasks/phx.gen.secret.ex#L17
              data_ascii =
                (&(:crypto.strong_rand_bytes(&1) |> Base.encode64(padding: false) |> binary_part(0, &1))).(64)

              result = IO.write(h, data_ascii)
              :ok = File.close(h)
              result

            {:error, e} ->
              :ok = File.close(h)
              {:error, e}
          end

        {:error, e} ->
          {:error, e}
      end

    # 2. Load the key
    {:ok, data_ascii} = File.open(p, [:read], &Enum.fetch!(IO.stream(&1, :line), 0))
    data_ascii
  end
end
```
Is there any existing utility function or simple pattern that I should be using instead of this pile of spaghetti, or is Phoenix really just not meant to be used outside of cloud containers?
Hmm, generally I wouldn’t generate the SECRET_KEY_BASE if it doesn’t exist; I’d instead always pass it in via environment variables. Specifically, these are the situations I’ve dealt with before that didn’t involve Kubernetes/Docker:
- App deployed on Heroku – set an env variable in the UI
- App deployed on Render – set an env variable in the UI
- App deployed on my own server and managed with systemd – set `EnvironmentFile=/path/to/some/.env` file
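For the systemd case, such a unit might look like the sketch below. All paths and names here are made up for illustration; `EnvironmentFile` simply points at a root-readable file containing `SECRET_KEY_BASE=…` lines:

```ini
# /etc/systemd/system/foo.service  (hypothetical paths)
[Unit]
Description=Foo Phoenix app
After=network.target

[Service]
Type=exec
User=foo
WorkingDirectory=/opt/foo
EnvironmentFile=/opt/foo/.env
ExecStart=/opt/foo/bin/foo start
Restart=on-failure

[Install]
WantedBy=multi-user.target
```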
That way the only snippet you need is something like:
```elixir
# /config/runtime.exs
# …
if config_env() == :prod do
  # …
  secret_key_base =
    System.get_env("SECRET_KEY_BASE")
    || raise "SECRET_KEY_BASE is required and was not found"
  # …
end
```
- Release started manually – `env $(cat .env | xargs) ./bin/app remote`
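One way to produce a value for such a `.env` file from a shell, without needing Mix on the target machine, is sketched below. It mirrors what `mix phx.gen.secret` does (a 64-character random string); the filename is illustrative:

```shell
# Generate a 64-character secret and write it into a .env file.
SECRET=$(openssl rand -base64 96 | tr -d '\n' | cut -c1-64)
echo "SECRET_KEY_BASE=$SECRET" > .env
chmod 600 .env
```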
In the end, system env vars are the complete opposite of being catered to the cloud. You can provide them in a million ways, and there’s likely one fitting your workflow – and it’s completely independent of the fact that you’re running Elixir at all.
The snippet you posted is actually the stock behavior of the phx.new template; I edited it away from that because I don’t want my server to crash on startup just because the technician attempted a perfectly reasonable ZIP installation.
The app isn’t being deployed in any kind of fancy cloud service, either. Heck, right now it’s targeting Windows Server (though we hope to move over to Linux at some point).
I guess that I could bundle extra instructions with the application demanding that the installing technician fiddle with the registry to get basic functionality… but is there really not any established pattern for making Phoenix “just work” in the absence of a fancy managed environment? (And even on Linux, putting a systemd service that references an external file just passes the buck of generating that file…)
That task seems to be hard-coded to send the result to standard output. I actually linked to that line in the comment at the top of the bit of my code that does what that task would do.
What, specifically, would go wrong if I simply set :secret_key_base randomly on startup whenever the environment doesn’t specify it? Would everything work perfectly so long as the app isn’t being clustered/distributed/whatever, or would I be laying the seeds of—for example—a catastrophic Ecto failure next time the server restarts?
If the former, it seems like it would be pretty simple to raise if and only if it’s unspecified and the app is being run in a way that actually requires a non-arbitrary value for it… am I missing something?
Sessions are signed with the secret key, so all users’ session cookies will become invalid, effectively logging them out if they were logged in and dropping anything else you might have stored in the session. Similarly, CSRF tokens given out (e.g. as part of forms) will become invalid, so users who still have a form open won’t be able to submit it successfully. Phoenix.Token also relies on the secret, if you happen to use it.
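To make the Phoenix.Token point concrete, here is a minimal sketch (values invented). Phoenix.Token accepts a raw secret key base string as its signing context, so changing the key between sign and verify invalidates the token:

```elixir
old_key = String.duplicate("a", 64)
new_key = String.duplicate("b", 64)

token = Phoenix.Token.sign(old_key, "user salt", 42)

Phoenix.Token.verify(old_key, "user salt", token, max_age: 86_400)
#=> {:ok, 42}

Phoenix.Token.verify(new_key, "user salt", token, max_age: 86_400)
#=> {:error, :invalid}
```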
I see… so if I just randomize it on startup, then users authenticated via session cookies will be logged out, any CSRF-protected flows will be interrupted, and any Phoenix.Token values will be ungracefully invalidated. Definitely not great.
Since keeping it stable is apparently so crucial to so much of Phoenix’s operation, what are the risks/drawbacks to just storing it in Ecto after automatically generating it on the first run?
I’m wondering if you’re not better off figuring out system env on Windows rather than figuring out how to work around it. Releases support being installed as Windows services, which, based on the docs, allow you to define additional system env values.
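For reference, a release generated with `mix release.init` includes a `rel/env.bat.eex` template that the Windows launch scripts evaluate before booting the app, so env vars can ship with the release itself (the values below are placeholders, not real secrets):

```
rem rel/env.bat.eex — evaluated by the release scripts on Windows
rem (generated by `mix release.init`; values here are placeholders)
set SECRET_KEY_BASE=paste-generated-secret-here
set DATABASE_PATH=C:\ProgramData\foo\foo.db
```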
Hmm, one difficulty I’m running into there is that Ecto isn’t started during config-load time:
```
** (RuntimeError) could not lookup Ecto repo Foo.Repo because it was not started or it does not exist
```

```elixir
import Config
require Ecto.Query
# …

if config_env() == :prod do
  database_path =
    System.get_env("DATABASE_PATH")
    || Path.expand("foo.db", :filename.basedir(:user_data, "fooApp"))

  # https://elixirforum.com/t/managing-secret-key-base-without-kubernetes-docker-etc/67926?u=james_e
  secret_key_base =
    case System.get_env("SECRET_KEY_BASE") do
      s when not is_nil(s) -> s
      nil ->
        Foo.Util.get_or_insert_one_lazy!(
          Foo.Repo,
          Ecto.Query.from(s in Foo.Repo.Schemas.Secret, where: s.type == "secret_key_base"),
          fn -> %Foo.Repo.Schemas.Secret{type: "secret_key_base", value: Foo.Util.phx_gen_secret} end
        ).value
    end

  # …

  config :fooApp, FooWeb.Endpoint,
    # …,
    secret_key_base: secret_key_base

  # …
end
```
Is there any best practice for making parts of Endpoint config depend on values stored in Ecto, like that? If I were to start Ecto within this file, I’m pretty sure that’d break the supervisor structure, and I don’t see any obvious clean way to transform the phx.new template to do that. Maybe a separate :ignore process that runs immediately after Ecto.Migrator?
This seems like a LOT of hoops to jump through just to avoid the actual good practice of doing a rolling deploy. And if you can’t rely on an environment variable being present on boot, how are you going to connect to the database anyway? Is that not also configured via an env var?
Zooming out: your app has preconditions to booting. The correct thing to do when those preconditions are not met is to not boot, which allows the currently running instance to continue to serve traffic. The boot failure goes to your logs, which gives you alerts, and you fix it.
None of that is docker or container specific, that’s been the normal rolling deploy pattern for 20+ years now.
That’s stored in the application data directory, unless overridden by environment for some ad-hoc reason (such as being run in a deployed, managed, Linux-based or containerized environment).
I’d really like it if the application I’m writing did not offload these “hoops to jump through”, as you put it, to the installing technician.
The existing version of the application I’m writing a replacement for “just works”: you unzip it and run it, and it generates defaults for its own config and state files if needed, which is (in my experience using non-Elixir-based programs) an almost completely universal standard; are you telling me there’s no best-practice pattern for implementing that with Phoenix?
The biggest part of this is getting the order of operations correct. config/runtime.exs is evaluated just after OTP’s kernel/stdlib are started, because you want it to be able to configure any other application (all your dependencies), and once applications are started they have usually already read their config from their application env. At the same time, that means you cannot use any of your dependencies before they are started.
You can certainly go ahead and figure out an order of starting up your dependencies so that you can handle whatever custom configuration needs you have, while deferring the start of anything you want to configure through that configuration system – just as it would need to be done in any other application in any other language. This is not a simple thing to provide out of the box, though: nobody can anticipate which parts of an application can be started without additional configuration and which parts need it.
In your original post you said this is being managed and installed in a boring standard way. I took that to mean that there is still a company who has written software that they are deploying themselves, and would be using some sort of deployment tooling. This is what I would consider standard.
Is this software instead getting shipped to people who haven’t written it and need to run it and manage new versions / deploys manually?
I see… I guess that’s a bit awkward, then, that Phoenix seems hard-coded to expect that lump of application state in an API optimized for storing and loading config.
Mostly thinking out loud, then; I guess there are a few ways to handle that:
- Create the secrets during migration, then figure out how to override FooWeb.Endpoint.start_link to defer setting that config item as a parameter;
- Inject a new “Foo.Util.LoadPseudoConfig” task into the application startup sequence somewhere after Ecto.Migrator but before FooWeb.Endpoint, which pulls values like that out of Ecto and sticks them in the Application config;
- Give up on handling this in software, and just write a buddy-doc SOP to defer the task to a human and force the org to consider more managed deployment going forward.
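The second option could be sketched roughly like this, reusing the helper names from earlier in the thread. This is only a sketch of the idea (a one-shot `:ignore` child placed between the Repo/migrator and the endpoint), not a vetted implementation:

```elixir
defmodule Foo.Util.LoadPseudoConfig do
  require Ecto.Query

  def child_spec(_opts) do
    %{id: __MODULE__, start: {__MODULE__, :start_link, []}, restart: :transient}
  end

  def start_link do
    secret =
      Foo.Util.get_or_insert_one_lazy!(
        Foo.Repo,
        Ecto.Query.from(s in Foo.Repo.Schemas.Secret, where: s.type == "secret_key_base"),
        fn -> %Foo.Repo.Schemas.Secret{type: "secret_key_base", value: Foo.Util.phx_gen_secret()} end
      ).value

    # Merge into the endpoint's app env before FooWeb.Endpoint starts.
    config = Application.get_env(:fooApp, FooWeb.Endpoint, [])
    Application.put_env(:fooApp, FooWeb.Endpoint, Keyword.put(config, :secret_key_base, secret))

    # No process needed; returning :ignore keeps the supervisor moving.
    :ignore
  end
end

# In Foo.Application.start/2, ordering matters:
# children = [Foo.Repo, {Ecto.Migrator, ...}, Foo.Util.LoadPseudoConfig, FooWeb.Endpoint]
```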
That’s not the case. Phoenix will merge config coming from the app env with config passed when starting the endpoint. If you want to you can ditch the app env completely and pass all the config via {MyAppWeb.Endpoint, config}.
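For instance, a minimal sketch of passing config in the child spec (this assumes the value is already computed by the time the child list is built, which is exactly the hard part when it lives in the database):

```elixir
# In lib/foo/application.ex
children = [
  Foo.Repo,
  # Config given here is merged over (and wins against) the app env:
  {FooWeb.Endpoint, secret_key_base: secret_key_base}
]

Supervisor.start_link(children, strategy: :one_for_one, name: Foo.Supervisor)
```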
If you need someone else to install the app without much thinking about configuration, then I think your existing solution is fine.
It provides an automatic and persistent value for SECRET_KEY_BASE, and places it right at the normal place for configuration.
As a source of inspiration, I suggest looking at how Livebook handles config, because similar to your use case it provides this experience of “just works”.
Admittedly, having not read all the replies: there’s Config.Provider if you really need fine-tuned control.
There’s also a lovely library I’ve used called Hush
EDIT:
I’m not sure if it works well at runtime, but there’s also the new read_config, which could be combined with Hush to limit some of the extra calls to your Hush.Provider.
If you want to pull config from the database, you have these options (off the top of my head – there might be more):
1. Finish app startup normally, but schedule loading of config immediately after that (various good options to achieve it) and forbid any database operations until this initializer finishes (via e.g. ETS);
2. Use the Repo’s ability to call a function before each connect attempt (configured via the start_link options described here). Limit it to e.g. pool_index zero so it isn’t run for every allocated connection, and check some sort of global state (e.g. in ETS) to establish whether the config you need from the DB has already been read and whether you should read it again – though if you want runtime-reloadable config, you could leave that check out and have it run on each connect attempt as originally intended in the API. This is slightly involved and I don’t like it, but my former colleagues used the technique successfully for similar purposes. (In our case we wanted to be able to change database details while the app was running, so the callback function we wrote re-read a file storing the full PostgreSQL database URL – this let us migrate databases to new locations with zero app downtime.);
3. Have a single throwaway DB connection that you start manually, i.e. do Postgrex.start_link(...) inside Application.start, execute a bespoke hard-coded SQL query that fetches the config, and move on with life.
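Option 3 could look roughly like the following for a Postgres-backed app (the thread’s actual target is SQLite on Windows, so this is purely illustrative; connection details and the `secrets` table are invented):

```elixir
# Inside Foo.Application.start/2, before building the children list.
# A short-lived Postgrex connection fetches the secret with raw SQL,
# then is shut down; nothing else in the app depends on it.
{:ok, conn} =
  Postgrex.start_link(
    hostname: "localhost",
    username: "foo",
    password: System.fetch_env!("PGPASSWORD"),
    database: "foo"
  )

%Postgrex.Result{rows: rows} =
  Postgrex.query!(conn, "SELECT value FROM secrets WHERE type = 'secret_key_base'", [])

secret_key_base =
  case rows do
    [[value]] -> value
    [] -> raise "secret_key_base not found in the database"
  end

GenServer.stop(conn)
```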
IMO we can’t be mad at the core team for not providing tight integration between the built-in Config API and the [technically] 3rd party library that is Ecto.
If you want quick-and-dirty, go for option 3.
If you want to do it as the Ecto maintainers intended, go for option 2 (also gives you the ability to have the said config be reloaded during runtime without requiring app restart).
I wouldn’t go for option 1 ever, but it’s technically still an option.