Recommended way to set env variables when using Ueberauth in development?

I am currently using Ueberauth to let users authenticate via third parties. I can make the *_CLIENT_ID and *_CLIENT_SECRET values available in my dev/test env with, for example:

export GOOGLE_CLIENT_ID="..."
export GOOGLE_CLIENT_SECRET="..."

But how can I prevent having to export the variables with each new shell session?

Maybe I have taken a wrong approach to setting the env variables altogether? What is the recommended way to make them available to a dev/test environment?

Is it common to add a shell script somewhere? I see that Fly.io generated an env.sh.eex to export some variables. Should I do the same? Not sure how to run that before each compile, however…

For development, the answer is probably to use direnv.
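
In case it’s useful, the direnv side is just an .envrc in the project root (and in .gitignore). direnv loads it automatically when you cd into the project, and you run direnv allow once after creating or editing it. A minimal sketch with the vars from your post:

# .envrc
export GOOGLE_CLIENT_ID="..."
export GOOGLE_CLIENT_SECRET="..."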

Personally, I just make myself a shell script in the root directory that I call dev that has the following as a bare minimum.

#!/usr/bin/env bash

# Expects .env to contain `export KEY=value` lines
source .env

case "$1" in
  c)
    # Console only
    iex -S mix
    ;;

  *)
    # Default: Phoenix server inside IEx
    iex -S mix phx.server
    ;;
esac

Then just ./dev to start the dev server.

For production, it depends on how you deploy, of course.

2 Likes

This is great. Thank you.

For production I use Fly.io. You can set “secrets” with their CLI or on their website.
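
For reference, with their CLI that’s something like this (names from this thread, values elided):

fly secrets set GOOGLE_CLIENT_ID="..." GOOGLE_CLIENT_SECRET="..."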

Or you can create an additional config file that you add to .gitignore and import from dev.exs or test.exs.
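
A sketch of that approach, assuming ueberauth_google and a gitignored config/dev.secrets.exs (a commonly seen name, not a requirement):

# config/dev.exs
import Config

# ...the rest of your dev config...

# Only load machine-local secrets when the file exists,
# since config/dev.secrets.exs is not committed.
if File.exists?("config/dev.secrets.exs") do
  import_config "dev.secrets.exs"
end

# config/dev.secrets.exs (gitignored)
import Config

config :ueberauth, Ueberauth.Strategy.Google.OAuth,
  client_id: "...",
  client_secret: "..."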

2 Likes

Is that what dev.secrets.exs was/is?

I mean there is no convention, but yeah, I’ve seen this used in a few projects.

I’ve seen it around also.

I got the impression that it used to be generated in Phoenix apps by a previous version of the phx generator. See for example here: elixir - Why does Phoenix need a secrets config file for environment variables? - Stack Overflow.

But thanks again. :raised_hands:

1 Like

My 2c: I tend to prefer explicitness for env vars, rather than having things magically get loaded. I’ve seen and used this mechanism before:

export $(cat .env | xargs) && mix phx.server

direnv works well, but assumes folks have it set up (most do). The above is more explicit, in my opinion. I don’t mind the verbosity, mostly because I rarely have to type it out (it’s always in a readme or in shell history somewhere).

When working with others, there is usually something in the readme that will help (including yourself when you forget :slight_smile:). As far as the mechanics (see the sketch after this list):

  • There is usually a .env.example that is committed to the repo (example)
  • an entry for an .env file in .gitignore (example)
  • instructions to create your .env file in the readme (example)
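
A minimal sketch of that layout, using the vars from this thread (contents are illustrative, not prescriptive):

# .env.example (committed to the repo; copy it to .env and fill in real values)
GOOGLE_CLIENT_ID=
GOOGLE_CLIENT_SECRET=

# .gitignore
.env

With real values in .env, the one-liner above picks them up (it assumes simple values without spaces).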

If you don’t have a lot of env vars, I’d start with SOME_VAR=your_var mix phx.server and build from that. Then, consider the above if that sounds interesting.

1 Like

I don’t know why I never thought of this—it seems so obvious, lol.

The dev script still comes in handy, at least for one project where I need to start up multiple nodes. Of course, Docker is probably the “real” answer there, but I’m stubborn. Maybe there is another way? Though this is getting a little off topic from purely ENV vars.

I use (and love) direnv, but for working with others, I find that dotenvy is a good solution with less friction than direnv.

It’s another dependency, but it gets rolled in with the others when you run mix deps.get. Plus it works in all environments, instead of only ones where direnv is installed (CI, shared dev boxes, etc.).
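
For what it’s worth, dotenvy’s documented pattern is to call it from config/runtime.exs. A minimal sketch for the vars in this thread (assuming ueberauth_google):

# config/runtime.exs
import Config
import Dotenvy

# Load values from .env, then the system environment (later sources win).
source!([".env", System.get_env()])

config :ueberauth, Ueberauth.Strategy.Google.OAuth,
  client_id: env!("GOOGLE_CLIENT_ID", :string),
  client_secret: env!("GOOGLE_CLIENT_SECRET", :string)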

I’ve encountered the custom config.exs strategy mentioned by @D4no0, and I’m not a fan of the concept. Dotenv files are closer to a “universal” configuration system, so that’s a big plus for me. It allows me to share a config system between my Elixir project, my Docker Compose config, etc.

1 Like

I almost never start production Docker images on my machine, but it is true that it is a pain to start them, and I usually end up like an ape passing the env variables before the docker command :sob:.

1 Like

I’ve heard arguments for using Docker and Docker Compose for development, not just production.

For example here:

I don’t get the impression this is a common practice, however. Any idea why? Simply because it is extra effort for little gain?

I say “probably” because I tend to avoid Docker myself. I don’t have anything against it; I just spent a few years entirely disinterested in anything ops-related. That’s changing, however, so I will probably reacquaint myself. At a past job we used Docker for development (I just didn’t do any of the setup). I actually have no idea why anyone would think it’s a bad idea, as it went quite smoothly for us. The only thing I can think of is that it’s memory-intensive on macOS, so maybe with larger apps that becomes more of a thing. I have yet to watch the video you shared, but I’ll add it to my queue.

1 Like

Why would you do that? Maintaining a readme of what should be installed and how it should be configured is much more time-efficient than losing time to recompilation, storage space, and RAM.

No idea. You’d have to ask the ops team at that job who mandated it :sweat_smile: I didn’t notice recompilation times being too bad, but the whole thing was a Rube Goldberg machine anyway, so I always chalked the slight slowness up to that. It can definitely make sense for people working on multiple projects using different versions of Postgres, for example, though I’m never in that situation myself. But ya, it certainly didn’t feel necessary on a company machine dedicated to one project.

1 Like

I’ve heard the argument that using Docker in development can prevent bugs caused by differences in OS, and more generally the argument that the closer the development environment is to the production environment, the smaller the potential for issues when pushing to production.

I don’t really have an opinion about that argument myself. But I’m interested in the topic. Do these arguments make sense and are they valid in practice?

Usually, if you have a dockerized prod, your CI will run the tests in Docker containers too, so as long as those tests pass on CI, there should be absolutely no problems.

I don’t think so. As long as your setup creates a release and you use the same versions of Elixir and OTP in prod as in dev, everything will work the same.

The only real benefit I see is that you can preinstall and configure things in a repeatable way, which is literally why Docker is useful in the first place. In practice, though, I’ve never had a project that required anything more than installing a few local dependencies.

I did have some Ecto bugs on the last project I worked on, setting up the sandbox for testing. Those surfaced only in the Docker image on CI; no idea what that was about. A possible explanation would be that we were using an old version of exqlite.

2 Likes

That’s a very convincing counter argument :sweat_smile:.

One option is to run a Docker container with Elixir installed as part of the docker compose setup and share the project folder as a volume. In that case, you just run mix phx.server within the Docker container instead of locally on your machine (probably with an entrypoint script), so you don’t have to rebuild the image all the time.

But nowadays, I usually just have a docker compose setup for Postgres, Minio, or whatever services I need for development, and run the Phoenix server locally. That’s a bit less of a hassle.
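
As a sketch, that docker compose setup can be as small as this (image versions and credentials are just examples):

# docker-compose.yml (dev services only; the Phoenix server runs on the host)
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: postgres
    ports:
      - "5432:5432"

Then docker compose up -d starts the services and mix phx.server runs as usual.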


If you need secrets in your local environment and you use a password manager with a CLI (like 1Password or Bitwarden), you can use that to set the environment variables. This has the advantage that you can just add a shell script to set those variables, and the script can safely be committed. Then you just have to make sure every team member has access to the shared vault.
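
As a sketch of the 1Password variant (the vault and item names are made up; assumes the op CLI is installed and you are signed in):

#!/usr/bin/env bash
# dev-secrets: safe to commit, because it only contains secret *references*.
# .env.op holds lines like:
#   GOOGLE_CLIENT_ID="op://Dev-Shared/google-oauth/client_id"
#   GOOGLE_CLIENT_SECRET="op://Dev-Shared/google-oauth/client_secret"
# `op run` resolves the references and injects the values into the command's environment.
op run --env-file=.env.op -- iex -S mix phx.server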

3 Likes