Giving `nil` for the `:pool` key in the `MyApp.Repo` configuration causes an error

It seems there is an issue where giving `nil` as the value of the `:pool` key in the Ecto configuration of a Phoenix application results in Ecto attempting to use `nil` as a module. You can reproduce this minimally by creating a new Phoenix application, setting `config :app_name, AppName.Repo, pool: nil`, and then attempting to run a simple migration.
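A minimal sketch of the configuration that triggers this, assuming a freshly generated app (app and module names are placeholders):

```elixir
# config/dev.exs, minimal reproduction sketch; names are placeholders
config :app_name, AppName.Repo,
  username: "postgres",
  password: "postgres",
  hostname: "localhost",
  database: "app_name_dev",
  # Explicitly nil, intending "use the default"; this is what triggers the error
  pool: nil
```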

I’m wondering if this is expected behavior and I’m just not understanding how to configure things correctly?

The issue I have with this is that I’d really prefer to manage all my configuration using Dotenvy and a runtime config. But because supplying nil is treated as supplying a value, rather than falling back to the default, that approach breaks.

I can work around this by checking the environment and configuring things differently, but that feels pretty off to me. Is this just a thing to work around, or should I consider other approaches?

For anyone to be able to help, you need to provide the relevant parts of your config files where Ecto is being configured.

Also, prefer to paste everything as text inside a code block, instead of using images.


If you’d like Ecto to pick up the default, the correct procedure is to leave the key out of your configuration instead of explicitly setting it to nil.
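One way to do that while still driving everything from environment variables is to build the keyword list conditionally. A sketch, using plain `System.get_env/1` for illustration (the Dotenvy equivalent works the same way; `DATABASE_POOL_MODULE` is a hypothetical variable name):

```elixir
defmodule RepoConfig do
  # Sketch: build Repo options so that :pool is only present when a module
  # name was actually provided; an absent key lets Ecto use its own default.
  def build(env) when is_map(env) do
    base = [url: Map.get(env, "DATABASE_URL")]

    case Map.get(env, "DATABASE_POOL_MODULE") do
      nil -> base
      mod -> base ++ [pool: String.to_atom("Elixir." <> mod)]
    end
  end
end
```

In `runtime.exs` you would pass `RepoConfig.build(System.get_env())` (or the Dotenvy-sourced values) straight to `config/3`.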


@Exadra37, fair points; I let myself get sloppy by being in a rush. To be clear, my question is not “how do I fix this?” so much as “why would this be the case?” We have a workaround, but I would like to better understand why this behavior surfaces and what the trade-offs are for projects that use configuration this way.

All that said, here’s a more in-depth coverage of the situation.

I would prefer to be able to configure the application using runtime.exs and to have my App.Repo configuration look like:

config :app, App.Repo,
  url: env!("DATABASE_URL", :string!),
  username: env!("DATABASE_USERNAME", :string!, nil),
  password: env!("DATABASE_PASSWORD", :string!, nil),
  hostname: env!("DATABASE_HOSTNAME", :string!, nil),
  stacktrace: env!("DATABASE_STACKTRACE", :boolean?, false),
  show_sensitive_data_on_connection_error: env!("DATABASE_SHOW_SENSITIVE", :boolean?, false),
  pool_size: env!("DATABASE_POOL_SIZE", :integer!, 10),
  pool: env!("DATABASE_POOL_MODULE", :module?),
  database:
    "#{env!("DATABASE_DATABASE", :string!, nil)}#{env!("MIX_TEST_PARTITION", :string?, nil)}"

What I would prefer to avoid is having to conditionally configure the Repo, leaving out entries. I had assumed that there was no semantic difference between supplying a key with a value of nil and not supplying the key at all. This assumption appears to hold for any number of other keys, but not for the pool entry.
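The presence/absence distinction is easy to see with Keyword, which is what the Repo options ultimately are (the module name here is just an illustrative atom):

```elixir
# A key set to nil is still a present key; only a truly absent key
# causes Keyword.get/3 to return the supplied default.
Keyword.fetch([pool: nil], :pool)             # {:ok, nil} (key present, value nil)
Keyword.fetch([], :pool)                      # :error (key absent)
Keyword.get([pool: nil], :pool, DefaultPool)  # nil, not DefaultPool
Keyword.get([], :pool, DefaultPool)           # DefaultPool
```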

I tried to track down why, and it’s a bit rough trying to locate exactly how and where the configuration is being read. I ended up finding a lookup function that is pulling data out of ETS here.

The result of this is the semantics of “if the value is set, then it is probably the correct type,” which feels like a weird assumption to make. I’m wondering if there’s a reason I’m not thinking of that makes it preferable?
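A nil-tolerant lookup, had the library chosen those semantics, could be as simple as this sketch. To be clear, this is not what Ecto does today; `DBConnection.ConnectionPool` is the default pool ecto_sql falls back to when `:pool` is absent:

```elixir
defmodule PoolLookup do
  @default_pool DBConnection.ConnectionPool

  # Hypothetical helper: fall back to the default when the stored value
  # is nil, not only when the key is absent.
  def pool(opts) do
    case Keyword.get(opts, :pool) do
      nil -> @default_pool
      mod -> mod
    end
  end
end
```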

The other issue is that, at least for the minimal re-creation, the error message is pretty off:

07:42:55.708 [error] Could not create schema migrations table. This error usually happens due to the following:

  * The database does not exist
  * The "schema_migrations" table, which Ecto uses for managing
    migrations, was defined by another library
  * There is a deadlock while migrating (such as using concurrent
    indexes with a migration_lock)

To fix the first issue, run "mix ecto.create" for the desired MIX_ENV.

To address the second, you can run "mix ecto.drop" followed by
"mix ecto.create", both for the desired MIX_ENV. Alternatively you may
configure Ecto to use another table and/or repository for managing migrations:

    config :show_me, ShowMe.Repo,
      migration_source: "some_other_table_for_schema_migrations",
      migration_repo: AnotherRepoForSchemaMigrations

The full error report is shown below.

** (UndefinedFunctionError) function nil.checkout/3 is undefined
    nil.checkout(#PID<0.302.0>, [#PID<0.95.0>], [log: #Function<13.1785099/1 in Ecto.Adapters.SQL.with_log/3>, timeout: :infinity, log: false, schema_migration: true, telemetry_options: [schema_migration: true], repo: ShowMe.Repo, timeout: 15000, pool: nil, pool_size: 2])
    (db_connection 2.5.0) lib/db_connection.ex:1200: DBConnection.checkout/3
    (db_connection 2.5.0) lib/db_connection.ex:1525:
    (db_connection 2.5.0) lib/db_connection.ex:656: DBConnection.parsed_prepare_execute/5
    (db_connection 2.5.0) lib/db_connection.ex:648: DBConnection.prepare_execute/4
    (postgrex 0.17.1) lib/postgrex.ex:361: Postgrex.query_prepare_execute/4
    (ecto_sql 3.10.1) lib/ecto/adapters/sql.ex:438: Ecto.Adapters.SQL.query!/4
    (elixir 1.14.3) lib/enum.ex:1658: Enum."-map/2-lists^map/1-0-"/2

This is after doing the following:

mix phx.new show_me
cd show_me
# modify the config/dev.exs for ShowMe.Repo to have
# pool: nil
mix ecto.create
mix ecto.migrate

To be fair, the stack trace makes what’s happening pretty clear. Though the initial message can be quite confusing to people who aren’t keen on reading stack traces, when they’re given what appears to be a “blessed” explanation.

All this is to say, there are two obvious ways around this:

  1. You can conditionally configure based on the runtime environment(s).
  2. You can explicitly supply the same default value that the library itself would use.
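A sketch of option 2 in runtime.exs, assuming Dotenvy’s env!/3 and that `DBConnection.ConnectionPool` is the default ecto_sql falls back to (worth verifying against the version you’re running):

```elixir
# Option 2 sketch: supply the library's default ourselves when the env var is unset.
config :show_me, ShowMe.Repo,
  url: env!("DATABASE_URL", :string!),
  pool: env!("DATABASE_POOL_MODULE", :module?, nil) || DBConnection.ConnectionPool
```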

In general, I would prefer option 2, but sorting out what that default value ought to be can be challenging when the documentation is dense. It would be really preferable to simply let a nil through and have it behave as though the key were absent.

I bumped into this as well, in the exact same way that you did, the pool variable in runtime.exs.

Note that ‘nil’ is just an atom (try is_atom(nil)), exactly like the module atom of a pool that you would pass in. Elixir can’t easily tell whether any given atom is an actual ‘pool’ or not; it only knows that you’ve handed it an atom, and it tries to capture/call functions on that atom.

As you’ve noticed, setting something to nil isn’t the same as the key being undefined; functions differ in how they treat this, e.g. Map.get conflates the two, but Map.fetch doesn’t.

I ended up with a function in my runtime.exs that would add a key and its value only if the value was non-nil, and I used it everywhere in that config.
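Such a helper might look like this sketch (the module and function names are hypothetical):

```elixir
defmodule ConfigHelpers do
  # Hypothetical helper: include a key only when its value is non-nil,
  # so libraries see an absent key and apply their own defaults.
  def maybe_put(opts, _key, nil), do: opts
  def maybe_put(opts, key, value), do: Keyword.put(opts, key, value)
end
```

In runtime.exs this lets you pipe every optional setting through `maybe_put/3` and hand the resulting keyword list to `config/3`.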

Map.get actually knows the difference as well; it’s just that its default value is nil unless otherwise specified. Both functions detect whether a key is present and act according to their specification. Neither treats nil as a missing key. For example:

iex(3)> Map.get(%{a: nil}, :a, "test")
nil
iex(4)> Map.get(%{a: nil}, :b, "test")
"test"
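Map.fetch, on the other hand, makes the presence distinction explicit by returning a tagged tuple instead of a default:

```elixir
Map.fetch(%{a: nil}, :a)  # {:ok, nil} (key present, value nil)
Map.fetch(%{a: nil}, :b)  # :error (key absent)
```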

You need to remember that runtime.exs runs when your application is being booted, while the other config files get their values at compile time, which may or may not be the cause of your issue. For example, if a given :key value gets configured at compile time in Ecto, and no further attempt is made to read it when your application boots, then you get the nil you configured it with. So, I bet the nil value is not replaced by the one in runtime.exs because the Ecto lib only reads it at compile time. Some functions in the modules of some libs are only invoked at compile time, which I discovered the hard way, after spending hours debugging an issue of a runtime configuration not being applied.
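The compile-time vs runtime distinction the post describes can be demonstrated directly. Application.compile_env/3 is captured when the module is compiled, while Application.get_env/3 reads at call time and therefore sees values set at boot (the app and key names below are illustrative):

```elixir
defmodule Cfg do
  # Captured once, when this module is compiled; values set later
  # (e.g. by runtime.exs) are not reflected here.
  @compiled Application.compile_env(:my_app, :some_key, :default)

  def compiled, do: @compiled

  # Read each time it's called, so it sees values set at boot
  # (e.g. from runtime.exs or Application.put_env/3).
  def runtime, do: Application.get_env(:my_app, :some_key, :default)
end
```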

This subtle difference between compile-time and runtime configuration has always been a major source of confusion in Elixir, especially for devs coming from dynamic languages.

The runtime.exs file is only evaluated before the application is started, so a better name might be boottime.exs, to reflect exactly when it’s really used; but as we all know, naming is one of the hard things in our profession.


The compile time/runtime difference is certainly one that I’m aware of, though I don’t think that I get it in a way that’s useful just yet. I keep messing it up in my own library experiments. Configuration is a tough spot for sure. In this case I believe that @jerdew is correct: Ecto is treating the atom it receives as a module without performing a nil check.

In our case, I don’t think the compile-time values are the issue. By digging into the Ecto source, we can find the default value Ecto would use when the key is absent and supply it as the default returned by env!/3. That works, which is nice.

What I think we’ll end up doing is configuring the defaults through our environment files for the time being. This achieves the goal I set out for originally: to have all configuration variables for a given environment in one source-of-truth file. It has the added benefit of being something we can store in k8s secrets, which simplifies distributing development secrets/configuration during onboarding. A new developer simply needs to get their role configured for cluster access and they can install the file.

Obvious downside is that we’re no longer depending on the libraries to handle defaults, so if they change then we’ll need to keep up with that. But it’s a tradeoff that feels worth it for now at least.

Much appreciate everyone’s thoughts on this!


I also prefer to only use runtime.exs as much as possible, except for when it’s not possible.

I have a proof of concept for an unrelated thing, but in the docs you can read how I use runtime.exs: