DBConnection.ConnectionError for regular queries, but migrations work

This is a follow-up to “Problem Creating PostGres schema in Ecto migration... and BONUS problems with queries” – sorry for the double post (things have evolved a bit).

I’m trying to get Ecto up and running on a new server and new database, and it’s been a rough ride.

Oddly, running migrations works fine – the commands that are run (successfully) boil down to this:

iex> config = [
  pool: DBConnection.ConnectionPool,
  pool_size: 50,
  database: "my_app",
  name: :migrations
  # ... etc...
]
{:ok, pid} = MyApp.Repo.start_link(config)
Ecto.Migrator.run(MyApp.Repo, :up, all: true, dynamic_repo: :migrations)
Process.exit(pid, :normal)

(Note: we run migrations this way on prod because it’s a built release, so they have to run without mix.)
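
For comparison, the more common pattern I’ve seen – it’s roughly what the Ecto release docs describe – wraps this in a small release module using Ecto.Migrator.with_repo/2. A sketch, with the module and app names purely illustrative:

defmodule MyApp.Release do
  @app :my_app

  # Called from the release, e.g. `bin/my_app eval "MyApp.Release.migrate()"`
  def migrate do
    # Load the app so its config (including :ecto_repos) is available
    Application.ensure_loaded(@app)

    for repo <- Application.fetch_env!(@app, :ecto_repos) do
      # with_repo/2 starts the repo if needed, runs the function, then stops what it started
      {:ok, _, _} = Ecto.Migrator.with_repo(repo, &Ecto.Migrator.run(&1, :up, all: true))
    end
  end
end

Either way it boils down to the same thing: start a repo, run Ecto.Migrator against it, shut it down.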

But as soon as I try to actually run a normal query, everything fails:

iex> Ecto.Adapters.SQL.query!(MyApp.Repo, "SELECT VERSION()", [])

** (DBConnection.ConnectionError) connection not available and request was dropped from queue after 1094ms. This means requests are coming in and your connection pool cannot serve them fast enough. You can address this by:

  1. Ensuring your database is available and that you can connect to it
  2. Tracking down slow queries and making sure they are running fast enough
  3. Increasing the pool_size (although this increases resource consumption)
  4. Allowing requests to wait longer by increasing :queue_target and :queue_interval

See DBConnection.start_link/2 for more information

    (ecto_sql 3.8.3) lib/ecto/adapters/sql.ex:932: Ecto.Adapters.SQL.raise_sql_call_error/1
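
For reference, the knobs that error message mentions all live in the repo config. A sketch of where they would go – the numbers are illustrative (queue_target and queue_interval are shown at their DBConnection defaults), not a recommendation:

config :my_app, MyApp.Repo,
  pool_size: 50,          # max connections held in the pool
  queue_target: 50,       # ms a checkout may wait before the pool is considered overloaded
  queue_interval: 1_000   # ms window over which queue_target is evaluated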

This one has really been dogging me. I thought the configs etc. would copy over from a working app, but I’m missing something. Any help or suggestions are appreciated.


Have you set the schema search path, or prefixed the query? The symptoms don’t obviously point to that, but perhaps it’s a contributing factor?
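
By “prefixed” I mean passing the :prefix option on a query (e.g. Repo.all(query, prefix: "public")), and by “search path” something roughly like an :after_connect hook in the repo config – a sketch, with the schema name only an example:

config :my_app, MyApp.Repo,
  # runs once per new connection: Postgrex.query!(conn, "SET search_path TO public", [])
  after_connect: {Postgrex, :query!, ["SET search_path TO public", []]}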

No, I haven’t. The query (even SELECT VERSION();) runs fine in the SQL IDE. I’m comparing against the database used by the older app, but I can’t spot any differences. In the other database, this is set in the first migration (from a file):

SELECT pg_catalog.set_config('search_path', '', false);

but no such line is executed against the new database, so I’m not sure it’s relevant.

Well, I’m not sure what I missed, but it had something to do with the fact that with an AWS-managed database you don’t get a SUPERUSER login. I must have crossed wires somewhere in my setup, but I could never spot the error. Tearing the database down and re-creating it fresh fixed the problem.
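
For anyone who hits the same thing: you can check what the role Ecto connects as is actually allowed to do straight from iex – on an AWS-managed instance the master user typically shows rolsuper = false, which is what bit me. A quick sketch:

# Inspect the attributes of the role the repo is connecting as
Ecto.Adapters.SQL.query!(
  MyApp.Repo,
  "SELECT rolname, rolsuper, rolcreatedb, rolcreaterole FROM pg_roles WHERE rolname = current_user",
  []
)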

Oh yes, that happened to me, twice, on my Mac, and it made me curse Postgres – or rather, the Homebrew installation of it – a lot.

Postgres is great… until it isn’t. And tonight I was really missing the visibility that MySQL offered… even getting a dump from a Postgres instance or restoring one is quite difficult by comparison. I’m discovering the differences between pg_dump and pg_dumpall and getting quite frustrated that I keep having to drop into the psql shell to get a good view of what’s going on… oy.


Postgres is a bit like Linux. They hyper-focus on some core functionality and ignore everything else, handwaving it away as “tooling that the community will invent”. That happened to both Linux and Postgres – they do have amazing community-invented tooling – but it can be a huge grind to find (or invent) the right tools for your use-cases.

I still keep around several bash scripts for all of this: capture a PG dump, or restore one, and stick it in an incremental Borg backup repo (deduplication works really well there – I’ve achieved 52% compression on 20+ incremental PG dumps); add a new user and grant them full access to specified DBs; remove a user; revoke a user’s privileges; re-set up SUPERUSER – the very thing you struggled with – and more.

As I get older I am growing weary and very sick of all this. I am quite good at it and have impressed most teams I have ever been part of (sadly a big number of them; contracting is a tough area). But I kind of hoped the IT world at large would gradually inch its way toward standardizing such higher-level tooling and maybe even merging it into the mainstream software (in this case Postgres).

Alas, everybody still wants to use:

  • curl, when xh is good enough for 95% of the cases, has colored output and is much easier to use;
  • bash and zsh, when oilshell, nushell and fish are much better in every perceivable way;
  • baseline Linux, when stuff like systemd and sometimes SELinux makes the experience much better (happily, systemd is pretty prevalent these days);
  • less, when tools like jless let you pipe JSON straight in from any downloader and get a CLI / TUI hierarchical view of it…
  • …etc. to infinity.

Yes, right now I am still enjoying the career and money benefits of the fact that I am mercilessly practical and I invent my own tooling everywhere I go. Sure. But it’s getting super tiring. I hoped this would all be baseline tooling by now, as prevalent as the UNIX tools that people don’t want to part ways with (even though, e.g., the standard gzip tool is single-threaded and always will be).

The fact that we are forever stuck with the lowest common denominator is what keeps boiling my blood and is likely the thing that will make me quit commercial programming forever.

Sounds like this hits close to home. Maybe you could publish your suite of tools and charge corporate rates to use them and then retire in infamy. Solidarity :fist:


If only corporations didn’t take everything for free and then sue you for copyright after they steal your stuff. :003:

But maybe you’re right. I’ve been too employee-minded for a long time and this has to change.