ECTO 3 - Postgres - Chunked insert_all disconnect

I’m testing a large insert_all with chunked data (there are too many parameters to handle in a single insert_all).
Past some total row count (around 20k), the insert completes but then throws this error:

[error] Postgrex.Protocol (#PID<0.731.0>) disconnected: ** (DBConnection.ConnectionError) client #PID<0.1709.0> exited

Any suggestions?
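For context on why the data is chunked at all: the Postgres wire protocol caps a single prepared statement at 65_535 bind parameters, so the safe chunk size depends on how many columns each row carries. A quick sketch (the column count here is illustrative, not from the actual schema):

```elixir
# Postgres limits one statement to 65_535 bind parameters, so
# rows_per_chunk * columns_per_row must stay under that cap.
columns_per_row = 10                        # illustrative; use your schema's column count
max_rows = div(65_535, columns_per_row)
IO.inspect(max_rows)                        # 6553 rows per chunk at 10 columns
```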

Versions

Ecto latest
Elixir latest
Postgres 9.X (Azure)

repo.ex

defmodule MyApp.Repo do
  use Ecto.Repo,
    otp_app: :my_app,
    adapter: Ecto.Adapters.Postgres
end

config.exs

  config :my_app, MyApp.Repo,
    ...
    ssl: true

dev.exs

config :my_app, MyApp.Repo,
  pool_size: 8,
  timeout: 6_000_000,
  pool_timeout: 600_000

Code

data =
  Sheet.new_generate(subdomain, new_local_file, id, list_names, list_accepted)
_check =
  data
  |> Enum.count()
  |> IO.inspect()
_total =
  data
  |> Enum.chunk_every(@qty)
  |> Enum.map(fn x ->
    Repo.insert_all(
      PeopleTarget,
      x,
      [
        on_conflict: :replace_all,
        conflict_target: [:list_id, :email],
        prefix: "tenant_#{subdomain}"
      ]
    )
  end)

PS: @qty = 2_500. I can also confirm that all rows are unique (Enum.uniq_by/2) and that the insert finished (inserted rows match the expected count).
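One thing worth ruling out before blaming the pool: Repo.insert_all/3 accepts a per-call :timeout option, so a long-running chunk can be given more time than the repo-wide default without touching global config. A hedged sketch of the same loop with that option added (names as in the original code, the timeout value is illustrative):

  data
  |> Enum.chunk_every(@qty)
  |> Enum.each(fn chunk ->
    Repo.insert_all(
      PeopleTarget,
      chunk,
      on_conflict: :replace_all,
      conflict_target: [:list_id, :email],
      prefix: "tenant_#{subdomain}",
      # per-query timeout in ms; overrides the repo-wide :timeout for this call
      timeout: 60_000
    )
  end)

Enum.each/2 is used instead of Enum.map/2 since the per-chunk results were discarded anyway.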


FYI: pool_timeout no longer works in Ecto 3, see https://github.com/elixir-ecto/ecto/issues/2833#issuecomment-440400022
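For anyone landing here later: in Ecto 3 the checkout-queueing behaviour moved to DBConnection's :queue_target / :queue_interval options. A sketch of what the dev config could look like instead (values are illustrative, not recommendations):

  config :my_app, MyApp.Repo,
    pool_size: 8,
    timeout: 60_000,
    # Ecto 3 / DBConnection: replaces :pool_timeout
    queue_target: 5_000,    # ms a checkout may wait before the queue counts as slow
    queue_interval: 10_000  # ms window over which queue_target is evaluated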


Thanks, I’ll take a look.

But the official Ecto docs still list them as options.

@josevalim

Yeah we’re working on updating that.

Done!!!
