Topic Background
Continued from
So, I think you are not starting a Repo?
Leading to:
I would suggest that you start an entirely new topic that focuses on helping you diagnose the issues with
https://github.com/evercam/evercam-server/blob/master/lib/evercam_media/repo.ex#L9
It must have had problems for some time now: `exists?/1` was added over a year ago precisely to provide a way of determining whether or not the repo was still responsive.
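For context, a responsiveness check of that general shape is usually just a trivially cheap query wrapped in a rescue. The following is a hypothetical sketch only, not the actual `exists?/1` implementation from repo.ex; the function name, argument, and repo module here are assumptions:

```elixir
# Hypothetical sketch of a repo liveness probe (NOT the real exists?/1).
# It issues a trivial query and treats any raised exception
# (connection refused, pool checkout timeout, etc.) as "not responsive".
def responsive?(repo \\ Evercam.Repo) do
  try do
    Ecto.Adapters.SQL.query!(repo, "SELECT 1", [])
    true
  rescue
    _error -> false
  end
end
```

The point of such a check is that a repo process can be alive (registered, supervised) yet still unable to serve queries, which is exactly the failure mode worth distinguishing here.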
It’s configured to start up with the rest of the application here:
```elixir
def start(_type, _args) do
  children = [
    Supervisor.child_spec({ConCache, [ttl_check_interval: :timer.seconds(0.1), global_ttl: :timer.seconds(1.5), name: :snapshot_schedule]}, id: :snapshot_schedule),
    Supervisor.child_spec({ConCache, [ttl_check_interval: :timer.seconds(0.1), global_ttl: :timer.minutes(1), name: :camera_lock]}, id: :camera_lock),
    Supervisor.child_spec({ConCache, [ttl_check_interval: :timer.seconds(1), global_ttl: :timer.hours(1), name: :users]}, id: :users),
    Supervisor.child_spec({ConCache, [ttl_check_interval: :timer.seconds(1), global_ttl: :timer.hours(1), name: :camera]}, id: :camera),
    Supervisor.child_spec({ConCache, [ttl_check_interval: :timer.seconds(1), global_ttl: :timer.hours(1), name: :cameras]}, id: :cameras),
    Supervisor.child_spec({ConCache, [ttl_check_interval: :timer.seconds(1), global_ttl: :timer.hours(1), name: :camera_full]}, id: :camera_full),
    Supervisor.child_spec({ConCache, [ttl_check_interval: :timer.hours(1), global_ttl: :timer.hours(24), name: :snapshot_error]}, id: :snapshot_error),
    Supervisor.child_spec({ConCache, [ttl_check_interval: :timer.hours(2), global_ttl: :timer.hours(24), name: :camera_thumbnail]}, id: :camera_thumbnail),
    Supervisor.child_spec({ConCache, [ttl_check_interval: :timer.hours(2), global_ttl: :timer.hours(24), name: :current_camera_status]}, id: :current_camera_status),
    Supervisor.child_spec({ConCache, [ttl_check_interval: :timer.hours(2), global_ttl: :timer.hours(6), name: :camera_response_times]}, id: :camera_response_times),
    Supervisor.child_spec({ConCache, [ttl_check_interval: :timer.seconds(1), global_ttl: :timer.hours(1), name: :do_camera_request]}, id: :do_camera_request),
    Supervisor.child_spec({ConCache, [ttl_check_interval: :timer.hours(2), global_ttl: :timer.hours(24), name: :camera_recording_days]}, id: :camera_recording_days),
    worker(EvercamMedia.Scheduler, []),
    worker(EvercamMedia.Janitor, []),
    worker(EvercamMedia.StorageJson, []),
    supervisor(EvercamMediaWeb.Endpoint, []),
    supervisor(EvercamMedia.Snapshot.StreamerSupervisor, []),
    supervisor(EvercamMedia.Snapshot.WorkerSupervisor, []),
    supervisor(EvercamMedia.Snapmail.SnapmailerSupervisor, []),
    supervisor(EvercamMedia.SnapshotExtractor.ExtractorSupervisor, []),
    supervisor(EvercamMedia.EvercamBot.TelegramSupervisor, []),
    :hackney_pool.child_spec(:snapshot_pool, [timeout: 5000, max_connections: 1000]),
    :hackney_pool.child_spec(:seaweedfs_upload_pool, [timeout: 5000, max_connections: 1000]),
    :hackney_pool.child_spec(:seaweedfs_download_pool, [timeout: 5000, max_connections: 1000])
  ]

  :ets.new(:extractions, [:public, :named_table])

  # See http://elixir-lang.org/docs/stable/elixir/Supervisor.html
  # for other strategies and supported options
  opts = [strategy: :one_for_one, name: EvercamMedia.Supervisor]
  Supervisor.start_link(children, opts)
end

# Tell Phoenix to update the endpoint configuration
# whenever the application is updated.
```
The supervisor strategy here is `:one_for_one`, which means a crashed child (the repo included) is simply restarted on its own, without its siblings being restarted.
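As a quick, self-contained illustration of that restart behaviour (nothing Evercam-specific, just two throwaway `Agent` children):

```elixir
# A minimal :one_for_one supervisor: if one child crashes, only that
# child is restarted; its sibling keeps running with its state intact.
children = [
  Supervisor.child_spec({Agent, fn -> 0 end}, id: :worker_a),
  Supervisor.child_spec({Agent, fn -> 0 end}, id: :worker_b)
]

{:ok, sup} = Supervisor.start_link(children, strategy: :one_for_one)

# Kill one child; the supervisor restarts it, leaving the other alone.
[{_id, pid, _type, _mods} | _] = Supervisor.which_children(sup)
Process.exit(pid, :kill)
Process.sleep(100)

# Both children are active again after the restart.
Supervisor.count_children(sup)
```

Note the flip side of this: between the crash and the restart there is a short window in which calls to the dead child fail, which is relevant to the failures discussed below.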
Just a guess: the failures you have been witnessing may have occurred shortly after the repo crashed but before the application restarted it. Basically, scour your logs for any evidence that might reveal why the repo is behaving so erratically.
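If the current logs are silent about crashes, one thing worth checking is whether Logger is forwarding OTP/SASL reports at all; without that, supervisor child crash and restart reports never reach the log. A sketch of the relevant (standard Logger) settings:

```elixir
# config/config.exs — forward OTP and SASL reports to Logger so that
# supervisor child crashes and restarts show up in the application log.
import Config

config :logger,
  handle_otp_reports: true,
  handle_sasl_reports: true
```

With SASL reports enabled, every restart of the repo child should leave a crash report you can correlate against the timestamps of the failed requests.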