The purpose of this question is to ask how you would handle things, and what the better options are, for my port-monitoring use case.
I am building an application for myself that monitors whether the port of each NVR is open or closed. We currently have 100 NVRs (Hikvision). The application is built in Phoenix, front end and back end.
We will check with a frequency of once per minute for now, so there will be roughly 100 INSERT queries to the database each minute. I am using PostgreSQL. To check whether a port is open I am using:
def port_open?(address, port) do
  # Try a TCP connect with a 1-second timeout; port arrives as a string.
  case :gen_tcp.connect(to_charlist(address), String.to_integer(port), [:binary, active: false], 1000) do
    {:ok, socket} ->
      :gen_tcp.close(socket)
      true

    {:error, _reason} ->
      false
  end
end
The method above is as simple as it gets, but in the `{:ok, socket}` branch, is there any information we can get from the socket — for example, how long the connection took, or anything about the server whose port we are testing?
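To make the timing part of the question concrete, here is a sketch of what I mean: wrapping the connect in `:timer.tc` to capture the elapsed microseconds. The module and function names are my own, and unlike my current code this version assumes the port is already an integer:

```elixir
defmodule PortCheck do
  # Sketch: returns {open?, elapsed_microseconds} for one address/port pair.
  def port_check(address, port) when is_integer(port) do
    {elapsed_us, result} =
      :timer.tc(fn ->
        :gen_tcp.connect(to_charlist(address), port, [:binary, active: false], 1000)
      end)

    case result do
      {:ok, socket} ->
        :gen_tcp.close(socket)
        {true, elapsed_us}

      {:error, _reason} ->
        {false, elapsed_us}
    end
  end
end
```

So `PortCheck.port_check("192.168.1.10", 8000)` would give me both the status and a latency figure I could store alongside it.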
Right now my plan is to have fields in the table such as:

nvr_id :integer
status :boolean
done_at :utc_datetime
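As an Ecto migration, the table I have in mind would look roughly like this (module, table, and index names are my own):

```elixir
defmodule MyApp.Repo.Migrations.CreatePortChecks do
  use Ecto.Migration

  def change do
    create table(:port_checks) do
      add :nvr_id, :integer
      add :status, :boolean
      add :done_at, :utc_datetime
    end

    # Index for the query I expect to run most: latest status per NVR.
    create index(:port_checks, [:nvr_id, :done_at])
  end
end
```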
My question is: as the data grows, we may hit query timeouts or other issues with PostgreSQL. In the past, once we had some millions of records in the database, subsequent INSERT queries never worked entirely reliably — mostly SSL/TLS errors, and sometimes timeouts. Should I keep using the same database, separate this data from the actual project database, or switch to MongoDB or RethinkDB, considering the amount of data and the INSERT queries every minute?
Also, would it be suitable to create a cron job that runs every minute for every NVR and does the whole process? I am trying to learn GenServer and the OTP side of things. What path should I take so that I can make use of Elixir's concurrency?
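For reference, this is the kind of GenServer loop I have been reading about as an alternative to cron — a process that re-schedules itself with `Process.send_after/3` every minute. All names here are my own, and the actual per-NVR check is left as a commented stub:

```elixir
defmodule PortMonitor do
  use GenServer

  @interval_ms 60_000

  def start_link(opts \\ []) do
    GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  end

  @impl true
  def init(state) do
    # Kick off the first check one interval after startup.
    schedule_check()
    {:ok, state}
  end

  @impl true
  def handle_info(:check, state) do
    # For 100 NVRs, each check could run in its own Task so one slow
    # or unreachable host does not block the others, e.g.:
    # Enum.each(nvrs, fn nvr -> Task.start(fn -> check_and_insert(nvr) end) end)
    schedule_check()
    {:noreply, state}
  end

  defp schedule_check do
    Process.send_after(self(), :check, @interval_ms)
  end
end
```

Is this the right direction, or is there a more idiomatic OTP pattern for this kind of periodic work?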