I have two long-running Elixir applications, and another Phoenix application that acts as a dashboard for them.
In the dashboard, I want to show the running status of these two applications, just to verify that all the components in the system are up and running.
From this dashboard, I also want to trigger some actions on these applications. To achieve this, I am thinking about doing the following:
- Each application exposes an HTTP API (planning to use …)
- The dashboard makes HTTP requests to these APIs and displays the status accordingly
- This way, actions can also be triggered on the remote processes
This design has several advantages: it is simple to implement, the processes the dashboard monitors can live on separate machines, etc.
I am wondering: is this the best approach, or is there a different approach I should follow instead?
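For reference, the kind of status endpoint I have in mind would look roughly like this sketch. It assumes Plug with the Cowboy adapter, since the post doesn't pin down a library; the module name, route, and port are all made up:

```elixir
# Hypothetical status endpoint; assumes the plug_cowboy dependency.
defmodule MyApp.StatusRouter do
  use Plug.Router

  plug :match
  plug :dispatch

  # The dashboard polls this route to check that the app is up.
  get "/status" do
    send_resp(conn, 200, ~s({"status":"up"}))
  end

  match _ do
    send_resp(conn, 404, "not found")
  end
end

# Started from the application's supervision tree, e.g.:
#   {Plug.Cowboy, scheme: :http, plug: MyApp.StatusRouter, options: [port: 4001]}
```

The dashboard would then poll `GET /status` on each application and render the result.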
If you put both applications in a single umbrella application then you won’t have to communicate over an HTTP API. You could just send Erlang messages for communication.
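For example, each app could run a named status process that the dashboard calls directly, with no HTTP in between. A minimal sketch (module and registered names are illustrative):

```elixir
# Minimal sketch: a named status process the dashboard can call directly.
defmodule StatusHandler do
  use GenServer

  def start_link(name) do
    GenServer.start_link(__MODULE__, :ok, name: name)
  end

  @impl true
  def init(:ok), do: {:ok, %{started_at: System.system_time(:second)}}

  @impl true
  def handle_call(:status, _from, state) do
    {:reply, {:ok, :running, state}, state}
  end
end

{:ok, _pid} = StatusHandler.start_link(:status_handler_a)

# The dashboard checks the app's status with a plain GenServer call:
{:ok, :running, _info} = GenServer.call(:status_handler_a, :status)
```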
It is already in an umbrella project. If I understood you correctly, this is what you are saying:
- There are 3 projects: A, B, and the dashboard.
- A is started on machine1 with a node name.
  - It also starts a supervised process with a globally registered name.
- B is started on machine2 with a node name.
  - It also starts a supervised process with a globally registered name.
- The dashboard is started on machine3 with a node name.
- When the dashboard starts, it connects to the nodes running A and B to form the cluster.
- The dashboard sends messages to the status-handler process running on each connected node.
Did I get that right?
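Concretely, I imagine something like this sketch. The node and process names are made up, and the clustering steps are shown as comments since they need real distributed nodes:

```elixir
# On machine1, app A registers its status process globally
# (app B would do the same on machine2 with e.g. :status_handler_b).
defmodule A.StatusHandler do
  use GenServer

  def start_link(_opts) do
    GenServer.start_link(__MODULE__, :ok, name: {:global, :status_handler_a})
  end

  @impl true
  def init(:ok), do: {:ok, %{}}

  @impl true
  def handle_call(:status, _from, state), do: {:reply, :running, state}
end

# On machine3, the dashboard would first join the cluster:
#   Node.connect(:"a@machine1")
#   Node.connect(:"b@machine2")
#
# and could then call the globally registered processes from any node:
#   GenServer.call({:global, :status_handler_a}, :status)
```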
Thanks for your help.
That would be my go-to. However, as you mention, each app is on a separate machine, so an HTTP API would not be a bad idea, particularly if those services need to expose an HTTP API to other clients anyway.