Is it an OK solution to have more than one Repo module pointing to the same database?
In my system, data is inserted into the database or read in large quantities via some internal processes that are independent from clients.
This means that a process (or processes) will pull some data from third-party APIs, process it, and insert it into the database.
On the other side, clients (users that access my system via the Phoenix endpoint) can only read this data, never write it.
The issue I’m having is that since these processes hit the database heavily and concurrently, I regularly get timeout errors.
The obvious solution is to increase :queue_interval. I can do that, and it seems like the correct fix since I do expect these processes' queries to take a long time, and I don't mind if they block other processes for a long time either.
The problem is that they will also block users trying to read from the database, since they will use up all the available connections in the Repo's pool.
So, what I have in mind to fix this is to have 2 Repos pointing to the same database: one read-only Repo that is only used by clients, and another read-write Repo that is used by my internal processes.
That way I can give the internal processes' Repo a very large :queue_interval so they can take their sweet time to finish their work. At the same time, the clients get their own Repo, so they will always have pool connections available, and I can keep the sane defaults for that one.
Is this a good solution for this problem? Would you tackle it in another way?