Supervisor/Processes and API Clients

I am building an API client.

I have two concerns about what I am trying to do.

First Concern: Process per API call

I would like to know if it is good practice to create a GenServer instance per API call. I am worried about the limit on the number of processes and whether I should be doing that or not.

The reason I would like to do it is that I can retry that API call and things like that; also, if that process hangs or fails, it doesn’t block the other API calls.

Is that a good idea?

I am guessing that I should kill that process after it finishes, every single time.
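For what it’s worth, a process-per-call design that retries and then dies can be sketched roughly like this, assuming a hypothetical `MyApi.request/1` that returns `{:ok, result}` or `{:error, reason}`:

```elixir
# Hypothetical sketch: one short-lived GenServer per API call. It retries a
# few times on failure and stops normally when done, so nothing lingers and
# a hung or crashed call never blocks the others. `MyApi.request/1` is a
# made-up name standing in for the real HTTP request.
defmodule ApiCall do
  use GenServer, restart: :temporary

  def start_link(request), do: GenServer.start_link(__MODULE__, request)

  @impl true
  def init(request) do
    # Defer the real work so the caller is not blocked during init.
    {:ok, %{request: request, attempts: 0}, {:continue, :call}}
  end

  @impl true
  def handle_continue(:call, %{request: request, attempts: attempts} = state) do
    case MyApi.request(request) do
      {:ok, _result} ->
        # Done: stop normally, the process is cleaned up automatically.
        {:stop, :normal, state}

      {:error, _reason} when attempts < 3 ->
        # Retry without affecting any other in-flight API call.
        {:noreply, %{state | attempts: attempts + 1}, {:continue, :call}}

      {:error, reason} ->
        {:stop, {:shutdown, reason}, state}
    end
  end
end
```

With `restart: :temporary` the supervisor won’t restart the process after it stops, so there is nothing extra to kill.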

Second Concern: Process per Client

Right now I have this, but the problem is that Bittrex credentials only allow actions on the account that created them. I want either a Supervisor that holds a worker per client, each worker holding that client’s credentials, or at least to allow programmers to add that same worker themselves.

The reason I am aiming to keep that client active is that I will need to poll for some data, so not everything will be triggered by user activity.
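One way to sketch this worker-per-client idea is a `DynamicSupervisor` with a long-lived polling GenServer per set of credentials. `BittrexWorker` and `fetch_balances/1` are made-up names for illustration, not a real client API:

```elixir
# Hypothetical sketch: a DynamicSupervisor that starts one worker per client.
# Each worker holds its own credentials and polls on a timer, independently
# of user activity.
defmodule ClientSupervisor do
  use DynamicSupervisor

  def start_link(_), do: DynamicSupervisor.start_link(__MODULE__, :ok, name: __MODULE__)

  @impl true
  def init(:ok), do: DynamicSupervisor.init(strategy: :one_for_one)

  # Call this whenever a new client registers their credentials.
  def add_client(credentials) do
    DynamicSupervisor.start_child(__MODULE__, {BittrexWorker, credentials})
  end
end

defmodule BittrexWorker do
  use GenServer

  def start_link(credentials), do: GenServer.start_link(__MODULE__, credentials)

  @impl true
  def init(credentials) do
    schedule_poll()
    {:ok, %{credentials: credentials, data: nil}}
  end

  @impl true
  def handle_info(:poll, state) do
    # Poll with this client's own credentials.
    data = fetch_balances(state.credentials)
    schedule_poll()
    {:noreply, %{state | data: data}}
  end

  defp schedule_poll, do: Process.send_after(self(), :poll, 30_000)

  # Placeholder for the real Bittrex call.
  defp fetch_balances(_credentials), do: :not_implemented
end
```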

But again, what are your thoughts about it?


Processes are cheap, but keep basic calls at the module level and only introduce more processes where you need concurrency; a pool for API calls (e.g. with poolboy) is the usual approach.


@OvermindDL1 But could you still retry?

And what about the second concern? I am worried that I will end up with a lot of processes running, because that number depends on how many users are using the platform.

You could have an API caller retry as many times as you want before it gives up and returns to the pool anyway. Plus, you could even aggregate multiple API calls with the same args into a single pool pull and return them all at once. :slight_smile:
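The retry-before-returning idea might look like this inside a pool worker, assuming a stand-in `do_request/1` for the real HTTP call:

```elixir
# Hypothetical sketch of "retry before returning to the pool": the worker
# keeps trying until it succeeds or runs out of attempts; either way, the
# checkout then ends and the worker goes back to poolboy.
defp with_retries(request, attempts \\ 3) do
  case do_request(request) do
    {:ok, result} ->
      {:ok, result}

    {:error, _} when attempts > 1 ->
      # Simple fixed backoff before retrying; real code might use
      # exponential backoff instead.
      Process.sleep(200)
      with_retries(request, attempts - 1)

    {:error, reason} ->
      {:error, reason}
  end
end
```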

You’ll run out of hardware resources before that process count becomes a problem. ^.^


I wish I understood what you are talking about, but I can’t. Could you point me to an article or anything that would give me the knowledge, please?

returns to the pool anyway

Because poolboy works with a limited number of workers, when they finish their job they “return to the pool”. It helps when working with a limited set of resources, like DB access, API calls, etc. Think of it as a funnel.
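As a rough sketch of that funnel, here is the usual poolboy setup and checkout shape; `MyApp.ApiWorker` and the pool name are made-up for illustration:

```elixir
# Hypothetical sketch: at most `size` (+ `max_overflow`) workers run at
# once; a checked-out worker goes back to the pool when the function ends.
pool_config = [
  name: {:local, :api_pool},
  worker_module: MyApp.ApiWorker,
  size: 5,          # workers kept alive
  max_overflow: 2   # extra workers allowed under load
]

# Put the pool under your application's supervision tree:
children = [:poolboy.child_spec(:api_pool, pool_config)]

# Later, anywhere in the app, borrow a worker for one call:
:poolboy.transaction(:api_pool, fn worker ->
  GenServer.call(worker, {:request, "/markets"})
end, 5_000)
```

If all workers are busy, `:poolboy.transaction/3` blocks until one is free (or the timeout expires), which is exactly the funnel behavior described above.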

aggregate multiple API calls with the same args as a single pool

What is hard in Erlang/Elixir is thinking concurrently. For each task that could run independently, you should try to run it in parallel, with a collector in charge of aggregating the results. So one worker of the pool can spawn multiple processes, usually Tasks.
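A minimal sketch of that fan-out-and-collect pattern with `Task`, where `fetch/1` is a stand-in for the real API call:

```elixir
# Run each independent call in parallel, then collect (aggregate) the
# results. `fetch/1` is a hypothetical function wrapping the real request.
paths = ["/markets", "/currencies", "/balances"]

results =
  paths
  |> Enum.map(fn path -> Task.async(fn -> fetch(path) end) end)
  |> Enum.map(&Task.await(&1, 5_000))

# `results` is a list in the same order as `paths`.
```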

I have got an example in this post using poolboy for scraping.
