What I mean is, the default interface of an Agent expects a lambda to be passed in at runtime to tell it what to do with its state. Most of the time that is encapsulated inside another module anyway, but it is different from a GenServer, which requires you to write a new module — and one that specifically accepts and executes lambdas only if you explicitly want that behaviour for some reason.
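To illustrate what I mean, a minimal sketch of the default Agent interface — every call site hands the Agent a function to run against its state:

```elixir
# Start an Agent whose state is initialised by a lambda.
{:ok, counter} = Agent.start_link(fn -> 0 end)

# Both update and read are expressed as lambdas passed in at runtime.
:ok = Agent.update(counter, fn n -> n + 1 end)
current = Agent.get(counter, fn n -> n end)
```

With a GenServer, by contrast, the state-manipulating code lives inside the server module, and callers only send plain messages.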
I get your points. The key issue is that the processing/importing is done in parallel, across several processes, and the Agent acts as a cache as well as a point of serialisation. I have e.g. "Focus Areas" stored as strings inside the source data, but I am normalising them into a database table. Instead of executing an atomic "insert if not exists and return the ID" query against the DB for every single record, I serialise this through the Agent process, which keeps track of which values have already been inserted.
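A minimal sketch of that pattern — the module name, and the `insert_fun` standing in for the real DB insert, are hypothetical; the point is that `Agent.get_and_update/2` serialises the check-then-insert so concurrent importers never race on the same string:

```elixir
defmodule FocusAreaCache do
  # Cache mapping normalised strings (e.g. "Focus Areas") to their DB IDs.
  def start_link do
    Agent.start_link(fn -> %{} end, name: __MODULE__)
  end

  # Returns the cached ID, or runs insert_fun exactly once per distinct
  # name. All callers funnel through the Agent, so the insert cannot race.
  def get_or_insert(name, insert_fun) do
    Agent.get_and_update(__MODULE__, fn cache ->
      case Map.fetch(cache, name) do
        {:ok, id} ->
          {id, cache}

        :error ->
          id = insert_fun.(name)
          {id, Map.put(cache, name, id)}
      end
    end)
  end
end
```

One caveat of this shape: the insert runs inside the Agent, so a slow DB call blocks every other caller for its duration — acceptable here precisely because serialisation is the goal.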
Admittedly, I can't see any actual advantage to this being an Agent instead of a GenServer, though. Perhaps Agents were an initial backlash against the "boilerplate" of GenServer: initially I also thought that the triplication of functions "just to do one thing" was annoying, but then I got used to it.
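For readers unfamiliar with the "triplication": one logical operation needs a client function, the message send, and a callback clause. A hypothetical GenServer version of the same cache idea shows all three:

```elixir
defmodule ImportCache do
  use GenServer

  def start_link(_opts) do
    GenServer.start_link(__MODULE__, %{}, name: __MODULE__)
  end

  @impl true
  def init(state), do: {:ok, state}

  # (1) the public client function, which wraps
  # (2) the GenServer.call that sends the message...
  def fetch(key), do: GenServer.call(__MODULE__, {:fetch, key})

  # ...handled by (3) the callback that actually touches the state.
  @impl true
  def handle_call({:fetch, key}, _from, state) do
    {:reply, Map.get(state, key), state}
  end
end
```

The upside of the ceremony is that the lambda never leaves the caller: all state logic is pinned down inside the module.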
Maybe it would be better to require people to understand what's going on inside a GenServer before they use it, rather than offering the "quick win" of a simple interface that then leads to misuse.