I thought I’d give `Flow` a spin. The documentation’s tutorial example worked fine, so I tried something that wraps `GenServer` calls in a `Flow.map`.
As I understand it, Flow is suited to parallel and concurrent computations over collections. I have a bunch of computation tokens, which I’m using as the collection vector. Inside a `Flow.map` function, I set up a computation which, under the covers, happens to be a bunch of `GenServer` calls.
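Concretely, the shape is something like this (the `WordScorer` server and its `score/1` call are hypothetical stand-ins for my actual servers, and the heuristic here is just a placeholder):

```elixir
defmodule WordScorer do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, :ok, name: __MODULE__)

  # Synchronous call into the server, invoked from inside Flow.map.
  def score(word), do: GenServer.call(__MODULE__, {:score, word})

  @impl true
  def init(:ok), do: {:ok, %{}}

  @impl true
  def handle_call({:score, word}, _from, state) do
    # Placeholder heuristic: count distinct letters; the real
    # scoring logic lives elsewhere.
    {:reply, word |> String.graphemes() |> Enum.uniq() |> length(), state}
  end
end

{:ok, _pid} = WordScorer.start_link([])

scores =
  ["COMMUNISES", "HAMMERLOCK", "CARPED"]
  |> Flow.from_enumerable()
  |> Flow.map(fn word -> {word, WordScorer.score(word)} end)
  |> Enum.to_list()
```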
So given a word vector like

```elixir
["COMMUNISES", "HAMMERLOCK", "CARPED", "ESTERIFICATION", "RIVERWARD",
 "TABLATURES", "COALESCED", "FRISKING", "MODERNISTIC", "NONCONTACTS",
 "SCHISTOSOME", "WOODWINDS", "PRISTANE", "UPSTROKE", "CLUBBING",
 "MALPRACTITIONER", "BOMBYCIDS", "HEADLINER", "ODIUMS", "UNVARYING"]
```
I’m trying to score the words based on some heuristics. (Let’s leave it at that).
So I get back some score results, e.g.

```
(COMMUNISES: 6) (HAMMERLOCK: 5) (CARPED: 6) (ESTERIFICATION: 4) (RIVERWARD: 8)
(TABLATURES: 5) (COALESCED: 4) (FRISKING: 11) (MODERNISTIC: 4) (NONCONTACTS: 6)
(SCHISTOSOME: 3) (WOODWINDS: 8) (PRISTANE: 5) (UPSTROKE: 5) (CLUBBING: 25)
(MALPRACTITIONER: 2) (BOMBYCIDS: 7) (HEADLINER: 7) (ODIUMS: 6) (UNVARYING: 8)
```
Observation 1) I wasn’t sure this would work, since under the covers things are not as deterministic as the letter-counting example in the docs, but it does work, at least sequentially (I have a dual core). Is Flow intended to have `GenServer` calls in its map functions? I’m guessing no? Looking at my output, the computations happened in the order of the word tokens, so the computation for HAMMERLOCK occurred before FRISKING.
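For what it’s worth, my reading of the Flow docs (which may be off) is that once more than one mapper stage is pulling events, no output order is guaranteed; a sketch like this can emit results in a different interleaving on each run:

```elixir
# With several stages pulling small batches, the output is an
# interleaving of the stages' work, not the input order.
results =
  1..20
  |> Flow.from_enumerable(stages: 4, max_demand: 1)
  |> Flow.map(fn n -> n * n end)
  |> Enum.to_list()

# The multiset of results is stable even though their order is not.
Enum.sort(results) == Enum.map(1..20, &(&1 * &1))
```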
Observation 2) I found that when I used `Flow.partition`, I did not get scores for all my word tokens; some were somehow dropped in the `Flow.reduce`. The reduce map got updated, but then that map got lost and another map became the current map, losing some of the results. When I removed the call to `Flow.partition`, it worked perfectly: 20 tokens got 20 scores, whereas with the partition call included it was 20 tokens and, for example, 16 scores…
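For comparison, this is roughly the shape I’d expect to keep all the scores (the `score` function here is a stand-in for my GenServer-backed scorer): each partition accumulates its own map in `Flow.reduce`, and the final accumulators are flushed as events when the flow finishes, so nothing should be dropped.

```elixir
# Stand-in for my GenServer-backed scorer.
score = fn word -> word |> String.graphemes() |> Enum.uniq() |> length() end

words = ["COMMUNISES", "HAMMERLOCK", "CARPED", "FRISKING"]

scores =
  words
  |> Flow.from_enumerable()
  # Hash each word to a fixed reducer stage so the same word
  # always lands in the same partition.
  |> Flow.partition(key: fn word -> word end)
  |> Flow.reduce(fn -> %{} end, fn word, acc ->
    Map.put(acc, word, score.(word))
  end)
  # Each partition's final map is emitted as {word, score} events.
  |> Enum.to_list()

# Expect one {word, score} tuple per input word:
length(scores) == length(words)
```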
Curious whether what I’m observing is in line with how `Flow` is intended to be used.