Experimental.Flow experiences

I thought I’d give Flow a spin.

The tutorial example in the documentation worked fine, so I thought I’d try something more involved: wrapping GenServer calls in a Flow.map.

As I understand it, Flow is suited to parallel and concurrent computations over collections.

I have a bunch of computation tokens, which I’m using as the collection.
Inside a Flow.map function, I set up a computation which, under the covers, happens to be a bunch of GenServer calls.

So given a word vector like

["COMMUNISES", "HAMMERLOCK", "CARPED", "ESTERIFICATION", "RIVERWARD", "TABLATURES", "COALESCED", "FRISKING", "MODERNISTIC", "NONCONTACTS", "SCHISTOSOME", "WOODWINDS", "PRISTANE", "UPSTROKE", "CLUBBING", "MALPRACTITIONER", "BOMBYCIDS", "HEADLINER", "ODIUMS", "UNVARYING"]

I’m trying to score the words based on some heuristics. (Let’s leave it at that.)
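Roughly, the pipeline has this shape (a hedged sketch, not the actual project code; `score` below is a hypothetical stand-in for the real heuristics, which under the covers make the GenServer calls):

```elixir
alias Experimental.Flow

words = ["COMMUNISES", "HAMMERLOCK", "CARPED", "ESTERIFICATION"]

# Stand-in for the real GenServer-backed scoring heuristic.
score = fn word -> String.length(word) end

words_scored =
  Flow.new
  |> Flow.from_enumerable(words)
  |> Flow.map(fn word -> {word, score.(word)} end)
  |> Enum.to_list()
```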

So I get back some score results e.g.

(COMMUNISES: 6) (HAMMERLOCK: 5) (CARPED: 6) (ESTERIFICATION: 4) (RIVERWARD: 8) (TABLATURES: 5) (COALESCED: 4) (FRISKING: 11) (MODERNISTIC: 4) (NONCONTACTS: 6) (SCHISTOSOME: 3) (WOODWINDS: 8) (PRISTANE: 5) (UPSTROKE: 5) (CLUBBING: 25) (MALPRACTITIONER: 2) (BOMBYCIDS: 7) (HEADLINER: 7) (ODIUMS: 6) (UNVARYING: 8)

Observation 1) I wasn’t sure if it would work, since under the covers things are not as deterministic as the letter-counting example in the docs, but once I gave it a try it works, sequentially at least (I have a dual core). Is Flow intended to have GenServer calls in the map functions? I’m guessing no? When I looked at my output, the computation was happening in the order of the word tokens, so the computation for HAMMERLOCK occurred before FRISKING.

Observation 2) I found that when I used Flow.partition, I did not get scores for all my word tokens; some were somehow dropped in the Flow.reduce. The reduce map got updated, but then that map got lost and another map became the current map, losing some of the results. However, when I removed the call to Flow.partition it worked perfectly. So 20 tokens got 20 scores, whereas with the partition call included it was 20 tokens and 16 scores, for example…

Curious if what I’m observing is in line with the way Flow was intended.


Do you have a GitHub project that we can pull down to test it ourselves?


Thanks for taking a look.

My project is on Bitbucket but private. Let me give you access to it.

This word scoring example is actually a Hangman game.

For brevity purposes it was easier to describe this way, so when you look at the code, don’t be surprised to see the Hangman logic :).

lib/hangman/web_collator.ex, line 68, has the commented-out reference to Flow.partition.

The unit test I have been using to see this is mix test test/hangman/web_test.exs:42.

This generates 20 random word tokens (hangman secrets) via a plug interface.

This is part of my IO.puts output when Flow.partition is included in web_collator.ex…

The scores acc (accumulator) has two versions; then at the end, the one starting here with VENOSE is used, for a total of only 12 words out of 20.

This all takes place at the end of the program output.

scores acc is " (VENOSE: 8) (DISLIKES: 5)"
scores acc is " (VAGABONDAGE: 2) (VATUS: 25)"
game_key, scores are {"melvin", 3}, " (REFOCUSSES: 4) (PREHOMINID: 4)"
game_key, scores are {"melvin", 4}, " (ANTIPASTOS: 5) (ORACULAR: 4)"
scores acc is " (VENOSE: 8) (DISLIKES: 5) (REFOCUSSES: 4) (PREHOMINID: 4)"
scores acc is " (VAGABONDAGE: 2) (VATUS: 25) (ANTIPASTOS: 5) (ORACULAR: 4)"
game_key, scores are {"melvin", 5}, " (RESCINDER: 4) (GRIVET: 9)"
game_key, scores are {"melvin", 8}, " (WASPISHNESS: 2) (CHAPPED: 10)"
scores acc is " (VENOSE: 8) (DISLIKES: 5) (REFOCUSSES: 4) (PREHOMINID: 4) (RESCINDER: 4) (GRIVET: 9)"
scores acc is " (VAGABONDAGE: 2) (VATUS: 25) (ANTIPASTOS: 5) (ORACULAR: 4) (WASPISHNESS: 2) (CHAPPED: 10)"
game_key, scores are {"melvin", 6}, " (BELLMEN: 8) (PROCEDURALLY: 5)"
game_key, scores are {"melvin", 9}, " (MISTYPES: 7) (FISHPOLE: 9)"
scores acc is " (VENOSE: 8) (DISLIKES: 5) (REFOCUSSES: 4) (PREHOMINID: 4) (RESCINDER: 4) (GRIVET: 9) (BELLMEN: 8) (PROCEDURALLY: 5)"
scores acc is " (VAGABONDAGE: 2) (VATUS: 25) (ANTIPASTOS: 5) (ORACULAR: 4) (WASPISHNESS: 2) (CHAPPED: 10) (MISTYPES: 7) (FISHPOLE: 9)"
game_key, scores are {"melvin", 7}, " (FLIPPED: 25) (SUITED: 10)"
scores acc is " (VENOSE: 8) (DISLIKES: 5) (REFOCUSSES: 4) (PREHOMINID: 4) (RESCINDER: 4) (GRIVET: 9) (BELLMEN: 8) (PROCEDURALLY: 5) (FLIPPED: 25) (SUITED: 10)"
game_key, scores are {"melvin", 10}, " (KNEEHOLE: 3) (OVERPUMPED: 8)"
scores acc is " (VENOSE: 8) (DISLIKES: 5) (REFOCUSSES: 4) (PREHOMINID: 4) (RESCINDER: 4) (GRIVET: 9) (BELLMEN: 8) (PROCEDURALLY: 5) (FLIPPED: 25) (SUITED: 10) (KNEEHOLE: 3) (OVERPUMPED: 8)"
HTTPoison.get http://127.0.0.1:3737/hangman?name=melvin&random=20 gives: " (VENOSE: 8) (DISLIKES: 5) (REFOCUSSES: 4) (PREHOMINID: 4) (RESCINDER: 4) (GRIVET: 9) (BELLMEN: 8) (PROCEDURALLY: 5) (FLIPPED: 25) (SUITED: 10) (KNEEHOLE: 3) (OVERPUMPED: 8)"

This is in contrast to omitting the call to Flow.partition, which correctly produces these 20/20 results:

game_key, scores are {"melvin", 10}, " (SPECTROMETRIC: 2) (OXIDOREDUCTASES: 2)"

scores acc is " (JOLLITY: 25) (PEMICANS: 7) (PALPITATION: 5) (UNSILENT: 6) (SUPERPROFITS: 4) (GERUNDIVE: 6) (PILEATE: 7) (OVERAWES: 8) (TUSSORS: 6) (ENDARTERECTOMY: 1) (NONADDITIVE: 3) (WAIVE: 25) (MACHINEABILITY: 4) (COURANTO: 6) (NONOCCUPATIONAL: 4) (SLATED: 7) (REMARKET: 6) (BRACTLET: 6) (SPECTROMETRIC: 2) (OXIDOREDUCTASES: 2)"

HTTPoison.get http://127.0.0.1:3737/hangman?name=melvin&random=20 gives: " (JOLLITY: 25) (PEMICANS: 7) (PALPITATION: 5) (UNSILENT: 6) (SUPERPROFITS: 4) (GERUNDIVE: 6) (PILEATE: 7) (OVERAWES: 8) (TUSSORS: 6) (ENDARTERECTOMY: 1) (NONADDITIVE: 3) (WAIVE: 25) (MACHINEABILITY: 4) (COURANTO: 6) (NONOCCUPATIONAL: 4) (SLATED: 7) (REMARKET: 6) (BRACTLET: 6) (SPECTROMETRIC: 2) (OXIDOREDUCTASES: 2)"

Ah, I was hoping for an SSCCE, but let me look. :slight_smile:

Uh, first of all, your test is asking for input (and then dying, as it is not connected to an input here); tests should be fully self-contained. :slight_smile:

When I run the test for 20 tokens, mix test test/hangman/web_test.exs:42, it succeeds. If I uncomment the Flow.partition call in web_collator.ex it also shows success. Where is it failing?

These tests appear to be full integration tests. I would highly recommend building a lot of unit tests to test specific parts of this and drill down into what is or is not functioning. :slight_smile:

I might be able to look at this tonight, but too busy at work for the moment. If you can reduce it to an SSCCE that would be best, although I’d start with making new unit tests to test individual functionality, even down to the individual lines to ensure that the output is what I expect given a reduced set of inputs. :slight_smile:

Let me get back to you then.

The unit test coverage isn’t fully complete, but a test for this Flow.partition omission should be easy to add.

Just updated my Linux kernel and wireless isn’t working so I’ll get back to you.

I’ve stubbed out a large portion that does a lot of GenServer calls, reduced the game debug output, and also limited it to 3 secret tokens, as they mimic the behavior of 20 ;).

DISCLAIMER: If the setup is still not ideal in terms of an SSCCE, I wouldn’t bother wasting too much of your time. I posted this more casually to see if my original questions were known issues and whatnot.

That said, please see the test outputs below:

game_key, scores are {"rabbit", 2}, " (ERUPTIVE: 5)"

game_key, scores are {"rabbit", 1}, " (CUMULATE: 8) (AVOCADO: 6)"
scores acc is " (ERUPTIVE: 5)"

scores acc is " (CUMULATE: 8) (AVOCADO: 6)"

  1. test single test of 3 secrets for use with stub Pass, success when Flow.partition is commented out (Hangman.Web.Collator.Test)
    test/hangman/web_collator_test.exs:29
    Assertion with == failed
    code: output == " (CUMULATE: 8) (AVOCADO: 6) (ERUPTIVE: 5)"
    lhs: " (CUMULATE: 8) (AVOCADO: 6)"
    rhs: " (CUMULATE: 8) (AVOCADO: 6) (ERUPTIVE: 5)"
    stacktrace:
    test/hangman/web_collator_test.exs:35: (test)

And with just the Flow.partition call commented out, I get a successful test:

game_key, scores are {"rabbit", 2}, " (ERUPTIVE: 5)"
scores acc is " (CUMULATE: 8) (AVOCADO: 6) (ERUPTIVE: 5)"

Finished in 0.2 seconds
3 tests, 0 failures, 2 skipped

I’ve checked in the failing-test version of the setup to make it easier. Please just use test/hangman/web_collator_test.exs:29, since it specifically works with the stub.

(PS. Please ignore the handler test that is asking for input for now)

Thanks again for taking the time out to fiddle with this @OvermindDL1 !

Because your batch size is too small, everything is originally handled by a single process. When you call Flow.partition, the data is routed through different processes; that’s why you have the impression that "another map became the current map, losing some of the results". However, the data is not being lost. That is exactly the goal of partitioning: to split the data across different processes so you have concurrency.

Also keep in mind that if you are using Flow to call a single GenServer, then that GenServer becomes the entity doing all the work and you don’t get to leverage any parallelism. So Flow isn’t really adding much (especially for such a small data size).
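To make this concrete, here is a hedged sketch (illustrative API usage, with a stand-in `score` function): after `Flow.partition`, the reduce callback runs in several stages at once, and each stage builds its own accumulator. Those per-stage accumulators are the "two versions" of the scores acc seen in the output earlier in the thread.

```elixir
alias Experimental.Flow

words = ["VENOSE", "DISLIKES", "VAGABONDAGE", "VATUS"]

# Stand-in scoring function; the point here is the shape of the flow.
score = fn word -> String.length(word) end

Flow.new
|> Flow.from_enumerable(words)
|> Flow.partition()
|> Flow.reduce(fn -> %{} end, fn word, acc ->
  # `acc` is per-stage: with N reducer stages there are N of these
  # maps being built concurrently, not a single shared one.
  Map.put(acc, word, score.(word))
end)
|> Enum.to_list()
```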


Thanks as always, José. I’ve digested your comments, but frankly I’m still scratching my head.

I guess my expectation then is that, if the data is not being lost but handled through different processes so you have concurrency, wouldn’t it be re-integrated via the reduce step? The IO.puts data I’m referencing with the scores acc is from the reduce step.

It makes sense when you make the case that if the flow is calling a single GenServer it will block, so Flow isn’t adding much, since it is bounded by the lowest common denominator, so to speak. But my understanding is that for each Flow.map call a unique worker and server GenServer pair are spun up, so it should not be blocking. I should triple-check this again.

Cheers

Scratch this, I wasn’t able to prove this

Scratch my previous reply. It is not clear if there is any GenServer blocking at all. I stubbed the last major GenServer out with a simple module and it still behaves the same. Meaning: with a second stub in place, the processing is still sequential… (ignoring Flow.partition for now).

So here’s my question:

If you have an async unit test with two test cases, should the processing of the two test cases be interleaved?

I have an async unit test, but the IO.puts output is always sequential. Is this the result of something buffering-related? Is this a good test to check whether things are running in parallel here?

top with 1 pressed shows both cores working, but neither is near full utilization, as the idle rates are quite high; perhaps that is due to setting up GenServers dynamically rather than having a pool of them ready to go.

Regardless, Flow seems to be working fine and I’m not using the Flow.partition call.

Cheers

I guess my expectation then is that, if the data is not being lost but handled through different processes so you have concurrency, wouldn’t it be re-integrated via the reduce step? The IO.puts data I’m referencing with the scores acc is from the reduce step.

No, they are not reintegrated via the reduce step. The reducing operation is running on multiple processes at the same time, concurrently. You don’t have a single state at the end but multiple states.

But my understanding is for each Flow.map call – a unique worker and unique server GenServer pair are spun up so it should not be blocking. I should triple-check this again.

There is probably a misunderstanding here. I understood that you were calling a shared GenServer from your map step. If that’s not the case, then the map operation should run fully in parallel, assuming you have enough data (or a small batch size).

Is this a good test to ensure if things are running in parallel here?

Call IO.inspect(self()) inside Flow.map/2. If it is always the same value, then you have only one stage calling map.

Tests in the same test case do not run concurrently.
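A sketch of that check (pipeline details are illustrative; the scoring is a stand-in, not the real heuristic):

```elixir
alias Experimental.Flow

words = ["COMMUNISES", "HAMMERLOCK", "CARPED", "FRISKING"]

Flow.new(max_demand: 2)
|> Flow.from_enumerable(words)
|> Flow.map(fn word ->
  # Distinct pids printed here mean distinct stages are running
  # map/2 in parallel; a single repeating pid means one stage.
  IO.inspect(self(), label: "map stage")
  {word, String.length(word)}
end)
|> Enum.to_list()
```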

How much would you say is "enough data"? It’s only showing 1 pid in Flow.map during the unit test case. However, when I run the whole suite of unit tests, I have seen the test run in parallel with other tests, and even seen the pid value change for a good while before it settles on one value.

PS. It is above my head why I see 1 pid listed when I run the single test case, yet during the full suite the pid changes and then eventually settles on a single value.

There is a section in the Flow documentation that talks about this, including the max_demand configuration you should set to get parallelism for smaller data sizes, such as:

Flow.new(max_demand: 2)
|> ...

will send the initial data in batches of 2 and request more data on every event.


Thank you for highlighting this; I will play with it then.

Configuration (demand and the number of stages)

Both new/2 and partition/3 allow a set of options to configure how flows work. In particular, we recommend developers to play with the :min_demand and :max_demand options, which control the amount of data sent between stages. The difference between max_demand and min_demand works as the batch size when the producer is full. If the producer has fewer events than the batch size, its current events are sent.

If stages may perform IO computations, we also recommend increasing the number of stages. The default value is System.schedulers_online/0, which is a good default if the stages are CPU bound; however, if stages are waiting on external resources or other processes, increasing the number of stages may be helpful.
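Applied to a small input, those options might look like this (values and scoring are illustrative, assuming Flow.new accepts :stages and :max_demand as the quoted docs describe):

```elixir
alias Experimental.Flow

words = Enum.map(1..40, fn i -> "WORD#{i}" end)

# A small max_demand means small batches, so work spreads across
# stages even for small inputs. More stages than schedulers can help
# when map/2 blocks on IO or other processes (e.g. GenServer calls).
Flow.new(stages: 8, max_demand: 2)
|> Flow.from_enumerable(words)
|> Flow.map(fn word -> {word, String.length(word)} end)
|> Enum.to_list()
```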

The good news: max_demand did the trick! Both my cores were up to 98% at times and the pids alternated. The processing time is nearly cut in half…

The only problem is that I’m getting half the results at the end (even though they are all being computed). So instead of 200 word scores I’m getting 100. The other acc was not re-integrated into the total, so the acc list that has NONCONFORMISMS at the beginning is not part of the final value.

(This is without the call to Flow.partition)

key: {"typhoon", 19}, key scores " (VISCOUNTESSES: 4) (YAUPS: 25) (XANTHATES: 4) (DERIVATIZATION: 4) (PHOTOEMISSIVE: 2) (BEACHGOERS: 6) (SIXTIES: 5) (DJELLABAH: 5) (BILLOWIEST: 5) (SALTBUSH: 5)"

Not seeing this acc list in the final result. This is when key {"typhoon", 19} is being referenced in the reduce phase:

key: “typhoon” scores acc: " (NONCONFORMISMS: 5) (FLORILEGIA: 4) (SISKINS: 6) (BRAIDS: 10) (BARLESS: 9) (CATTED: 8) (SUNBATHING: 7) (ASPERSORS: 3) (CONVOLUTED: 6) (BAGUETS: 9) (DOWELED: 4) (RECONCILABILITY: 4) (DEDUCTIONS: 6) (SORRINESS: 7) (BILLOWIEST: 5) (TOUCHIEST: 7) (CRYSTALLIZED: 6) (AIRBORNE: 4) (WONTS: 25) (BAMBOOZLEMENTS: 4) (GRACING: 7) (MIGNONETTE: 2) (BEDLIKE: 8) (LAIRDS: 7) (ENCHANTRESS: 2) (ROMANTICISTS: 9) (WARBLING: 9) (SEQUINNED: 5) (CEREBROSIDE: 3) (SUNBATHING: 7) (DISABILITIES: 4) (BAYONETED: 4) (WARNINGLY: 8) (TORTILLA: 5) (PEJORATIVES: 7) (REMAPPING: 11) (MORATORY: 7) (CUBIST: 8) (HYDROGRAPHIC: 4) (POLYHYDROXY: 4) (UNFERTILE: 5) (RAPTURE: 5) (ACTINIAE: 3) (ASPHERICAL: 4) (SCALPER: 7) (PLASHER: 8) (PUMELOS: 5) (KITES: 25) (WITNESSING: 5) (ACEQUIA: 7) (GLORIFIER: 5) (ENTRAPS: 6) (ENTIRENESS: 2) (PITCHERSFUL: 5) (WHERVE: 6) (FLEISHIG: 3) (DIABOLOS: 6) (POTENTATE: 6) (REARRANGING: 6) (STRICTURE: 5) (VIOLATOR: 5) (OUTBLAZED: 6) (DISINVITED: 3) (RELICS: 8) (WIDEAWAKES: 5) (ZANINESSES: 5) (ETHYNYLS: 6) (GERMANDERS: 4) (UNGRATEFULLY: 6) (REPEATED: 7) (CHERUBLIKE: 5) (OSMETERIUM: 3) (TYPOLOGICALLY: 6) (SAMBARS: 5) (SHOPHROTH: 6) (ACIDNESS: 7) (LITIGIOUSNESS: 5) (STROBILATIONS: 6) (PROFESSIONAL: 4) (SUBVERTING: 6) (OSMETERIUM: 3) (STRICTURE: 5) (OUTSPEAKS: 6) (ZEBROID: 7) (DIRECTIONS: 5) (LOBSTERING: 6) (TERRITORY: 6) (VEGANS: 8) (SUNFAST: 4) (ZAZENS: 25) (VISCOUNTESSES: 4) (YAUPS: 25) (XANTHATES: 4) (DERIVATIZATION: 4) (PHOTOEMISSIVE: 2) (BEACHGOERS: 6) (SIXTIES: 5) (DJELLABAH: 5) (BILLOWIEST: 5) (SALTBUSH: 5)"

12:13:54.368 module=Hangman.Player.Worker [info] Terminating Player Worker #PID<0.432.0>, reason: :normal

This is when key {"typhoon", 20} is being referenced in the reduce phase. Notice the acc list is different from the one above.

key: {"typhoon", 20}, key scores " (MANGANIC: 8) (INFUSIBLE: 5) (EMPTIEST: 4) (APOTHECIAL: 4) (FLEXURES: 5) (CONDENSATION: 7) (GUMMATA: 4) (KIWIFRUIT: 5) (UNRULED: 25) (MARRIER: 10)"
key: “typhoon”, scores acc: " (GOODLIER: 7) (BEACHGOERS: 6) (TABOURER: 6) (TRACHEOSTOMY: 3) (PREFORMULATE: 3) (COCKNEYS: 8) (VEXILLA: 7) (SERIALIZATIONS: 8) (FORMABILITY: 7) (LAMELY: 5) (CANVASSER: 3) (STROLLER: 8) (UPBIND: 9) (GERMANIUM: 4) (JOLLIFICATION: 6) (EXTERNALIZE: 4) (TRUSTEESHIP: 2) (TYPISTS: 6) (PRUNERS: 6) (KILTED: 25) (INTEGRALLY: 5) (CANTILENA: 3) (LUCKIEST: 9) (BOLIVIANOS: 5) (ANTENNULAR: 5) (CRYPTOGAMOUS: 4) (CANVASSER: 3) (SYSTEMICS: 6) (ARGENTITES: 5) (PHOTOTYPESETTER: 2) (BIGGIE: 8) (MONOCLINE: 5) (OVERHUNTS: 8) (BURBLING: 12) (STAPEDECTOMY: 3) (FORMABILITY: 7) (BOLTROPES: 7) (GLADDING: 7) (CADENCES: 6) (THERMOSPHERES: 4) (STOREKEEPER: 3) (OUTHUMORS: 6) (NUCLEOPROTEINS: 2) (LOCATES: 7) (INTERABANGS: 4) (CHIVARI: 5) (MUTINOUSNESS: 5) (HOBBITS: 9) (BROOK: 9) (EMBROWN: 7) (UNRULED: 25) (BARBULE: 6) (ENWREATHE: 3) (INDIVISIBLES: 2) (DINOSAUR: 6) (ROUGHDRIES: 6) (UNENTHUSIASTIC: 3) (MISSORTED: 7) (ANTHRACITE: 3) (TETRAHEDRAL: 4) (ENLARGED: 6) (HEROS: 25) (UNANNOUNCED: 5) (CAVORTS: 6) (WEIMARANER: 4) (WITHY: 8) (CEREMONIALISTS: 2) (JACKBOOT: 8) (JACKBOOT: 8) (DJELLABAH: 5) (POLYHYDROXY: 4) (STONES: 10) (COURTSIDE: 6) (MATRICULANT: 5) (DAREDEVILS: 6) (POLLINATORS: 6) (DISSIPATES: 3) (SOOTH: 5) (BUTTONS: 7) (PAROCHIAL: 4) (WELDORS: 8) (VUGHS: 25) (FLOCCOSE: 5) (UNITED: 9) (TERMINUSES: 5) (MISSPENDS: 6) (LAVALAVAS: 5) (INSTANCIES: 3) (REIMPRESSIONS: 3) (ASSOCIATIVITIES: 3) (MANGANIC: 8) (INFUSIBLE: 5) (EMPTIEST: 4) (APOTHECIAL: 4) (FLEXURES: 5) (CONDENSATION: 7) (GUMMATA: 4) (KIWIFRUIT: 5) (UNRULED: 25) (MARRIER: 10)"

I was expecting the two accs to be merged in the final result, but they aren’t. That’s why I’m missing half the word scores.

Final result:

" (GOODLIER: 7) (BEACHGOERS: 6) (TABOURER: 6) (TRACHEOSTOMY: 3) (PREFORMULATE: 3) (COCKNEYS: 8) (VEXILLA: 7) (SERIALIZATIONS: 8) (FORMABILITY: 7) (LAMELY: 5) (CANVASSER: 3) (STROLLER: 8) (UPBIND: 9) (GERMANIUM: 4) (JOLLIFICATION: 6) (EXTERNALIZE: 4) (TRUSTEESHIP: 2) (TYPISTS: 6) (PRUNERS: 6) (KILTED: 25) (INTEGRALLY: 5) (CANTILENA: 3) (LUCKIEST: 9) (BOLIVIANOS: 5) (ANTENNULAR: 5) (CRYPTOGAMOUS: 4) (CANVASSER: 3) (SYSTEMICS: 6) (ARGENTITES: 5) (PHOTOTYPESETTER: 2) (BIGGIE: 8) (MONOCLINE: 5) (OVERHUNTS: 8) (BURBLING: 12) (STAPEDECTOMY: 3) (FORMABILITY: 7) (BOLTROPES: 7) (GLADDING: 7) (CADENCES: 6) (THERMOSPHERES: 4) (STOREKEEPER: 3) (OUTHUMORS: 6) (NUCLEOPROTEINS: 2) (LOCATES: 7) (INTERABANGS: 4) (CHIVARI: 5) (MUTINOUSNESS: 5) (HOBBITS: 9) (BROOK: 9) (EMBROWN: 7) (UNRULED: 25) (BARBULE: 6) (ENWREATHE: 3) (INDIVISIBLES: 2) (DINOSAUR: 6) (ROUGHDRIES: 6) (UNENTHUSIASTIC: 3) (MISSORTED: 7) (ANTHRACITE: 3) (TETRAHEDRAL: 4) (ENLARGED: 6) (HEROS: 25) (UNANNOUNCED: 5) (CAVORTS: 6) (WEIMARANER: 4) (WITHY: 8) (CEREMONIALISTS: 2) (JACKBOOT: 8) (JACKBOOT: 8) (DJELLABAH: 5) (POLYHYDROXY: 4) (STONES: 10) (COURTSIDE: 6) (MATRICULANT: 5) (DAREDEVILS: 6) (POLLINATORS: 6) (DISSIPATES: 3) (SOOTH: 5) (BUTTONS: 7) (PAROCHIAL: 4) (WELDORS: 8) (VUGHS: 25) (FLOCCOSE: 5) (UNITED: 9) (TERMINUSES: 5) (MISSPENDS: 6) (LAVALAVAS: 5) (INSTANCIES: 3) (REIMPRESSIONS: 3) (ASSOCIATIVITIES: 3) (MANGANIC: 8) (INFUSIBLE: 5) (EMPTIEST: 4) (APOTHECIAL: 4) (FLEXURES: 5) (CONDENSATION: 7) (GUMMATA: 4) (KIWIFRUIT: 5) (UNRULED: 25) (MARRIER: 10)"

Can you please gist the output and link instead of pasting it all here? It’s making the thread increasingly difficult to navigate.

Thanks - check it now. Hopefully that helps

So I’ve figured it out. Instead of collecting the flow into a list, I was collecting it into a map, which doesn’t work here. It wasn’t clear in the docs that this was a requirement.

A) |> Enum.into([])

B) |> Enum.into(%{})

Given 40 secret tokens, the list version A) gives the results for all of the words:

result is [{"typhoon", " (TACKY: 9) (UNBREECHED: 3) (MESENCEPHALON: 4) (DEMORALIZATIONS: 5) (IMPAINT: 4) (LOREAL: 8) (DEGRADE: 6) (SPLENDIDNESS: 3) (LAIRED: 8) (MOSASAUR: 5) (TALCKING: 7) (WINDSOCK: 6) (SWORDSMANSHIP: 5) (WIMBLING: 25) (LUMBERYARD: 6) (DINNERWARE: 5) (FRONTCOURT: 6) (TACKY: 9) (CHOLANGIOGRAMS: 4) (COPULATING: 7)"}, {"typhoon", " (BARATHEA: 5) (SIGNALISED: 4) (LETTING: 10) (MORTISERS: 5) (RADIANCIES: 5) (ABLAUT: 6) (CALLS: 25) (CLUNG: 25) (PHILANDERERS: 3) (NONCONCLUSION: 4) (BIFACIALLY: 4) (SEAGOING: 9) (LEUKEMIC: 7) (AROYNTED: 8) (PETNAPPING: 4) (RESCINDED: 6) (WRIGGLER: 9) (PINNING: 8) (TOASTMISTRESSES: 3) (MOSASAUR: 5)"}]

Given 40 (different) secret tokens, the map version B) gives only 20 word scores:

result is %{"typhoon" => " (LITHOGRAPHIES: 5) (PERIDOTIC: 4) (EXPULSED: 5) (MUSTY: 25) (WHEEPED: 8) (NONHEMOLYTIC: 3) (RUTILES: 9) (LANDAUS: 5) (REACCUSING: 8) (GRANDSTANDS: 5) (FACILITATIONS: 9) (SURLIEST: 7) (UNRECOGNIZED: 4) (TAXICAB: 7) (HALACHA: 5) (EXTERMINATED: 3) (LIQUIDAMBAR: 4) (TITHINGS: 7) (COELOMATE: 4) (SPINIFEXES: 5)"}
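This list-versus-map difference can be reproduced without Flow at all: a map holds exactly one value per key, so collecting {key, value} tuples into %{} makes later tuples overwrite earlier ones with the same key, while [] keeps every tuple. A minimal stdlib-only illustration:

```elixir
pairs = [{"typhoon", "scores from partition 1"},
         {"typhoon", "scores from partition 2"}]

# Into a list: both tuples survive.
Enum.into(pairs, [])
#=> [{"typhoon", "scores from partition 1"},
#    {"typhoon", "scores from partition 2"}]

# Into a map: the second tuple overwrites the first,
# because a map holds one value per key.
Enum.into(pairs, %{})
#=> %{"typhoon" => "scores from partition 2"}
```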

Cheers