I wrote a new post detailing how to implement parallel comprehension using macros. Would love any comments and improvements!
Ooh! This is a lot of fun!
Awesome idea!
Yes it was! And very mind-bending in a very good way.
It would probably be good to give `para` an argument to specify the number of processes to run in, defaulting to the CPU count (×2?) or so. As it is, it is going to create a lot of processes, and will probably cause a speed hit in almost every case, since the `do:` body is generally very simple (like `{a, b, c}` in your example) and all the work happens in the iteration and filters.
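As a point of comparison at runtime (this is the standard library, not the macro approach from the post), `Task.async_stream/3` already supports exactly this kind of cap via `:max_concurrency`, which defaults to the scheduler count. A minimal sketch:

```elixir
# Bounded parallelism with Task.async_stream/3.
# :max_concurrency defaults to System.schedulers_online(); here we
# set it to 2x the scheduler count, as suggested above.
squares =
  1..10
  |> Task.async_stream(fn x -> x * x end,
    max_concurrency: System.schedulers_online() * 2
  )
  |> Enum.map(fn {:ok, result} -> result end)

# squares => [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
```

Results come back in input order by default (`ordered: true`), so it behaves like a comprehension with a bounded process pool.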
Very nice article though, shows how macros work and how to use them.
Thanks! You are right, it would be good to have an upper bound on the number of processes. Another thing is that it doesn’t support `:into`. But that should be an easy fix.
Hello,
I’d like to point something out about the code in the post…
In the macro, you use `me = self()`, but it is called inside the macro’s body, not inside the `quote do ... end` block, so it will be evaluated at compile time. To mitigate this, it would need to be updated like this:
```elixir
# 7. Collect the results
quote do
  # here
  var!(me) = self()

  unquote(pids)
  |> Enum.map(fn pid ->
    receive do
      {^pid, result} ->
        result
    end
  end)
end
```
and then referenced inside the altered `do` block:
```elixir
# 3. Wrap the do block in a spawn. Send the result to the
# current process.
spawn_do_block = quote do
  # here
  spawn(fn -> send(var!(me), {self(), unquote(do_block)}) end)
end
```
When I tried your approach in a project that gets compiled, it indeed showed the difference. Your code evaluated the PID of the process that was compiling the module, rather than the PID of the process executing the code at runtime… I observed that it works when the module is pasted into `iex`, since that is also the process that compiles it (I suppose), but in an actual project the compiler process and the runtime process differ…
PS: You could also avoid the use of `var!/1`, but then you would need to call `self()` before each `spawn`, once per iteration, which adds unnecessary calls (I suppose it is not that much of an overhead… but still…).
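Putting the fix together, here is a minimal self-contained sketch of the idea. The `pmap` macro name is mine, not the `para` macro from the post; the whole expansion lives inside one `quote` block, so `self()` is evaluated at runtime in the caller’s process, and `me` needs no `var!` because it is defined and used in the same quoted scope:

```elixir
defmodule Para do
  # Minimal parallel-map macro: spawns one process per element and
  # collects the results in input order. `pmap` is a hypothetical
  # simplification of the post's `para` macro.
  defmacro pmap(enum, do: body) do
    quote do
      # Evaluated at runtime, in the process calling the macro.
      me = self()

      pids =
        Enum.map(unquote(enum), fn var!(x) ->
          # var!(x) makes `x` visible to the caller-supplied body.
          spawn(fn -> send(me, {self(), unquote(body)}) end)
        end)

      # Receive selectively by PID, so results stay in order.
      Enum.map(pids, fn pid ->
        receive do
          {^pid, result} -> result
        end
      end)
    end
  end
end

defmodule Demo do
  require Para

  def run do
    Para.pmap 1..3 do
      x * x
    end
  end
end

# Demo.run() => [1, 4, 9]
```

Because `me` is bound inside the quoted block, it is captured once per call at runtime, avoiding both the compile-time PID problem and the per-iteration `self()` calls mentioned above.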
I know it is an old thread, but it is still searchable on Google, so I thought it might be good to point this out…
Anyway, a nice example of `Macro.prewalk`…