To explain what I mean, consider the following code:
```elixir
Task.async(fn ->
  for _ <- 0..5 do
    :timer.sleep(100)
    IO.write(".")
  end
end)

IO.gets("Waiting for input? ")
```
The output it produces looks like this:

```
Waiting for input? .Waiting for input? .Waiting for input? .Waiting for input? .Waiting for input? .Waiting for input? .Waiting for input?
```
I want it to look like this:

```
.....Waiting for input?
```

or like this:

```
Waiting for input?
.....
```
Of course, in this simple case I could just use Task.await, but in the real case of async tests in ExUnit I don't know which processes are currently running or what output they produce.
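For reference, this is roughly the Task.await variant I mean (just a sketch of my own snippet above):

```elixir
# Await the task before prompting, so all the dots finish printing
# and the prompt appears cleanly afterwards.
task =
  Task.async(fn ->
    for _ <- 0..5 do
      :timer.sleep(100)
      IO.write(".")
    end
  end)

Task.await(task)
IO.gets("Waiting for input? ")
```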
Is there any way to suppress IO output, or to pause IO-producing processes, while any process is waiting for user input?
```elixir
defmodule InputWaiter do
  # Prints the prompt, blocks on user input, then forwards the line
  # to the receiver process.
  def waiter(receiver) do
    IO.puts("Waiting for input?")
    rv = IO.gets("")
    send(receiver, rv)
  end

  # Prints a dot every 100 ms until a message (the user's input) arrives.
  def receiver() do
    receive do
      x -> IO.puts("User said: #{x}")
    after
      100 ->
        IO.write(".")
        receiver()
    end
  end

  def userMessage() do
    pid = self()
    spawn(fn -> InputWaiter.waiter(pid) end)
    receiver()
  end
end

InputWaiter.userMessage()
```
P.S. I like your idea in that library … it would be fantastic to pair it with property-based testing … is this what you're thinking?
How “unknown”? Do you have its PID? Can it send a message to another process? You could always name the process that produces the wait-status UI and have that “unknown” process send it a message when it should stop. With that setup you could also have these unknown processes wait on an IO provider of some sort which does the IO.gets() on their behalf … but I'm not sure what you'd do with some random process you don't start, don't know about, and don't control the code of … though I don't think that is your case?
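A rough sketch of the first idea, assuming you register the wait-status process under a name (the WaitStatus module and the :stop message are made up for illustration):

```elixir
# Hypothetical sketch: a named spinner process that any other process
# can stop by sending it a :stop message.
defmodule WaitStatus do
  def start do
    pid = spawn(fn -> loop() end)
    Process.register(pid, __MODULE__)
    pid
  end

  defp loop do
    receive do
      :stop -> :ok
    after
      100 ->
        IO.write(".")
        loop()
    end
  end
end

WaitStatus.start()

# Any process that knows the name can silence the dots before prompting:
send(WaitStatus, :stop)
IO.gets("Waiting for input? ")
```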
Yes, that is evident from the code and, as you note, I can imagine it being quite helpful! Paired with property-based testing, it could help fill the “hole” of testing functions that produce a variety of values (mappings, if you will) for which you then need to capture the correct outputs. That is especially true for adding in the failure cases that generated tests catch. I can see real potential for removing hand-work there!
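Something like this is what I have in mind, say with StreamData (the slugify/1 function and the property itself are only illustrative):

```elixir
# Illustrative only: a generated property that stands in for a pile of
# hand-written example cases, including awkward inputs the generator finds.
defmodule MyApp.SlugTest do
  use ExUnit.Case, async: true
  use ExUnitProperties

  property "slugify/1 never emits spaces and stays lowercase" do
    check all words <- list_of(string(:alphanumeric, min_length: 1), min_length: 1) do
      # MyApp.Slug.slugify/1 is a hypothetical function under test.
      slug = MyApp.Slug.slugify(Enum.join(words, " "))
      refute slug =~ " "
      assert slug == String.downcase(slug)
    end
  end
end
```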