One thing I feel like I keep running into as I write more Elixir is figuring out how to handle dependent fallible operations. Consider the following example:
with {:ok, file1} <- File.open("file1.txt"),
     {:ok, file2} <- File.open("does_not_exist.txt") do
  # ...
else
  {:error, err} -> err
end
In this example, we can successfully open file1.txt, but not does_not_exist.txt. At this point, how do we handle this error?
Ideally, we would want to File.close(file1), but file1 is not in scope in the else block. This makes sense, because for all we know the first operation could have been the one that failed. However, we're at risk of leaking file descriptors if we don't close file1 upon failing to open file2. (FWIW: I'm not sure whether the standard library somehow auto-cleans file handles, but I've hit this with a few other things, such as spawning multiple dependent processes. Regardless of whether it does, I think the question still holds.)
If I were writing more procedural code like Go, I might do something along the lines of deferring a cleanup right after each successful acquisition.
Is there a more elegant way of dealing with this? Unfortunately, the only approach I can think of is nested case statements, which can obviously become unwieldy.
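For reference, here is my sketch of what the nested-case version looks like (the body that actually uses the files is hypothetical). Each failure arm has to close everything acquired so far by hand, which is why it gets unwieldy as resources pile up:

```elixir
File.write!("file1.txt", "hello\n") # ensure the first file exists for this sketch

result =
  case File.open("file1.txt") do
    {:ok, file1} ->
      case File.open("does_not_exist.txt") do
        {:ok, file2} ->
          # ... use file1 and file2 ...
          File.close(file2)
          File.close(file1)
          :ok

        {:error, _reason} = error ->
          # file1 IS in scope here, so we can clean it up before bailing
          File.close(file1)
          error
      end

    {:error, _reason} = error ->
      error
  end

result # {:error, :enoent}
```

With only two resources this is tolerable; with three or four, every extra level repeats all of the earlier cleanup.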
Let’s start with this. A key thing to recognize is that these file descriptors are tied to the lifetime of the process that opened them. If a process opens various files and then terminates, those files are all closed even if the process utterly fails to handle them directly.
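This is easy to see for yourself. In the sketch below (my own illustration, not from the thread), a throwaway process opens a file and hands the IO device back before exiting; because the device is itself a process tied to its owner, it shuts down once the opener is gone:

```elixir
File.write!("file1.txt", "hello\n") # make sure the file exists for the demo
parent = self()

{pid, ref} =
  spawn_monitor(fn ->
    {:ok, file} = File.open("file1.txt")
    send(parent, {:file, file})
    # the opener exits here, taking the file's IO device down with it
  end)

file =
  receive do
    {:file, f} -> f
  end

# wait until the opening process has actually terminated
receive do
  {:DOWN, ^ref, :process, ^pid, _reason} -> :ok
end

Process.sleep(50)    # give the IO device a moment to notice its owner died
Process.alive?(file) # false — the handle did not outlive its owner
```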
From there you can see we have a path. In general, if you have several files or resources you need to acquire in order to proceed, you're almost always best off doing so in a process that is assertive about getting those resources and chooses to crash if it does not. This cleans everything up nicely, and you can monitor the success or failure of that process externally.
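Concretely, that advice might look something like this sketch (my own; File.open!/1 raises on failure, so the process crashes and the runtime reclaims anything it had already opened):

```elixir
File.write!("file1.txt", "hello\n") # the first open will succeed

{pid, ref} =
  spawn_monitor(fn ->
    file1 = File.open!("file1.txt")          # raises on failure
    file2 = File.open!("does_not_exist.txt") # crashes here; file1 is reclaimed
    # ... do all the work with file1 and file2 inside this process ...
    File.close(file2)
    File.close(file1)
  end)

outcome =
  receive do
    {:DOWN, ^ref, :process, ^pid, :normal} -> :ok
    {:DOWN, ^ref, :process, ^pid, reason} -> {:error, reason}
  end
```

Note that the work happens inside the spawned process, so no handle ever has to outlive it.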
So in this case, would you attempt to isolate the file opening into a Task, and then use Task.yield on it?
In my use case, I'm actually using :timer.apply_interval, but it wasn't until you prompted me to look that I realized it is actually linked to the starting process (neat!).
Perhaps; it depends on what the life cycle of your existing process is. If it's an HTTP request with Cowboy, or a background job with Oban, then you're already good! Those are not long-lived processes; they only hang around long enough to answer their request or run their job. If it's something longer-lived, then yeah, I'd consider throwing it in a task.
Interesting. That feels a bit cumbersome, as (if this were something long-running) we would inevitably need to spawn a task supervisor to handle the exits, since Task.async links the task to our process. Certainly not impossible, though! I imagine this is just growing pains as I learn to think in processes, as it were.
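For the long-running case, a sketch of how that might look with Task.Supervisor.async_nolink/2 (my illustration; the supervisor keeps the crash from propagating to the caller):

```elixir
File.write!("file1.txt", "hello\n")

{:ok, sup} = Task.Supervisor.start_link()

task =
  Task.Supervisor.async_nolink(sup, fn ->
    file1 = File.open!("file1.txt")
    file2 = File.open!("does_not_exist.txt") # crashing here closes file1 too
    # work happens inside the task, so the handles die with it
    {IO.read(file1, :line), IO.read(file2, :line)}
  end)

outcome =
  case Task.yield(task, 5_000) || Task.shutdown(task) do
    {:ok, result} -> {:ok, result}
    {:exit, reason} -> {:error, reason}
    nil -> {:error, :timeout}
  end
```

Because the task is not linked to the caller, its crash surfaces as an {:exit, reason} return from Task.yield/2 rather than taking the caller down.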
It's likely overkill for my specific use case, but this looks like almost exactly what I had in mind. Thank you for the pointer! It definitely seems useful in more complicated cases.
Sage is a good library for that. You can also add a variable that you update at each step of the with expression:
with state = Map.new(),
     {:file1, _state, {:ok, file1}} <- {:file1, state, File.open("file1.txt")},
     state = Map.put(state, :file1, file1),
     {:file2, _state, {:ok, file2}} <- {:file2, state, File.open("does_not_exist.txt")} do
  # ...
else
  {:file1, _state, error} ->
    error

  {:file2, state, error} ->
    file1 = Map.get(state, :file1)
    File.close(file1)
    error
end
Basically, you use a unique tag (:file1 and :file2) to match, in the else clause, exactly where the with expression failed, while passing along the updated state in case you need to undo the side effects.
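If you find yourself doing this a lot, the same idea generalizes. Here is a sketch of a generic helper (my own; the Acquire module and its shape are hypothetical, not from Sage): acquire resources one at a time and roll back the already-acquired ones on the first failure:

```elixir
defmodule Acquire do
  # Each step is {name, open_fun, close_fun}.
  def all(steps) do
    Enum.reduce_while(steps, {:ok, %{}}, fn {name, open, _close}, {:ok, acc} ->
      case open.() do
        {:ok, resource} ->
          {:cont, {:ok, Map.put(acc, name, resource)}}

        {:error, reason} ->
          rollback(acc, steps)
          {:halt, {:error, {name, reason}}}
      end
    end)
  end

  # Close everything that was successfully opened.
  defp rollback(acquired, steps) do
    for {name, _open, close} <- steps, resource = acquired[name], do: close.(resource)
  end
end

File.write!("file1.txt", "hello\n")

Acquire.all([
  {:file1, fn -> File.open("file1.txt") end, &File.close/1},
  {:file2, fn -> File.open("does_not_exist.txt") end, &File.close/1}
])
# {:error, {:file2, :enoent}}, with file1 closed by the rollback
```

This keeps the acquisition order, the error tag, and the cleanup in one place instead of scattering them across else clauses.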