Separate file log for each Task

My Task runs some long, heavy computations: downloading a tarball, unpacking it, calling some external programs, and so on. I'd like to log this process, and Logger with logger_file_backend seems like a nice fit.
In the Task "body" I have the following code:

Logger.add_backend({LoggerFileBackend, id}) # id is a string, e.g. "1710.07035"
Logger.configure_backend({LoggerFileBackend, id},
  path: Path.join(dir, "output/run.log"),
  level: :info,
  metadata_filter: [task: id]
)

# actual work...

Logger.remove_backend({LoggerFileBackend, id}) 

When this code runs, I get this warning:

warning: passing non-atom as application env key is deprecated, got: "1710.07035"
  (elixir 1.11.3) lib/application.ex:621: Application.get_env/3
  (logger_file_backend 0.0.11) lib/logger_file_backend.ex:175: LoggerFileBackend.configure/3
  (logger_file_backend 0.0.11) lib/logger_file_backend.ex:21: LoggerFileBackend.init/1
  (stdlib 3.15) gen_event.erl:523: :gen_event.server_add_handler/4
  (stdlib 3.15) gen_event.erl:369: :gen_event.handle_msg/6
  (stdlib 3.15) proc_lib.erl:226: :proc_lib.init_p_do_apply/3

So the internal implementation uses :gen_event, which demands an atom; the problem is that there's a potentially unbounded number of ids. Using atoms is actually not a big deal, since I won't be running a million tasks at once, but it still feels kinda hacky. The obvious workaround (which leaks one atom per id, since atoms are never garbage collected) would be:
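
backend_id = String.to_atom(id) # e.g. :"1710.07035"
Logger.add_backend({LoggerFileBackend, backend_id})

Are there other avenues for capturing logs to a file?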

Seems like a weird approach. Why not just store the logs for the whole application in a single file and pass different metadata for different tasks?
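
As a sketch (going by logger_file_backend's documented path/level/metadata options; :app_log is just a backend name I picked):

config :logger, backends: [{LoggerFileBackend, :app_log}]

config :logger, :app_log,
  path: "log/app.log",
  level: :info,
  metadata: [:task]

With :task in the metadata list, every line gets tagged with its task id, so the entries stay distinguishable.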

Anyway, the best solution for you here would be to write a custom backend that wraps LoggerFileBackend. You could even build it so that it dynamically dispatches logs to different files depending on the metadata.
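
A minimal sketch of the dispatching idea, using the standard :gen_event backend contract. TaskFileBackend and the :base_dir option are names I made up, and a real version would keep file handles open instead of reopening the file on every event (which is what LoggerFileBackend itself does):

defmodule TaskFileBackend do
  @behaviour :gen_event

  @impl true
  def init(__MODULE__), do: init({__MODULE__, []})

  def init({__MODULE__, opts}) do
    state = %{
      base_dir: Keyword.get(opts, :base_dir, "log/tasks"),
      level: Keyword.get(opts, :level, :info)
    }

    {:ok, state}
  end

  @impl true
  def handle_event({level, _gl, {Logger, msg, ts, md}}, state) do
    # Route the event to a per-task file, keyed on the :task metadata.
    with task_id when is_binary(task_id) <- md[:task],
         false <- Logger.compare_levels(level, state.level) == :lt do
      File.mkdir_p!(state.base_dir)
      path = Path.join(state.base_dir, "#{task_id}.log")
      line = [format_ts(ts), " [", Atom.to_string(level), "] ", IO.chardata_to_string(msg), ?\n]
      # Naive: opens and closes the file for every message.
      File.write!(path, line, [:append])
    end

    {:ok, state}
  end

  # Ignore :flush and anything else.
  def handle_event(_event, state), do: {:ok, state}

  @impl true
  def handle_call({:configure, opts}, state) do
    {:ok, :ok, Map.merge(state, Map.new(opts))}
  end

  @impl true
  def handle_info(_msg, state), do: {:ok, state}

  defp format_ts({{y, m, d}, {h, mi, s, _ms}}) do
    :io_lib.format("~4..0B-~2..0B-~2..0B ~2..0B:~2..0B:~2..0B", [y, m, d, h, mi, s])
  end
end

You'd add it once at startup with Logger.add_backend(TaskFileBackend). Since Logger metadata is per-process and each Task runs in its own process, the task body only needs Logger.metadata(task: id) and all its log calls land in their own file; no add/remove dance per task.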

I'd like to show the logs on a webpage in case a task fails, and grepping through one chunky file (with the potential for race conditions) is no fun.
Thanks for the suggestion, I'll look into it.

I think this could be a better approach: since the number of error types is finite (e.g. a bad answer from a foreign server, bad function arguments, etc.; I don't know your exact errors), it's better to handle them so they don't end up as errors.
In my opinion, the log is only for unexpected errors, not for logic flow. If you expect a possible error, don't ignore it; do something with it. In your case I would handle them (the number of error types is surely finite) and, if it looks like a failure, record it to the DB, for example, as sketched below. Don't flood the error logs.
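
To illustrate (download_tarball/1, Repo, and TaskRun are made-up names standing in for your own code):

case download_tarball(url) do
  {:ok, path} ->
    unpack_and_process(path)

  {:error, reason} ->
    # An expected failure: persist it for the webpage instead of
    # logging an error.
    Repo.insert!(%TaskRun{task_id: id, status: "failed", reason: inspect(reason)})
end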

P.S. And I think a Task can raise an error that is not a logic error only when it comes from outside resources you don't control. It could be some hardware error, of course, but that's still unexpected and doesn't require logger customization.