Is plug_logger_json async?

I am using it for request-response logging in my Elixir application. Everything works fine, and I can see my logs in my log file.
I just want to know whether the request-response logging is synchronous or asynchronous.
If it is synchronous, wouldn't it affect the response time of all the APIs? I am trying this in my local environment, and I am not able to figure it out.
Can someone please help? And is it advisable to use this logging mechanism in a production environment?
Thanks in advance.

If it uses the Logger application that is part of Elixir, then logging is just sending a message to another process, which is then responsible for printing it, storing it in a file, sending it to Elasticsearch, or whatever else…
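As a minimal sketch of what that looks like from the calling process's point of view (the log text is made up for illustration):

```elixir
require Logger

# Logger.info/1 returns :ok as soon as the message has been handed off
# to the Logger process; the configured backend writes it to the
# console or file afterwards, outside the request's critical path.
:ok = Logger.info("request handled")
```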


Yes, they are using :logger.

My config looks like this:

config :logger, :console,
  format: "$time $metadata[$level] $message\n",
  metadata: [:user_id]

config :logger,
  format: "$message\n",
  backends: [{LoggerFileBackend, :log_file}, :console]

config :logger, :log_file,
  format: "$message\n",
  metadata: [:user_id],
  path: "logs/my_pipeline.log"

config :plug_logger_json,
  filtered_keys: ["password", "authorization"],
  suppressed_keys: ["api_version", "log_type"]

Does this imply it's async?
I am a beginner in Elixir; sorry if I am asking a stupid question.

What do you mean by async?

If you mean that your process can continue before the message has been fully handled, yes.


I am using plug_logger_json as a plug in endpoint.ex to log API requests to the logs/my_pipeline.log file. By async I mean: does the application wait until the log is written to the log file before sending the API response back to the client, or does it send the response regardless of whether the entry has been written to the log file?

I think I answered this twice. Yes, your process only needs to wait until the log message has been sent to the process that actually handles it and writes it to disk/stdout/whatever.


Thanks, that resolves my doubt.


Just adding a detail: Logger will switch from async to sync mode under some circumstances. It goes into synchronous mode to apply backpressure when it is struggling to keep up, which means logging can slow down an application if the load is high enough. The threshold for switching is 20 by default but can be configured with :sync_threshold. See the available options here
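A sketch of how that could be set in config (the value 100 is purely illustrative, not a recommendation):

```elixir
# config/prod.exs — assumption: tune this to your own tolerance for
# lost messages vs. blocked request processes.
config :logger,
  sync_threshold: 100   # switch Logger to synchronous mode above this backlog
```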


Hey, thanks @jola for the response, it's been really insightful. Just a quick question on sync_threshold: what should I set the threshold to for prod?
I have googled and found that people have been using 350 in production. What, in your opinion, is a good number for prod?
I would be really thankful for this help.


The number mostly matters if you expect sudden but short spikes of logs. If your system is under a constant load higher than the logger can handle, increasing the number doesn't really help.

Basically, the number is how many log messages you can accept losing if your system goes down. If you are okay with losing up to the last 350 log messages, then that's the right number for you. It also means your system will not be slowed down by Logger until you have a backlog of 350 messages, which is great for short spikes of log messages. The larger the number, the bigger the spike you can handle without Logger slowing you down, but you also risk losing more messages. I can't tell you how to make this trade-off; the safe option is to keep the number low.

If you're tweaking this, also note that there's another option: :discard_threshold, which is 500 by default. It is the limit at which Logger starts throwing away messages because it can't keep up.
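A sketch of setting both thresholds together (the sync_threshold value is illustrative; 500 is the default discard_threshold mentioned above):

```elixir
config :logger,
  sync_threshold: 100,     # above this backlog, callers block (backpressure)
  discard_threshold: 500   # above this backlog, new messages are dropped
```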


@jola @NobbZ Thank you for the help 🙂
