This is not exactly that. The unification there means that when you call :logger.set_process_metadata/1 it also affects the return value of Logger.metadata/0, so utilities like OpenCensus do not need a separate wrapper to handle both logging libraries. It is not about structured logging at all, as that would be a breaking change for formatters and/or backends.
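For illustration, a minimal sketch of that unification (assuming Elixir 1.10+ on OTP 21+, where Elixir's Logger stores its metadata in the Erlang logger process metadata; the keys and ordering are made up):

:logger.set_process_metadata(%{request_id: "abc123"})
Logger.metadata()
#=> [request_id: "abc123"]

Logger.metadata(user_id: 42)
:logger.get_process_metadata()
#=> %{request_id: "abc123", user_id: 42}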
This is the whole point. The metadata should only be “additional data” that gives you context about the log entry; it should not contain log data like the request duration, as that belongs to the log entry itself.
An example of the difference between structured logging and the “classical” approach:
In Plug.Logger right now we have (on an abstract level):
duration = measure_duration(call_next_plugs())
Logger.info("Respond #{conn.resp_code} on #{conn.path} in #{duration}", request_id: req_id)
While a structured log would look like this:
duration = measure_duration(call_next_plugs())
Logger.info(%{resp_code: conn.resp_code, req_path: conn.path, req_duration: duration}, request_id: req_id)
So you can see that the log entry contains the “core” values of the log, and the metadata is for, well, metadata.
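To make that split concrete, here is a minimal sketch of a handler for Erlang's logger (the module and handler names are made up, and it assumes a recent Elixir where Logger passes map reports through to logger unchanged) that receives the structured data and the metadata as separate values rather than one formatted string:

defmodule MyApp.StructuredHandler do
  # The log event is a map with the report under :msg and the metadata under :meta,
  # so the "core" values and the context never get squashed into a single string.
  def log(%{level: level, msg: {:report, report}, meta: meta}, _config) do
    IO.puts("#{level} #{inspect(report)} request_id=#{inspect(meta[:request_id])}")
  end

  # ignore everything that is not a report
  def log(_event, _config), do: :ok
end

# attach it to the logger that is already running:
# :logger.add_handler(:structured, MyApp.StructuredHandler, %{})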
Not much, to be honest. Both of these are just dispatching some data to different backends. The differences are even less distinct in the case of Erlang's logger and telemetry. The only difference, in fact, is that logger also passes the data through a formatter.
All of telemetry, Logger, and logger are basically message queues.
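As a rough illustration of that parallel (the event name, module, and fields are made up, and it assumes the telemetry package is available), both calls below hand a map of measurements plus some context over to whatever handlers are attached, with logger additionally running the data through a formatter on the way out:

defmodule MyApp.Dispatch do
  require Logger

  def report(path, duration_ms, req_id) do
    # telemetry: synchronously calls every handler attached to this event
    :telemetry.execute([:myapp, :request, :stop], %{duration: duration_ms}, %{path: path})

    # logger: dispatches the report and metadata to the configured handlers
    Logger.info(%{req_path: path, req_duration: duration_ms}, request_id: req_id)
  end
end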
The main point is that with logger replacing telemetry and broader usage of structured logging, it will be easier to implement white-box telemetry in BEAM core rather than adding a new library to the default release. Especially as logger is in the kernel application, so it is always up and running in every Erlang application on OTP 21+, since that is an application which is (almost?) impossible not to run.
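As a sketch of what that could look like (the handler id and filter are made up, reusing the handler module sketched earlier), white-box telemetry then boils down to attaching one more handler to the logger that kernel is already running, with no extra library involved:

:logger.add_handler(:request_metrics, MyApp.StructuredHandler, %{
  # drop everything except reports that carry a duration measurement
  filter_default: :stop,
  filters: [
    duration_only:
      {fn
         %{msg: {:report, %{req_duration: _}}} = event, _extra -> event
         _event, _extra -> :ignore
       end, []}
  ]
})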