Prevent SASL reports leaking sensitive data

When I have SASL reports enabled in my config for prod, i.e.

    config :logger,
      level: :info,
      handle_sasl_reports: true,
      backends: [:console]

I can see all sorts of sensitive data being logged that, ideally, shouldn’t be. The log statements I see are usually start_link calls like these:


    Start Call: Goth.Config.start_link([json: "JSON WITH CREDENTIALS"])
    Start Call: Segment.start_link("MY SEGMENT API KEY")
    Start Call: DBConnection.Connection.start_link(Postgrex.Protocol, [pool_index: 40, types: Postgrex.DefaultTypes, hostname: "localhost", port: 5432, repo: DB.ReadReplica, telemetry_prefix: [:db, :read_replica], otp_app: :db, timeout: 15000, database: "MY DB", username: "MY USERNAME", password: "MY PASSWORD", socket_dir: "/tmp/path", pool_size: 40, pool: DBConnection.ConnectionPool], #PID<0.7904.0>, #Reference<0.3253924283.239206406.56959>) 
    Redix.start_link([host: "MYREDISHOST.gce.cloud.redislabs.com", port: 13485, database: 0, password: "MY REDIS PASSWORD", name: ExqUi.Redis.Client, socket_opts: []])

etc. etc.

Most of these logs come from libraries we use, not from code I wrote directly. On top of that, some of the sensitive data is embedded in structs, some of it is plain strings or nested inside maps, so I don’t think implementing a custom Inspect like this will cut it either: https://hexdocs.pm/elixir/master/Inspect.html#module-deriving
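
For reference, the deriving technique from that link looks roughly like the sketch below (the struct and its fields are made up); it only helps for structs defined in my own code, not for the plain strings and maps that the libraries pass around:

    # Hypothetical struct: inspecting it redacts :password, but only
    # because we own the struct definition and can derive Inspect on it.
    defmodule MyApp.DBConfig do
      @derive {Inspect, except: [:password]}
      defstruct [:hostname, :username, :password]
    end

    # inspect(%MyApp.DBConfig{username: "me", password: "secret"})
    # #=> #MyApp.DBConfig<hostname: nil, username: "me", ...>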

Is there something I am doing wrong, or is there some way to prevent the SASL logs from leaking such info?

The most reliable way would be to get those 3rd-party packages to use one of the available techniques for preventing data leakage.

In fact, at least two of the libraries from your examples already support that:

  • In Redix you can pass an MFA tuple instead of a literal password, e.g. password: {System, :fetch_env!, ["REDIX_PASSWORD"]}
  • Postgrex will use the environment variable PGPASSWORD if the :password option is not set

Even if you use a different technique for fetching secrets (e.g. Vault), you can just call :os.putenv('PGPASSWORD', 'some_value') in your application initialization code and Postgrex will pick it up.
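
As a rough sketch (host/port are copied from your example, and "REDIX_PASSWORD" plus the literal placeholder below stand in for however you actually fetch secrets):

    # Redix resolves the MFA tuple when it needs the password, so the
    # literal value never appears in the start_link arguments:
    {:ok, _conn} =
      Redix.start_link(
        host: "MYREDISHOST.gce.cloud.redislabs.com",
        port: 13485,
        password: {System, :fetch_env!, ["REDIX_PASSWORD"]}
      )

    # Postgrex falls back to PGPASSWORD when :password is not given, so
    # set it at runtime before the Repo starts (System.put_env/2 is the
    # Elixir counterpart of the :os.putenv call above):
    System.put_env("PGPASSWORD", "password fetched from Vault or similar")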

Thanks, but I respectfully disagree. I don’t think this solves the problem, and it has its own set of problems on top. Primarily, it doesn’t prevent the library from internally passing the credentials to its child processes, which will then be logged in SASL progress reports too.

I have disabled SASL progress and crash reports for now, as I don’t think there is a good solution that doesn’t involve me maintaining a list of patterns to match log lines against and filtering things out that way. That doesn’t seem like something worth having.

Ideally, I’d keep SASL progress reports on but without logging full arguments; I don’t think that’s possible without messing around in Erlang, though.

In theory you could use a custom filter, but it will only work for secrets you are aware of. There is no real way to solve this problem in 100% of cases, as there is always the possibility that the raw value gets passed to other functions/processes and ends up leaked anyway.
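
Still, to make that concrete, here is a rough sketch of such a primary :logger filter (OTP 21+); the module name, filter id, and env var names are all made up. It simply drops any report whose inspected form contains one of the secret values you list, which is exactly the limitation above:

    defmodule SecretsFilter do
      # Stop any report that mentions one of the known secret values;
      # let every other log event pass through untouched.
      def filter(%{msg: {:report, report}} = event, secrets) do
        if String.contains?(inspect(report), secrets), do: :stop, else: event
      end

      def filter(_event, _secrets), do: :ignore
    end

    # e.g. early in Application.start/2, before the supervision tree boots:
    secrets = [System.fetch_env!("DB_PASSWORD"), System.fetch_env!("REDIS_PASSWORD")]
    :logger.add_primary_filter(:drop_reports_with_secrets, {&SecretsFilter.filter/2, secrets})

A fancier filter could rewrite the report to mask the values instead of dropping the whole event, but either way it only catches the secrets you have already enumerated.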