Interesting. One thing in the ssl application that caught my eye was the tls_record:encode_data
function. Consider the following:
> body = String.duplicate("-", 10 * 1024 * 1024)
> :erlang.memory
[
total: 44051952,
processes: 5186904,
processes_used: 5186032,
system: 38865048,
atom: 336049,
atom_used: 327632,
binary: 10548400,
code: 8012598,
ets: 454184
]
> :tls_record.encode_data([body], {3, 0}, %{
...>   :current_write => %{
...>     :beast_mitigation => :one_n_minus_one,
...>     :max_fragment_length => :undefined,
...>     :security_parameters => :ssl_record.initial_security_params(1)
...>   }
...> })
...
> :erlang.memory
[
total: 46457456,
processes: 4910424,
processes_used: 4909512,
system: 41547032,
atom: 442553,
atom_used: 418066,
binary: 11842480,
code: 8318166,
ets: 466312
]
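Doing the arithmetic on the two snapshots above (just subtraction of the reported counters):

```elixir
before_call = %{total: 44_051_952, binary: 10_548_400}
after_call = %{total: 46_457_456, binary: 11_842_480}

# total grows by ~2.3 MB, binary by ~1.2 MB
IO.puts(after_call.total - before_call.total)    # 2405504
IO.puts(after_call.binary - before_call.binary)  # 1294080
```

So most, but not all, of the growth shows up in the binary counter; the rest is spread across system allocations.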
That seems to create a roughly 2.5 MB spike even though the function doesn't complete successfully. With chunking applied, the delta doesn't occur when calling that function (although the total memory in my shell sits generally higher, ~50 MB instead of ~44 MB). I also called the :tls_record.encode_data function without wrapping the chunked binary in a list. Not scientific, but it could be a good place to dig in further. Changing beast_mitigation to disabled didn't seem to help, either.
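For reference, this is the kind of chunking I mean, sketched with a hypothetical Chunk module and an arbitrary 64 KB chunk size (neither is part of the ssl application; pick whatever size you're testing with):

```elixir
defmodule Chunk do
  # Hypothetical helper: split one large binary into a list of
  # fixed-size chunks before handing it to the record layer,
  # instead of passing a single 10 MB binary.
  @chunk_size 64 * 1024

  def split(bin) when is_binary(bin), do: do_split(bin, [])

  defp do_split(bin, acc) when byte_size(bin) <= @chunk_size,
    do: Enum.reverse([bin | acc])

  defp do_split(bin, acc) do
    <<chunk::binary-size(@chunk_size), rest::binary>> = bin
    do_split(rest, [chunk | acc])
  end
end
```

The list that Chunk.split/1 returns is then what I passed as the first argument (with and without the extra list wrapping mentioned above) to compare the memory deltas.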