Memory Consumption in Requests

I have a question about memory consumption when handling a request from the frontend.

In this first example, let’s suppose the user created an order:

"order": {
  "order_code": "123",
  "items": [
    {
      name: "item name 1",
      quantity: 1,
      price: 100.0
    },
    {
      name: "item name 2",
      quantity: 2,
      price: 100.0
    },
    {
      name: "item name 3",
      quantity: 3,
      price: 100.0
    }
  ]
}

Each item must also carry the same “order_code”, for some business logic.
In the backend, we map over each item and copy the order_code into it, roughly as in the sketch below.
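
A minimal sketch of that mapping, assuming the payload has been decoded into string-keyed maps:

order_code = order["order_code"]

items =
  Enum.map(order["items"], fn item ->
    # Copy the order's code into every item
    Map.put(item, "order_code", order_code)
  end)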

In this second example, the frontend is responsible for sending the order_code with each item, and the backend should validate that the order_code in each item equals the order’s order_code (see the validation sketch after the example below).
(The customer doesn’t need to see this field, as long as the frontend sends it as a “hidden” input, for example.)

"order": {
  "order_code": "123",
  "items": [
    {
      name: "item name 1",
      quantity: 1,
      price: 100.0,
      order_code: "123"
    },
    {
      name: "item name 2",
      quantity: 2,
      price: 100.0,
      order_code: "123"
    },
    {
      name: "item name 3",
      quantity: 3,
      price: 100.0,
      order_code: "123"
    }
  ]
}
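
On the backend side, the validation could be a minimal sketch like this (again assuming string-keyed maps from the decoded JSON):

order_code = order["order_code"]

valid? =
  Enum.all?(order["items"], fn item ->
    # Every item must carry the same code as the order itself
    item["order_code"] == order_code
  end)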

Running some tests with Grafana and Prometheus, I got results where the request duration in milliseconds was higher in the first example than in the second.

But, measuring with :erlang.memory(:total), the second example was consuming more memory than the first, for some reason.

I thought the memory consumption would be greater in the first example, since new data is created in memory for the items with the added order_code.

How does memory consumption work in this case?
Is it advisable to do this kind of mapping in the backend, or should we just validate the data?

The difference here is so small that testing methodology is going to matter a lot unless you’re talking about a very large number of records per request. Total memory in particular is going to be a very noisy way of measuring the impact of your JSON serialization.

Can you say more about your use case and benchmarking approach? Have you tried micro benchmarking with benchee for a more focused comparison?
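
For example, a focused comparison could look like this sketch (the sample data is made up; memory_time: 2 enables Benchee’s per-scenario memory measurement):

items = for n <- 1..50, do: %{"name" => "item #{n}", "quantity" => n, "price" => 100.0}
items_with_code = Enum.map(items, &Map.put(&1, "order_code", "123"))

Benchee.run(
  %{
    "copy order_code (example 1)" => fn ->
      Enum.map(items, &Map.put(&1, "order_code", "123"))
    end,
    "validate order_code (example 2)" => fn ->
      Enum.all?(items_with_code, &(&1["order_code"] == "123"))
    end
  },
  memory_time: 2
)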


My use case is something like this example. But in the real case, I receive a bill; the bill has its own fields, plus some fields that must be copied into the installments.

bill = %{
  # ...the bill's own fields
  installments: [%{amount: 100}, %{amount: 100}, %{amount: 1000}]
}

From the bill I have to copy 5 fields into each installment object, and another 12 fields should be copied only if the installment object does not already have them filled.

I could receive 20 installments, or maybe 50… and this process could take a long request time.
I would have to map over each installment, copy the fields, and check whether or not each field should be copied from the bill, roughly like the sketch below.
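
A rough sketch of that mapping, where the field names are placeholders for the real ones:

forced_keys = [:field_a, :field_b]    # stand-ins for the 5 fields that are always copied
default_keys = [:field_c, :field_d]   # stand-ins for the 12 fields copied only when missing

forced = Map.take(bill, forced_keys)
defaults = Map.take(bill, default_keys)

installments =
  Enum.map(bill.installments, fn installment ->
    installment
    |> Map.merge(forced)                # the bill's values always win here
    |> then(&Map.merge(defaults, &1))   # the installment's own values win here
  end)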

It really shouldn’t. At the “dozens” scale, these operations should take microseconds, not milliseconds, and be nearly impossible to measure. Can you show a reproducible example of what you’re talking about?
