How to encode Decimal to float with the Jason library?

Hey guys,

I have a short and simple question, which I am currently unable to figure out using the documentation of the Jason library.

I have a struct which contains some keys whose values are Decimal structs. I would like the Decimal values (I am using the Decimal library) to always be converted to float when encoding to JSON. So far I only know that I would have to implement the Jason.Encoder protocol and have the encode function call Decimal.to_float(value). But I am not sure how to do that. How do I implement the protocol?

I have tried to just make a file like this:

defimpl Jason.Encoder, for: Decimal do
  def encode(struct, opts) do
    Jason.Encode.map(Decimal.to_float(struct), opts)
  end
end

However, this code never runs. Can anybody help? :slight_smile:

Thanks guys.

Please do not use floats. You will lose all the precision you initially gained by using Decimal. Please either use a string or dump it into a JSON object.

Aside from that, you should use Jason.Encode.float/1. From what I can see in the snippet you posted, the code should crash rather than silently not run. Can you therefore reproduce the problem in a fresh project and put it on GitHub?
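For reference, a protocol implementation needs no registration: any defimpl that gets compiled as part of the project is picked up automatically. A minimal sketch of what the suggested implementation could look like, assuming the jason and decimal dependencies are present (note that Jason already ships its own Jason.Encoder implementation for Decimal, so compiling this in a real project would clash with it):

```elixir
# Sketch only: Jason already provides an implementation of this
# protocol for Decimal (encoding the value as a string), so this
# would override or clash with it in a real project.
defimpl Jason.Encoder, for: Decimal do
  def encode(decimal, _opts) do
    # Jason.Encode.float/1 emits the number as a bare JSON float
    Jason.Encode.float(Decimal.to_float(decimal))
  end
end
```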

2 Likes

Thanks for your reply. Can you describe why using a floating point number in JSON is a bad idea? I am using Decimal internally to handle money values and want to return those to the frontend just for display in the view. Having the number as a string representation feels wrong to me. However, the default encoding encodes the value to a string, and I am not sure where this logic comes from.

edit: Okay, I found a pretty good explanation on StackOverflow of why I should not use floating points. I get it. But just for my personal knowledge, the question still remains, even though I would not use it anymore.

I don’t think a demo project is needed. I just can’t figure out how to make the code run. I only created a file decimal_encoder.ex with the protocol implementation. But I need to use or register the implementation somewhere so that Jason knows it should use my encoder, don’t I? And this is where I am stuck.

Floating point numbers might have their right to exist in JSON, but definitely not when your data source is a Decimal, as Decimal allows for exact values, which float does not.

This (encoding to a string by default) is good.

But to be honest, strings are the only way to represent numbers safely in JSON, as JSON does not differentiate between integers and floats but knows only about numbers. So numbers that look integral might get coerced to float without you being able to control it.

From Jason’s source: when the optional decimal dependency is present, Jason ships its own Jason.Encoder implementation for Decimal, encoding the value as a string.

So you cannot change it.

And here we already have a problem. What is meant as 1 euro and 5 cents might get rendered as 1.05 in the JSON, but since this is probably not exactly representable as a float, it might get loaded as 1.04999999 (or similar) and displayed as such. Or even worse, the client truncates or rounds, or does other things outside of your control.
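As a side note, the nearest double to 1.05 actually lands slightly above rather than below; either way the stored value is not exact, which you can check with just the standard library:

```elixir
# The closest 64-bit float to 1.05 is not 1.05 itself; asking for more
# decimal digits than the default shortest display reveals the stored value.
:erlang.float_to_binary(1.05, decimals: 20)
# => "1.05000000000000004441"
```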

Your user has the right to get the same (or at least equivalent) information that you are working with, especially when it’s the user’s money!

4 Likes

Thank you so much for your explanation. This totally makes sense and I’ll stick with strings for representing money values.

I was looking at the Jason source as I was expecting something like this, but I must have missed it. Thanks for sharing this. :blush:

1 Like

I knew where to look :wink: I already assumed Jason was implementing it for Decimal when I saw that decimal is an optional dependency of it.

Also, defimpls usually live in the same file as either the struct or the protocol, and as Jason only defines the protocol, it was clear where to look, as we can safely assume that @michalmuskala organizes his code idiomatically.

1 Like

Ack!!!

Among others!

Never ever ever ever ever use floats in relation to money, ever ever ever!

/me hopes they were clear enough ^.^;

2 Likes

I think you missed one „ever“. (I got it, I got it. :stuck_out_tongue:)

1 Like

Hmm, I thought that the real problem with floats was trying to operate with them, not displaying them per se.

I know the famous: 0.1 + 0.2 # == 0.30000000000000004, but if you are operating on Decimals in Elixir and do something like:

Decimal.add(Decimal.new("0.1"), Decimal.new("0.2"))

And then convert the result to a float, it’ll be presented just right, as 0.3.

What I’m trying to get at is that, in the end, I always have to use parseInt, parseFloat or Number on the frontend to convert those decimal strings into numbers, because you usually need those values for validations, limits, steps, etc.

Honest question:
Is there really a problem when going from a string representation to a float?

Is there an example of a decimal string representation that when converted to a float loses precision? Something like having "1.05" => converts to float => 1.049999?

There are enough; just take 0.3: you cannot represent that value as a float32, and the same is true for float64, which only gets you a closer approximation.
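Both points are easy to check from iex using only the standard library; :erlang.float_to_binary/2 with the :decimals option prints more digits of the stored double than the default shortest representation:

```elixir
# The classic symptom of binary floats:
0.1 + 0.2
# => 0.30000000000000004
0.1 + 0.2 == 0.3
# => false

# The double nearest to 0.3, shown with more digits:
:erlang.float_to_binary(0.3, decimals: 20)
# => "0.29999999999999998890"
```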

Play around with the float converter, it shows you what is possible and what is not.

https://www.h-schmidt.net/FloatConverter/IEEE754.html

Also, just as an exercise, try to find integers n and m such that 0.3 = m / (2 ** n).

Even though that is not the real representation, it will give you an estimate of what the problem is.
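The exercise indeed has no solution, and integer arithmetic shows why: 0.3 = 3/10, so m / 2^n = 3/10 would require 3 * 2^n to be divisible by 10, hence by 5, and 3 * 2^n contains no factor of 5. A quick sanity check (hypothetical snippet):

```elixir
# Is 3 * 2^n ever divisible by 10 for n up to 64? Never, because the
# factor of 5 in 10 cannot come from a power of two.
Enum.any?(1..64, fn n -> rem(3 * Integer.pow(2, n), 10) == 0 end)
# => false
```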

3 Likes

Sure, here’s a concrete example:

Suppose you have a service where people want to send you numbers, and you’re going to do some kind of processing with those numbers, and then send back a result. Suppose it’s super simple, like a math test for kids where they’re asked to round a number.

The task is: Round “2.675” to the nearest hundredths place.

You take that value, and parse it to a float:

iex(7)> {value, _} = Float.parse("2.675")
{2.675, ""}

Yay! that looks right!

iex(8)> value |> Float.round(2)          
2.67

Oh no…

This happens because it isn’t actually 2.675; it is 2.67499999999999982236431605997495353221893310546875, and Elixir is nice and displays something more friendly.
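The hidden value can be inspected the same way, with only the standard library:

```elixir
# Float.round/2 rounds the double the machine actually stores, which
# is slightly below 2.675, so half-up rounding yields 2.67:
Float.round(2.675, 2)
# => 2.67

# More digits of the stored double:
:erlang.float_to_binary(2.675, decimals: 30)
```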

Yes, if you care about precision at all. To be clear, plenty of times you don’t care about precision. Maybe you’re doing geolocation stuff where precision is carried as a separate term anyway, maybe you’re doing something in a GPU with floating point math. But if people want you to treat the number as it is written, you can’t convert it to a float, at all.

4 Likes

To be fair: Rounding probably falls in the category of “operate on a float”, but rounding is certainly more common for “view” related logic than arithmetic.

I was recently looking into a PHP-based money handling library using floats and round(float, precision), and was wondering how I could bring the point across that floats are not really great for money, but I couldn’t find a way to break it. Interestingly enough, round in PHP does work as expected with floats: round(2.675, 2) == float(2.68). Now I’m even more curious.

Oh, I see. Thanks for the explanation.

So the real solution would be to also use a decimal library on the frontend?

Yeah, if people care about precision, they should avoid floats at all layers and use decimal libraries at all layers.

2 Likes

It depends on whether the rounding happens on the binary data (which is what Elixir does in Float.round/2) or on the decimal digits (which is what I do in Cldr.Math.round/2). Perhaps PHP rounds using the decimal digits?
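The distinction can be made concrete with nothing but the standard library. A hypothetical helper (the module and function names are made up for illustration; real code would reach for Decimal.round or Cldr.Math.round/2) that rounds half-up on the decimal digits of a string, so 2.675 behaves the way PHP's round apparently does:

```elixir
defmodule DecimalRound do
  # Half-up rounding on the *decimal digits* of a non-negative number
  # kept as a string, using only integer arithmetic. This never touches
  # the binary float representation, so "2.675" rounds to "2.68".
  def round(string, places) do
    {int, frac} =
      case String.split(string, ".", parts: 2) do
        [i] -> {i, ""}
        [i, f] -> {i, f}
      end

    scale = String.length(frac)

    if scale <= places do
      # Already at most `places` fractional digits; nothing to do.
      string
    else
      # Work on the digits as one integer, e.g. "2.675" -> 2675 at scale 3.
      digits = String.to_integer(int <> frac)
      divisor = Integer.pow(10, scale - places)
      # Adding half the divisor before integer division rounds half-up.
      rounded = div(digits + div(divisor, 2), divisor)
      format(rounded, places)
    end
  end

  defp format(n, 0), do: Integer.to_string(n)

  defp format(n, places) do
    # Re-insert the decimal point `places` digits from the right.
    s = n |> Integer.to_string() |> String.pad_leading(places + 1, "0")
    {i, f} = String.split_at(s, -places)
    i <> "." <> f
  end
end

DecimalRound.round("2.675", 2)
# => "2.68"
```

With this, DecimalRound.round("2.675", 2) gives "2.68", while Float.round(2.675, 2) gives 2.67, because the latter rounds the binary value that is slightly below 2.675.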

0.1, because there is no finite sum of powers of 2 that equals that

1 Like