Jason: a blazing fast JSON parser and generator in pure Elixir

Hello everybody.

I have just released Jason - a new JSON library.

You might be wondering: why do we need a new library? The primary focus of Jason is speed. And it is fast, really fast: usually twice as fast as Poison at both decoding and encoding, and much closer to the performance of jiffy (which is implemented in C), in some situations even faster.

Another goal was to retain maximal compatibility with Poison, so that it can serve as a drop-in replacement as much as possible. Compatibility was broken only in minor features, and only where it bought speed. The documentation outlines those differences, how to remedy them, and how to use Jason with Phoenix, Plug, Ecto, Postgrex and Absinthe.
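For anyone weighing the switch, the top-level API has the same shape as Poison's: `Jason.encode!/1` and `Jason.decode!/1` (plus the non-raising `encode/1` and `decode/1` variants) are the main entry points:

```elixir
# Decoding produces maps with string (binary) keys by default.
Jason.decode!(~s({"name": "jason", "fast": true}))
#=> %{"fast" => true, "name" => "jason"}

# Encoding accepts any data structure implementing the Jason.Encoder
# protocol; output is compact by default.
Jason.encode!(%{name: "jason"})
#=> "{\"name\":\"jason\"}"
```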

Both parser and generator fully conform to RFC 8259 and ECMA 404 standards. The parser is tested using JSONTestSuite and some property tests with the wonderful StreamData library.

Source: GitHub - michalmuskala/jason: A blazing fast JSON parser and generator in pure Elixir.
Docs: https://hexdocs.pm/jason
Benchmarks: decode-20-hipe.txt · GitHub, http://michal.muskala.eu/jason/decode.html and http://michal.muskala.eu/jason/encode.html

PS. I’m planning on writing a blog post about how I made it this fast :slightly_smiling_face:


Nice! The idea in issue #9 (jason_native) is also really interesting and well planned. Project starred and watching. Waiting for the final release. :smiley:


Great news! Happy to have a faster option that we know will play nice on the BEAM.


Wehoooo, thanks Michal! Honestly, in Elixir land we’re in need of a new JSON library imo, and not only for performance reasons… Although that helps as “migration bait”.

Looking forward to having a look at it and the benchmarks :grinning:

Thanks for all your great work in the ecosystem!


Great lib, thanks! Here is a suggestion for the lib logo :slight_smile: Jason


Need to have the symbols on his mask in the shape of a }


After over 1.5k downloads from hex and no complaints about any major bugs or issues, I’ve decided to release version 1.0. There were no breaking changes since the rc releases. If doubts about the stability or production readiness of the library kept you from giving it a try, now is a good time to do it :slight_smile:.

Current plans for 1.1 include support for pretty printing. Since there’s a pull request already open, I expect it to land pretty soon.
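For context, a sketch of how that could look once the PR lands, assuming an options-based API with a `:pretty` flag (the option name here is a guess modelled on Poison’s interface; check the docs once 1.1 ships):

```elixir
# Hypothetical usage; the final option name and formatting defaults
# depend on the pull request that actually gets merged.
Jason.encode!(%{name: "Jason", fast: true}, pretty: true)
```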


Congrats @michalmuskala!

I’m very intrigued by the HiPE numbers. As I understand it, there is some non-trivial overhead introduced when switching between HiPE and non-HiPE code. Are you able to offer guidance on when HiPE-enabled Jason is worth it?

Based on my exceedingly limited understanding, it would seem worth it when the total JSON payload to be encoded / decoded is frequently above a certain size. Is there a good way of figuring out what that size is?

There are two problems with HiPE. One is that switching between HiPE and BEAM is expensive. The other is that HiPE is generally much less tested and developed outside of the OTP team. Because of that there are sometimes stability problems and other issues. That’s primarily why many discourage use of HiPE. That said, when compiled with HiPE, all the property tests that we have in Jason pass without issues, so I’m as certain as I can be that we hit no bugs.

Going back to the first problem with HiPE. As long as you’re staying within HiPE code, all is fine. Switching between BEAM and HiPE (so calling a module that was not compiled with HiPE from a HiPE-compiled module) requires shuffling registers and serialising some state - that’s why it’s expensive.

The design of Jason is such that if you compile our main modules with HiPE, we should do a minimal number of switches. This is especially true of the parser: it never switches out of the Jason.Decoder module, other than at the end with the final result. This is very efficient. The encoder calls the protocol for any non-standard data types, so the back-and-forth switching would occur there. I haven’t really played with it, but we could make sure that if the encoder is compiled with HiPE, all the derived protocol implementations are as well; this should mostly alleviate the issue.
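For reference, an individual module can opt into native compilation with a module attribute. This is a sketch, not an official Jason option, and it assumes your Erlang/OTP was built with HiPE support (it isn’t on all platforms, and HiPE was removed entirely in later OTP releases):

```elixir
defmodule MyApp.HotPath do
  # Compile this module to native code via HiPE. On an OTP build
  # without HiPE this directive fails or is ignored, depending on
  # the OTP version.
  @compile :native
  @compile {:hipe, [:o3]}

  # Any call out to a non-HiPE-compiled module (like Enum here)
  # incurs the BEAM<->HiPE switching cost discussed above.
  def sum(list), do: Enum.sum(list)
end
```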

I haven’t looked into making HiPE compilation an easy option right now, but it’s definitely on the table. We could go for something similar to Poison:

config :jason, native_decoder: true, native_encoder: true
# or a combined
config :jason, native: true

Hi Michal. Did you ever write this blog post? I am interested in reading the tricks to achieve such performance. If you didn’t, would you mind sharing the list of tips so I can know what to look for when reading the source code?
Thank you


Btw, your personal website is down

Not exactly what you’re looking for, but this podcast seems to have him talk about some of these points: Michał Muskała on Ecto and jason – Elixir Internals | SmartLogic


Reading the key points, this is really helpful. Thank you @yreuvekamp

  • Creating jason, the default JSON parser and the impetus behind it.
  • Understanding lexing and tokenizing; largely the same thing with different names.
  • Macros on jason and forcing the VM to use optimizations in particular cases.
  • Performance on jason; how Michal achieved the speeds he did.