I’ve created a small library that (mis)uses module (re)compilation to create very fast runtime storage. It performs extremely well for reads and extremely badly for writes, by design. The API is a simple key-value interface. It should be considered to be in a very early state; it’s not battle-tested at all at this point.
I’m aware of :persistent_term; it’s quite similar, of course. The difference is that persistent term is global, and you have to namespace your items by using tuple keys. Here you can have one or more module stores with just your stuff, which is nice. Your code will read MySpecialStore.get(... and it’s absolutely the fastest possible thing to have, which is also nice. Apart from writes, of course.
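For anyone curious what the trick looks like, here’s a minimal sketch of the general idea (RecompileStore and its put/3 are made up for illustration; this is not the library’s actual code):

```elixir
defmodule RecompileStore do
  # Rebuilds and hot-swaps the store module on every write; reads are then
  # plain function calls whose heads pattern-match on literal keys.
  def put(store, key, value) do
    current =
      if Code.ensure_loaded?(store) and function_exported?(store, :all, 0),
        do: store.all(),
        else: %{}

    data = Map.put(current, key, value)

    contents =
      quote bind_quoted: [data: Macro.escape(data)] do
        def all, do: unquote(Macro.escape(data))

        # One function head per stored key, so a read is a single dispatch.
        for {k, v} <- data do
          def get(unquote(Macro.escape(k))), do: unquote(Macro.escape(v))
        end

        def get(_key), do: nil
      end

    # Redefining an existing module triggers a compiler warning; that's part
    # of the price of the trick. The old code version is purged first.
    :code.purge(store)
    Module.create(store, contents, Macro.Env.location(__ENV__))
    :ok
  end
end

# Usage:
#   RecompileStore.put(MySpecialStore, :answer, 42)
#   MySpecialStore.get(:answer) #=> 42
```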
To be clear, modules are also global. If you want a similarly ergonomic API you could just write a wrapper function around the persistent term lookup. As someone who’s done a lot with compiled modules in Absinthe, persistent term is just so much better.
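i.e. something like this (a throwaway sketch; MyStore is a made-up name):

```elixir
defmodule MyStore do
  # Namespacing via a tuple key, ergonomics via the wrapper: callers still
  # write MyStore.get(:some_key), but the storage is :persistent_term.
  def put(key, value), do: :persistent_term.put({__MODULE__, key}, value)
  def get(key, default \\ nil), do: :persistent_term.get({__MODULE__, key}, default)
end
```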
Thanks for the feedback and for taking an interest. Can you elaborate on the problems you encountered? Do you have reason to believe this approach won’t work, or will cause problems under real-world conditions? I know you can’t store everything this way, like refs.
I’m aware modules are global too. But we still consider MyAppWeb.Endpoint to be my endpoint, as opposed to Phoenix.Endpoint, and it has its own config. So I was thinking along those lines.
:persistent_term was added in response to people creating modules at runtime to store data. It is explicitly an API to replace what you’re doing with a native one, which gives the OTP team a bit of control over the use case instead of people abusing unrelated APIs around modules.
Namespacing shouldn’t really prevent you from using :persistent_term either.
Ah, I hadn’t realized that. I thought I was doing something similar but with slightly different runtime characteristics. I figured it would be a nice little self-contained thing to contribute back, just trying to help out. It seems I’m reinventing the wheel in a way that is less round than the original, so it’s probably best to pull the lib.
Then again, the benchmark results do show some differences between the compiled module and :persistent_term, especially with regard to the standard deviation (whether the difference is significant is another question, of course). That’s why I figured there might be a case for it.
I guess my question is: will this go horribly wrong in some way that I’m missing? If so, I’m really not too proud to pull the lib. If not, then why can’t this be out there as an alternative, apart from the fact that this is not what the compiler is meant to do?
If we’re discussing abusing features of the language, have you benchmarked against using the process dictionary? Using it for configs is specifically called out as a possible appropriate use case in this post:
Write-once process parameters. Think of this as an initial configuration step. Stuff all the settings that will never change within a process in the dictionary. From a programming point of view they’re just like constants, so no worries about side effects.
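In code that would be something like the following (hypothetical names; the point is just that Process.put/2 happens once, at init):

```elixir
defmodule PdConfig do
  # Write-once: call this a single time when the process starts.
  def init(settings) when is_map(settings) do
    Process.put(__MODULE__, settings)
    :ok
  end

  # Read-many: a plain process-dictionary lookup, no message passing.
  def fetch(key, default \\ nil) do
    Process.get(__MODULE__, %{}) |> Map.get(key, default)
  end
end
```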
Yes, I know of that and benchmarked it too, but for some reason it performs quite poorly right now in comparison with ModuleStore and :persistent_term; maybe there’s been some regression in combination with OTP/Elixir upgrades somewhere. Also, it’s unmaintained and the compiler coughs up warnings about it.
Anyway, thank you all for the feedback. I think I’ll leave the lib out there and people can decide for themselves. If I become aware of practical issues with ModuleStore, I will re-evaluate!
One striking thing in your benchmarks is how poorly Application.get_env performs. It’s got me thinking I could/should put some :persistent_term.put calls in my config files for often-accessed config blocks. I don’t think readability suffers.
If anything, I’d think :persistent_term should match module access. I wonder if the variability is the JIT or something.
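Something like this, say (module and config names are made up; doing the put in Application.start/2 rather than literally in a config file keeps it inside the runtime VM):

```elixir
defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    # Copy an often-read config block into :persistent_term once at boot;
    # hot call sites then read :persistent_term.get({MyApp, :http}) directly.
    :persistent_term.put({MyApp, :http}, Application.get_env(:my_app, :http, []))

    Supervisor.start_link([], strategy: :one_for_one, name: MyApp.Supervisor)
  end
end
```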
While I agree that a lot of the uses of app config do seem like good candidates for persistent term, I do want to note that the benchmark is still showing tens of millions of operations per second. “Poor” isn’t really the way to look at it here.
It is curious to see the module coming in about twice as fast, though. Absinthe had the opposite experience, BUT it had very large values that were very tricky to compile into a module.
The benchmark file is included in the source under bench; maybe I mucked it up somehow? I deliberately tried to keep it simple, though: 100 pairs for every type of storage, then take only the 50th atom key as quickly as you can.
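For reference, the read benchmark is roughly this shape (a simplified sketch, not the actual bench/ file; the module store line assumes MySpecialStore was seeded with the same pairs):

```elixir
# Seed 100 pairs into each storage type, then hammer a single fixed key.
keys = for i <- 1..100, do: :"key_#{i}"

for {key, i} <- Enum.with_index(keys, 1) do
  :persistent_term.put({Bench, key}, i)
  Application.put_env(:bench_app, key, i)
end

target = :key_50

Benchee.run(%{
  "persistent_term" => fn -> :persistent_term.get({Bench, target}) end,
  "Application.get_env" => fn -> Application.get_env(:bench_app, target) end
  # "module store"    => fn -> MySpecialStore.get(target) end
})
```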