What are the caching strategies in Phoenix?

In Django there is a cache framework backed by memcached. Rails also puts a lot of emphasis on caching, and even the idea of Russian-doll caching comes from the Rails community.
Elixir/Phoenix programmers often say that we don’t need caching at all in most cases, because the BEAM is faster than Python or Ruby. But there might be some point where we’d need to cache things. Where is that point? And once we reach it, what are the caching strategies and mechanisms?

Thank You!

1 Like

I have a set of resources connected to users through a memberships table. Each membership has a role. To authorize access to a resource, I need to know the user membership’s role on every request. So I put a cache before some Repo calls to cache recently fetched memberships (with a 5 minute TTL). I use ETS tables for that.
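A minimal sketch of that pattern, assuming a hypothetical `MembershipCache` module; the `fun` argument stands in for whatever Repo call you’d normally make, and all names here are illustrative:

```elixir
defmodule MembershipCache do
  # TTL of 5 minutes, matching the setup described above
  @ttl :timer.minutes(5)
  @table :membership_cache

  # Create the ETS table once, e.g. from your application supervisor.
  # `read_concurrency: true` lets many request processes read in parallel.
  def init do
    :ets.new(@table, [:named_table, :public, read_concurrency: true])
  end

  # Return the cached value for {user_id, resource_id}, or run `fun`
  # (the Repo call) and cache its result.
  def fetch(user_id, resource_id, fun) do
    key = {user_id, resource_id}

    case :ets.lookup(@table, key) do
      [{^key, value, inserted_at}] ->
        if System.monotonic_time(:millisecond) - inserted_at < @ttl do
          value
        else
          put(key, fun)
        end

      [] ->
        put(key, fun)
    end
  end

  defp put(key, fun) do
    value = fun.()
    :ets.insert(@table, {key, value, System.monotonic_time(:millisecond)})
    value
  end
end
```

Usage would look something like `MembershipCache.fetch(user.id, resource.id, fn -> Repo.get_by(Membership, user_id: user.id, resource_id: resource.id) end)` — the fallback only runs on a miss or after the TTL expires.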

2 Likes


Have not had to use it yet but considering it’s by @sasajuric it should be top notch

1 Like

or

not quite sure of the actual differences between con_cache and cachex

another caching pattern would be to have a GenServer or Agent hold the state… but that can get more complex than necessary…

1 Like

Using a GenServer or Agent for caching purposes is probably not a good idea. It will likely become a bottleneck. ETS tables, where multiple processes can read from the table asynchronously, would be a better solution.

2 Likes

Depends entirely on how many and what things need to consult the cache. I think it’s an exaggeration to say that in the general case it will become a bottleneck. The obvious counter-example are specialized caches that would serve a subset or even a single user.

Caching is not “This is the right way and that’s how you should do it”. The reason we shouldn’t use these canned solutions (“I’ll just install X, that will solve everything”) is because we can think about what we’re trying to do and make a customized solution that will cover everything we need and not more. That doesn’t involve just doing one thing every time or installing some package that we use to cache everything.

1 Like

The main reason that you hear that is because of how efficient templating is with Phoenix. In Rails for example, when you break down the response time on a page a HUUUUUGE portion of it is spent rendering the view layer.

The other issue is the cost per response for the web server. When you’re working with a server that can’t handle a high volume of concurrent connections, you create a situation where you have to focus on getting in and out as fast as possible.

Elixir and Phoenix address both of these. Cost per connection is almost nothing because of the BEAM and the view layer is compiled into IO Lists.

The short explanation of why that is: all of the strings that make up your template code are immutable values in a linked list. Rather than constructing a full string-based response, this list is sent to the socket and iterated over directly. The full page response never exists on the server because the pieces are sent individually to the socket, meaning all of the memory allocation from creating and destroying each part of the template on every request never happens.
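A toy illustration of the idea (not Phoenix’s actual compiled template code, just the iodata concept it relies on): the “rendered” result is a nested list of binaries, and nothing is ever concatenated unless you explicitly ask for it.

```elixir
# The dynamic part of a "template"
name = "world"

# The rendered result is a nested list of binaries — iodata —
# not one flat string. The static pieces are shared, never copied.
greeting = ["<h1>", ["Hello, ", name], "</h1>"]

# Writers (like a socket) walk the list directly; a flat string is
# only built if you ask for one:
IO.iodata_to_binary(greeting)
# => "<h1>Hello, world</h1>"
```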

Here’s the long explanation though: https://www.bignerdranch.com/blog/elixir-and-io-lists-part-2-io-lists-in-phoenix/

For the volume of connections, even if you have a particularly slow database request, since the server can handle so many connections that one slow page isn’t going to degrade the experience of the others. If that particular page was dealing with a high volume of requests itself, it would be another story though.

The combination of these two means that the qualities that might normally cause a slow down that make you reach for different styles of caching are less necessary. The more you’re able to avoid caching the more you’re able to avoid cache invalidation, which gets to be more complicated as it grows.

If you do need caching, Cachex is a very solid library that will let you wrap ETS, and as a perk, if you make 50 requests at the same time for the same piece of un-cached data, it will only make the request once and send the response back to all 50 requesters when it’s ready.
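That read-through behavior is what Cachex’s `fetch/4` provides: the fallback runs only on a miss, and concurrent misses for the same key are collapsed into a single fallback call. A hedged sketch (cache name and key are illustrative; in a real app the fallback body would be a Repo call):

```elixir
# Start an in-memory cache (typically done under your supervision tree)
{:ok, _pid} = Cachex.start_link(:memberships)

user_id = 1
resource_id = 2

# On a miss, the fallback runs and `{:commit, value}` stores the result;
# subsequent reads of the same key return `{:ok, value}` from the cache.
{:commit, role} =
  Cachex.fetch(:memberships, {user_id, resource_id}, fn _key ->
    # stand-in for e.g. Repo.get_by(Membership, ...)
    {:commit, :admin}
  end)
```

Returning `{:ignore, value}` instead of `{:commit, value}` hands the value back without caching it, which is useful for error results you don’t want to pin in the cache.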

5 Likes

While I agree with your overall message, I am going to have to disagree in the context of this thread (at least for my interpretation of this thread). When someone comes and asks about caching strategies, my thought is that they are not looking for a specialised caching mechanism that may only apply to their use case. My thought is that they are just looking for some kind of “plug and play” caching mechanism that may be good enough for now to continue with their project (which is actually what they linked to for other languages and frameworks).

If, however, someone came in and said that the current libraries do not work for their case because of reasons X, Y and Z, and spelt out what their actual requirements were, then yes, a GenServer or Agent could be a useful caching mechanism depending on those requirements.

1 Like

They can certainly become a bottleneck compared to ETS… just going off on the caching tangent…

btw, seems like Saša recommends checking out cachex first:

5 Likes

They link to that because it’s considerably harder to write a specialized cache with proper behavior in other languages. Someone new to the BEAM will always assume that you need a special library for most of those kinds of things because it’s not usually the kind of thing you build on your own.

Edit: If nothing else, implementing a cache tailored to your needs is a good exercise in order to get familiar with the BEAM. With that mindset, at least, I think the suggestion to install something and follow the Readme is pretty unhelpful overall.

Indeed. Cachex is much more actively maintained as well as more feature rich, so that’s what I’d recommend first.

ConCache happened organically. I needed it for production with exactly those features, and it is the third incarnation of the caching library (the first two were closed source), applying the lessons learned from the previous attempts. It behaved very well for that scenario. I haven’t had a need for such caching since that project (which I left 4 years ago), so due to the lack of real needs and the lack of spare time, ConCache hasn’t progressed since then, save for an occasional contribution.

ConCache is currently not abandoned, although I wonder whether we need two caching solutions in the Elixir community. At first glance, it seems that there’s a lot of overlap between these two libraries. When I find the time I’ll study cachex in more detail, and see if it makes sense to decommission ConCache in favour of cachex. In case anyone else has already compared the two, I’d be happy to hear your thoughts.

14 Likes