Anybody implemented HTTP caching when serving a REST API or GraphQL?

I am curious if there are any Plugs or libraries that help implement HTTP cache headers (REST API Response Caching - When and Where?) for a REST API or GraphQL server?

In my practice we start without any cache whatsoever and only add caching at the resource-loading level (DB or API requests) if needed, often using Cachex.
Another option would be using a CDN to cache assets (but that's outside the Elixir app, obviously).

The rest of the Phoenix application stack is pretty performant.

I am curious about HTTP caching because I assume it is simpler and more efficient than caching with Cachex inside the server.

I have only ever heard good things about Varnish, Squid, nginx, Caddy, and several others.

But IMO the main challenge is having the right caching key; this is why in-app caching is so powerful: you can cache an object by its ID and have any updating logic refresh the cached object, and readers will be none the wiser because they'll fetch it from the cache 99% of the time (the other 1% being the object expiring due to a TTL policy, or a race condition).
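
For illustration, here is a rough Cachex sketch of that pattern; the module, cache, repo and schema names are all made up, and the cache would be started elsewhere (e.g. with `Cachex.start_link(:users_cache)` in the supervision tree):

```elixir
# A minimal sketch of the "cache by ID, write through on update" pattern
# with Cachex. All names are illustrative.
defmodule MyApp.UserCache do
  @cache :users_cache
  @ttl :timer.minutes(10)

  # Readers hit the cache first; on a miss the fallback loads from the DB
  # and commits the result so the next reader gets it from memory.
  def get(id) do
    Cachex.fetch(@cache, id, fn _id ->
      case MyApp.Repo.get(MyApp.User, id) do
        nil -> {:ignore, nil}
        user -> {:commit, user}
      end
    end)
  end

  # Every update path writes the fresh object back under the same key,
  # so readers are none the wiser.
  def put(%MyApp.User{id: id} = user) do
    Cachex.put(@cache, id, user, ttl: @ttl)
  end

  def delete(id), do: Cachex.del(@cache, id)
end
```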

It's trivially easy to set up many web servers to cache pretty much anything, but then it falls on you to always return the right Last-Modified / ETag response headers. Which is also pretty easy.
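
As a rough illustration of the ETag part in a Phoenix controller (module and context names are made up, and it assumes the struct is JSON-encodable with Jason):

```elixir
# Sketch of a conditional GET: compute an ETag for the response body and
# return 304 when the client already has that version. Names are illustrative.
defmodule MyAppWeb.ArticleController do
  use MyAppWeb, :controller

  def show(conn, %{"id" => id}) do
    article = MyApp.Articles.get!(id)
    body = Jason.encode!(article)
    etag = ~s("#{Base.encode16(:crypto.hash(:md5, body), case: :lower)}")

    conn = put_resp_header(conn, "etag", etag)

    if etag in get_req_header(conn, "if-none-match") do
      # Client's cached copy is still valid; skip the body entirely.
      send_resp(conn, 304, "")
    else
      conn
      |> put_resp_content_type("application/json")
      |> send_resp(200, body)
    end
  end
end
```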

All in all, it is a “pick your poison” type of stuff really.


It's trivially easy to set up many web servers to cache pretty much anything, but then it falls on you to always return the right Last-Modified / ETag response headers. Which is also pretty easy.

I'll take this to mean it is so easy that no one has bothered to implement libraries or utilities to facilitate it; just use the HTTP header functions in Phoenix.

I have no doubt some libraries do this but yes, it's just 1-3 lines of code, so a library wouldn't bring much value to the table. My previous teams and I wrangled HTTP headers through Plugs or, very often, through copy-pasted code, many times.
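
If it helps to picture it, the "couple of lines through a Plug" version might look roughly like this (plug name and default max-age are made up):

```elixir
# A tiny module plug that sets a Cache-Control header so clients (or a CDN
# in front) may cache the response. Name and defaults are illustrative.
defmodule MyAppWeb.Plugs.CacheControl do
  import Plug.Conn

  def init(opts), do: Keyword.get(opts, :max_age, 60)

  def call(conn, max_age) do
    put_resp_header(conn, "cache-control", "public, max-age=#{max_age}")
  end
end

# Used in a router pipeline or a controller, e.g.:
#   plug MyAppWeb.Plugs.CacheControl, max_age: 300
```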

Plus:

  • you can test it without running an extra service
  • you can use libs to form BEAM clusters and share the cache (or broadcast cache updates)
  • one less single point of failure

But before you go the extra cache route: make sure you need it!

  • most databases already have built-in (very powerful) caching. Their default settings are always on the safe side memory-wise and not always adjusted to the amount of cheap memory available in machines today. As a result they cache a lot less than your machine can handle. So before anything else: learn how to tweak the database settings.
  • even simple caching layers add code and complexity.
  • I have seen many projects with fancy (but complex) caching layers only to handle 100 requests… per hour.

I actually have :sweat_smile:; I released these libraries a few months ago.

For now there is only a memory backend; the disk backend is a work in progress.

I released it because of a not-so-great experience with Varnish. As @BartOtten put it, it brings:

…plus many other things, like programmatic access to the cache in addition to the implemented libs (e.g., you can cache chunks of a list response so that individual objects are cached as well). It integrates well with the existing tools we use (telemetry, for instance).


To clarify, I am only looking to set HTTP headers so that the client can cache.

Misread, sorry :sweat:

You might be interested in these libs:
