How do I limit the size of an ETS table to avoid a memory crash!?

Welcome to the forum! You pose an interesting question where many solutions are viable. You already thought about some, so let's dissect them a little.

You could have a GenServer check your ETS table every minute. Afraid this GenServer that clears the table may die? No worries, have a Supervisor for it! Simplicity at its finest.
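A minimal sketch of that idea, assuming a table named `:my_cache` with a hypothetical cap of 10,000 entries (both names and the blunt "clear everything" policy are just placeholders, you may prefer a smarter eviction):

```elixir
defmodule CacheJanitor do
  use GenServer

  # Hypothetical names: the :my_cache table and a 10_000-entry cap.
  @table :my_cache
  @max_size 10_000
  @interval :timer.minutes(1)

  def start_link(_opts), do: GenServer.start_link(__MODULE__, nil, name: __MODULE__)

  @impl true
  def init(nil) do
    :ets.new(@table, [:named_table, :public, :set])
    schedule_check()
    {:ok, nil}
  end

  @impl true
  def handle_info(:check, state) do
    # :ets.info(table, :size) is a cheap read of the entry count.
    if :ets.info(@table, :size) > @max_size do
      :ets.delete_all_objects(@table)
    end

    schedule_check()
    {:noreply, state}
  end

  defp schedule_check, do: Process.send_after(self(), :check, @interval)
end
```

Then just put it under a supervisor like any other child: `Supervisor.start_link([CacheJanitor], strategy: :one_for_one)`.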

Worried one minute may be too much? You can use select_count and an inverse exponential backoff algorithm for the polling time (the bigger the table, the quicker you check).
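One way to sketch that inverse backoff (the cap, the interval bounds, and the exponential interpolation are all my assumptions, tune them to taste):

```elixir
defmodule BackoffPoll do
  # Hypothetical tuning knobs.
  @max_size 10_000
  @max_interval :timer.minutes(1)
  @min_interval :timer.seconds(1)

  # The fuller the table, the shorter the wait until the next check:
  # an empty table waits @max_interval, a full one only @min_interval.
  def next_interval(table) do
    # Match spec that counts every object in the table.
    size = :ets.select_count(table, [{:_, [], [true]}])
    fill = min(size / @max_size, 1.0)
    round(@max_interval * :math.pow(@min_interval / @max_interval, fill))
  end
end
```

You would call `next_interval/1` from the janitor's `handle_info/2` and feed the result to `Process.send_after/3` instead of a fixed interval.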

You can use the select_count I mentioned before and it will be reasonably fast, while still being atomic. Alternatively, you can have each operation do an insert and then an update_counter, but then you won't have atomicity.
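A quick sketch of that counter-row approach (the `:counter` key is a made-up convention); the comment marks exactly where the atomicity gap lives:

```elixir
table = :ets.new(:cache, [:set, :public])
:ets.insert(table, {:counter, 0})

put = fn key, value ->
  :ets.insert(table, {key, value})
  # Two separate ETS calls: if the process dies between them, or the
  # key already existed (so the insert replaced rather than added),
  # the counter drifts away from the real size.
  :ets.update_counter(table, :counter, {2, 1})
end

put.(:a, 1)
put.(:b, 2)
:ets.lookup(table, :counter)  # => [counter: 2]
```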

If using select_count is too slow, imagine having to write in two different tables! No way it’s gonna be faster and you still have to sync them! Is sync really important to you? Or is it OK to be a few values away from reality?

I don’t recommend DETS overall. Last year we had to remove DETS from all our systems because we were under such heavy load that our DETS tables were saving corrupt data. They just couldn’t keep up. We moved to ETS and we never had a problem again.

Furthermore, caches are supposed to be fast, and IO access to disk is by far one of the slowest things you can ask your machine to do, together with network requests. So I wouldn’t advise it.


@outlog cachex looks really cool!


Alternatively, a few weeks ago I posted a similar issue. I don’t check for ETS size; instead I check the machine’s available RAM using memsup. You can see the original topic here:
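Roughly, the check looks like this (the 10% threshold is an arbitrary example, and note that the exact keys returned by `get_system_memory_data/0` vary by operating system):

```elixir
# :memsup ships with the :os_mon application, which must be running.
{:ok, _apps} = Application.ensure_all_started(:os_mon)

mem = :memsup.get_system_memory_data()
free = Keyword.get(mem, :free_memory, 0)
total = Keyword.get(mem, :total_memory, 1)

if free / total < 0.10 do
  # Less than 10% of RAM free: stop inserting, or start evicting.
  IO.puts("low memory, time to trim the cache")
end
```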

Hope it helps!
