Response time from Genserver vs hitting a db

Hi, I’m in the process of cleaning up a really messy multi-tenant app, and one of the things I’m doing is making a control panel application that all our application clusters will talk to for customer-specific (i.e. site-specific) config info and service discovery info. This means the web servers will be hitting the control panel on every request, so it should be super quick and able to handle as much traffic as we need. The response, however, will be very small and simple (what’s your db, what modules are you using, are you in maintenance, etc.). I benchmarked the control panel in Elixir hitting a materialized view in MySQL for data and got great results, way faster than doing it in Python (8 times the load, actually). I’m wondering if even better results could be achieved by cutting out the DB and having the control panel app talk to GenServers, one per customer install. Does anyone have experience with what kind of throughput and latency you get from messages to and from GenServers versus hitting a database? (I could see per-customer GenServers being useful in other ways too.)

Thanks!
iain

You should be able to get a significant speed-up using gen_servers. (That said, a MySQL call without any writes is pretty fast regardless, but it still has the overhead of going over the network and through a connection pool, which can be a bottleneck.)

In terms of performance, the limitation of a gen_server is concurrency: a single process handles its messages one at a time. However, if you only have a small amount of data it should be able to handle quite a bit. Otherwise, I’d recommend ets, which is perfect for this kind of job. Using ets or a gen_server will give you very low response overhead in comparison to MySQL. We are talking 10–20 microseconds per request.

As an exercise, I’d recommend that you create a module with a backend-agnostic API. Then put your MySQL fetching behind it. Then add gen_server and ets implementations and try them out. It should be pretty straightforward to add a GenServer or an ets table as a cache in front of your MySQL server. If you need full persistence you can try mnesia or dets as well.
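A minimal sketch of that backend-agnostic layer (all module, app, and config names here are illustrative, not from the thread): callers go through one `fetch/1`, and the backend module is picked from application config, so a MySQL, ets, or GenServer implementation can be swapped in behind it.

```elixir
defmodule SiteConfig do
  # Contract every backend must satisfy.
  @callback fetch(site_id :: term()) :: {:ok, map()} | :error

  # Backend chosen via config; defaults to the ets implementation below.
  defp backend do
    Application.get_env(:my_app, :site_config_backend, SiteConfig.Ets)
  end

  def fetch(site_id), do: backend().fetch(site_id)
end

defmodule SiteConfig.Ets do
  @behaviour SiteConfig
  @table :site_config

  # Create the named, public table once at startup.
  def init do
    :ets.new(@table, [:set, :public, :named_table, read_concurrency: true])
  end

  @impl true
  def fetch(site_id) do
    case :ets.lookup(@table, site_id) do
      [{_, config}] -> {:ok, config}
      [] -> :error
    end
  end

  def put(site_id, config), do: :ets.insert(@table, {site_id, config})
end
```

A MySQL-backed module would implement the same `fetch/1` callback, so swapping backends is a one-line config change.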

Make sure you test with the concurrency you are expecting.
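One rough way to do that (a sketch; the module name and numbers are made up): push a fixed number of requests through `Task.async_stream` at the concurrency you actually expect, and compare total and per-request times across backends.

```elixir
defmodule ConcurrencyCheck do
  # Run `fun` `requests` times with up to `concurrency` concurrent callers,
  # returning total wall time and average per-request latency.
  def run(fun, concurrency \\ 100, requests \\ 10_000) do
    {micros, _} =
      :timer.tc(fn ->
        1..requests
        |> Task.async_stream(fn _ -> fun.() end,
          max_concurrency: concurrency,
          ordered: false
        )
        |> Stream.run()
      end)

    %{total_ms: div(micros, 1000), avg_us: micros / requests}
  end
end
```

Passing in the backend-agnostic lookup (e.g. `fn -> MyApp.fetch_config("site") end`) for each implementation gives a like-for-like comparison under load.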

2 Likes

Sounds like you should use ETS as a read-through cache. Read the data from the db and put it in ETS. ETS is very fast, typically less than 1 microsecond response time.
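A read-through cache along those lines can be sketched in a few lines (module and table names are illustrative): check ETS first, and only fall back to the database loader on a miss, caching the result for next time.

```elixir
defmodule ReadThrough do
  @table :config_cache

  # Create the named, public table once at startup.
  def start do
    :ets.new(@table, [:set, :public, :named_table, read_concurrency: true])
  end

  # Return the cached value, or run `load_fun` (e.g. a Repo/MySQL query)
  # on a miss and cache its result.
  def get(key, load_fun) do
    case :ets.lookup(@table, key) do
      [{^key, value}] ->
        value

      [] ->
        value = load_fun.()
        :ets.insert(@table, {key, value})
        value
    end
  end
end
```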

GenServers are useful when you need to maintain persistent state. For example, if you have a chat app, you can create one server for each connected user. That’s a good place to cache information about the connected user, e.g. you can look up their name from a database when they connect and keep it in the connection.
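That per-user pattern might look roughly like this (a hypothetical sketch, not code from the thread): the name is looked up once in `init/1` and then served from process state for the life of the connection.

```elixir
defmodule UserSession do
  use GenServer

  # One process per connected user; `lookup_name` stands in for a DB query.
  def start_link(user_id, lookup_name) do
    GenServer.start_link(__MODULE__, {user_id, lookup_name})
  end

  def name(pid), do: GenServer.call(pid, :name)

  @impl true
  def init({user_id, lookup_name}) do
    # Look the name up once at connect time and keep it in state.
    {:ok, %{user_id: user_id, name: lookup_name.(user_id)}}
  end

  @impl true
  def handle_call(:name, _from, state), do: {:reply, state.name, state}
end
```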

Using GenServers when they are not a natural part of your application concurrency can cause problems: https://www.cogini.com/blog/avoiding-genserver-bottlenecks/

3 Likes

From my point of view you need both: a GenServer and ets. Here is an example of how I implement a cache in front of the database:

defmodule Database.Cache do
  use GenServer
  require Logger

  @cache :cache_database
  @timer 60_000

  def start_link do
    GenServer.start_link(__MODULE__, %{}, name: __MODULE__)
  end

  def init(_args) do
    # Create a named, public ets table; reads go straight to ets, while
    # this process only tracks expirations in its state.
    @cache = :ets.new(@cache, [:set, :public, :named_table])
    Process.send_after(self(), :cleanup, @timer)

    {:ok, %{}}
  end

  def get(cache_key, fun, ttl) do
    case :ets.lookup(@cache, cache_key) do
      [{_, result} | _] -> result
      [] -> store(cache_key, fun.(), ttl)
    end
  end

  def store(cache_key, data, ttl) do
    :ets.insert(@cache, {cache_key, data})
    expiration = :os.system_time(:seconds) + ttl
    Process.send(__MODULE__, {:ttl, cache_key, expiration}, [:noconnect])

    data
  end

  # Periodically drop expired keys from ets and from the expiration map.
  def handle_info(:cleanup, state) do
    state =
      state
      |> Enum.filter(fn {cache_key, expire} ->
        if expire < :os.system_time(:seconds) do
          Logger.debug "Delete cache_key:#{cache_key} with expire:#{expire} from state"
          :ets.delete(@cache, cache_key)
          false
        else
          true
        end
      end)
      |> Enum.into(%{})

    Process.send_after(self(), :cleanup, @timer)
    {:noreply, state}
  end

  def handle_info({:ttl, cache_key, expire}, state) do
    Logger.debug "Add cache_key:#{cache_key} with expire:#{expire} to state"
    {:noreply, Map.put(state, cache_key, expire)}
  end
end

I put this genserver under a supervisor and inside my repositories I do something like this:

defmodule Database.Repo.Category do
  alias Database.{Repo, Cache}
  alias Database.Schema.Category

  @ttl 3600

  def all do
    Cache.get("categories", fn ->
      Repo.all(Category)
    end, @ttl)
  end
end

So I decorate all functions that would hit the database with the cache. If the key doesn’t exist, it will call the anonymous function and store the result. If you want something more complex you can take a look at git@github.com:sasa1977/con_cache.git

1 Like

Thanks everyone, that gives me stuff to chew on for sure.

I’d recommend Cachex, it has all the features con_cache has and a lot more. As I recall (wish I could remember where) sasa even said that Cachex should generally be used instead (I hope I’m not putting words in their mouth, but I’d swear I remember that…).

2 Likes

@peerreynders Ah! Hah thanks! That was driving me crazy. ^.^;