:gen_tcp or ranch for TCP connections?

Actually, I’m making a TCP server for a game. There aren’t a lot of tutorials out there about this kind of thing; the most relevant ones I have seen are the official Elixir guide example:

This uses gen_tcp and Task to build the server.

An example from David Beck:

This uses ranch to build the server, and shows interesting info about the code.

And Servus:

That uses gen_tcp too, and has an FSM to handle the socket states; a pretty good example, I think…

So I don’t know which solution to choose. Both do the work I want, but I don’t know the advantages of each. Can someone give me a hand choosing a solution, and explain why? (Based on the ranch documentation it uses gen_tcp, but I just want to ask about the differences/advantages.)


PS: The protocol is binary


There might not be many tutorials, but there are quite a few real web servers in Erlang which you can study.

I think elli is conceptually the simplest (and is very short, ~2k LOC with comments). It creates a process for each connection and does all the work inside that process (no message copying). If that’s something your game allows you to do, then it’s probably the simplest way to go.

It’s most definitely not what you need, but here you go anyway (copy-pasted from elli).

# server.ex

defmodule Server do
  use GenServer

  def start_link(opts) do
    case Keyword.get(opts, :name) do
      nil -> GenServer.start_link(__MODULE__, opts)
      name -> GenServer.start_link(__MODULE__, opts, name: name)
    end
  end

  def stop(server) do
    GenServer.call(server, :stop)
  end

  def init(opts) do
    # Use the exit signal from the acceptor processes to know when they exit
    Process.flag(:trap_exit, true)

    callback      = Keyword.get(opts, :callback) # where the game logic might be
    ip_address    = Keyword.get(opts, :ip, {0, 0, 0, 0}) # can also be ipv6
    port          = Keyword.get(opts, :port, 4000)
    min_acceptors = Keyword.get(opts, :min_acceptors, 20)

    accept_timeout  = Keyword.get(opts, :accept_timeout,  10_000)
    request_timeout = Keyword.get(opts, :request_timeout, 60_000)

    options = [
      accept_timeout: accept_timeout,
      request_timeout: request_timeout
    ]

    {:ok, listen_socket} = Server.TCP.listen(port, [
      {:ip, ip_address},
      {:reuseaddr, true},
      {:packet, :raw},
      {:active, false}
    ])

    acceptors = :ets.new(:acceptors, [:private, :set])

    for _ <- 1..min_acceptors do
      add_acceptor(acceptors, listen_socket, options, callback)
    end

    {:ok, %{
      socket: listen_socket,
      acceptors: acceptors,
      open_reqs: 0, # number of live acceptor/connection processes
      options: options,
      callback: callback
    }}
  end

  def handle_call(:stop, _from, state) do
    {:stop, :normal, :ok, state}
  end

  def handle_cast(:accepted, state) do
    {:noreply, add_acceptor(state)}
  end

  def handle_cast(_msg, state) do
    {:noreply, state}
  end

  def handle_info({:EXIT, pid, _reason}, state) do
    {:noreply, remove_acceptor(state, pid)}
  end

  def terminate(_reason, _state), do: :ok

  def code_change(_old_vsn, state, _extra), do: {:ok, state}

  defp remove_acceptor(%{acceptors: acceptors, open_reqs: open_reqs} = state, pid) do
    :ets.delete(acceptors, pid)
    %{state | open_reqs: open_reqs - 1}
  end

  defp add_acceptor(%{acceptors: acceptors, open_reqs: open_reqs,
                      socket: listen_socket, options: options, callback: callback} = state) do
    add_acceptor(acceptors, listen_socket, options, callback)
    %{state | open_reqs: open_reqs + 1}
  end

  defp add_acceptor(acceptors, listen_socket, options, callback) do
    pid = Server.Connection.start_link(self(), listen_socket, options, callback)
    :ets.insert(acceptors, {pid})
  end
end

# server/tcp.ex

defmodule Server.TCP do

  def listen(port, opts), do: :gen_tcp.listen(port, opts)

  def accept(listen_socket, server, timeout) do
    case :gen_tcp.accept(listen_socket, timeout) do
      {:ok, accept_socket} ->
        GenServer.cast(server, :accepted)
        {:ok, accept_socket}

      {:error, reason} ->
        {:error, reason}
    end
  end

  def recv(socket, size, timeout), do: :gen_tcp.recv(socket, size, timeout)

  def send(socket, data), do: :gen_tcp.send(socket, data)

  def close(socket), do: :gen_tcp.close(socket)

  def setopts(socket, opts), do: :inet.setopts(socket, opts)

  def peername(socket), do: :inet.peername(socket)
end

# server/connection.ex

defmodule Server.Connection do

  def start_link(server, listen_socket, options, callback) do
    :proc_lib.spawn_link(__MODULE__, :accept, [server, listen_socket, options, callback])
  end

  @doc """
  From elli:
  Accept on the socket until a client connects. Handles the
  request, then loops if we're using keep alive or chunked
  transfer. If accept doesn't give us a socket within a configurable
  timeout, we loop to allow code upgrades of this module.
  """
  def accept(server, listen_socket, options, callback) do
    case Server.TCP.accept(listen_socket, server, options[:accept_timeout]) do
      {:ok, accept_socket}    -> keepalive_loop(accept_socket, options, callback)
      {:error, :timeout}      -> accept(server, listen_socket, options, callback)
      {:error, :econnaborted} -> accept(server, listen_socket, options, callback)
      {:error, :closed}       -> :ok
      {:error, other}         -> exit({:error, other})
    end
  end

  @doc "Handle multiple requests on the same connection, i.e. keep-alive"
  def keepalive_loop(accept_socket, req_count \\ 0, buffer \\ <<>>, options, callback) do
    case handle_request(accept_socket, buffer, options, callback) do
      {:keepalive, buffer} ->
        keepalive_loop(accept_socket, req_count + 1, buffer, options, callback)

      {:close, _buffer} ->
        Server.TCP.close(accept_socket)
        :ok
    end
  end

  @doc """
  Handle a request that will possibly come on the
  socket. Returns the appropriate connection token (depending on your game
  protocol), :keepalive or :close, plus any buffer containing (parts of)
  the next request.
  """
  def handle_request(socket, buffer, opts, callback) do
    {request, buffer} = get_request(socket, buffer, opts) # depends on the protocol you use
    # execute_callback/2 and close_or_keepalive/1 are left for your game
    # logic to define; they are not part of this sketch.
    response = execute_callback(callback, request)
    handle_response(response, socket, buffer)
  end

  def handle_response(response, socket, buffer) do
    send_response(response, socket)
    {close_or_keepalive(response), buffer}
  end

  def send_response(response, socket) do
    response = ["my protocol ", "\r\n", response]

    case Server.TCP.send(socket, response) do
      :ok -> :ok
      {:error, closed} when closed in [:closed, :enotconn] -> :ok
    end
  end

  @doc "Retrieves the request line"
  def get_request(socket, buffer, options) do
    case Server.Packet.decode_packet(buffer) do
      {:more, _} ->
        case Server.TCP.recv(socket, 0, options[:request_timeout]) do
          {:ok, data} ->
            get_request(socket, buffer <> data, options)

          {:error, _} ->
            # client went away or timed out
            Server.TCP.close(socket)
            exit(:normal)
        end

      {:ok, request, rest} ->
        {request, rest}

      _ ->
        # could not parse the request; give up on this connection
        Server.TCP.close(socket)
        exit(:normal)
    end
  end
end
You might also want to read http://dbeck.github.io/Wrapping-up-my-Elixir-TCP-experiments/, if you haven’t already.


AFAIK, Ranch just spawns up multiple :gen_tcp.accepts and lets you change the number of acceptors at runtime. In the scale of things, it’s going to be a very minor part of your server that will be easy to replace.

I’d personally roll my own, with a single accept worker (unless you know ahead of time that you’ll need to support thousands of connections connecting at the exact same time), just so the full flow is crystal clear.
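To give an idea of what that single-accept-worker approach could look like, here is a minimal sketch. The module name and the echo logic are made up for illustration; the echo stands in for whatever your game protocol does:

```elixir
defmodule SingleAcceptor do
  # One process blocks in :gen_tcp.accept/1 in a loop and spawns a
  # handler process per accepted connection.
  def start(port) do
    {:ok, listen_socket} =
      :gen_tcp.listen(port, [:binary, packet: :raw, active: false, reuseaddr: true])

    acceptor = spawn_link(fn -> accept_loop(listen_socket) end)
    {:ok, listen_socket, acceptor}
  end

  defp accept_loop(listen_socket) do
    {:ok, socket} = :gen_tcp.accept(listen_socket)
    # Hand the socket off to a dedicated process, then go back to accepting.
    pid = spawn(fn -> handle(socket) end)
    :ok = :gen_tcp.controlling_process(socket, pid)
    send(pid, :ready)
    accept_loop(listen_socket)
  end

  defp handle(socket) do
    # Wait until ownership of the socket has been transferred to us.
    receive do
      :ready -> :ok
    end

    loop(socket)
  end

  defp loop(socket) do
    case :gen_tcp.recv(socket, 0) do
      {:ok, data} ->
        :gen_tcp.send(socket, data) # echo back; replace with game logic
        loop(socket)

      {:error, :closed} ->
        :ok
    end
  end
end
```

The window sasajuric mentions below is visible here: between `accept` returning and the loop re-entering `accept`, nobody is accepting.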


I recently built a TCP server with the explicit goal of it having approachable code. I’ve used it for a few things and the source is pretty readable. It might be of interest.

Ranch builds on top of gen_tcp and ssl to provide features such as an acceptor pool, a connection pool, limiting the maximum number of accepted connections, and organizing processes into a proper supervision subtree which can be inserted at an arbitrary place in your own OTP application. For more info, I suggest reading through the user guide.

For any serious usage you’ll want most of these features. If you don’t choose ranch (or something similar), you’ll need to reinvent those wheels yourself. Given that ranch powers cowboy, and is therefore pretty well battle-tested, I think it’s better to use it. FWIW, in our prod system we accept raw TCP connections, and this is powered by ranch. I didn’t even consider going for plain gen_tcp/ssl.
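For a feel of what using ranch looks like, here is a rough sketch of a protocol handler, assuming the ranch 2.x API (`:ranch.handshake/1`, `:ranch.start_listener/5`); the module name and echo logic are made up, and the listener start is commented out since ranch has to be added as a dependency first:

```elixir
defmodule GameProtocol do
  # Implements the :ranch_protocol behaviour: ranch calls start_link/3
  # for every accepted connection.
  def start_link(ref, transport, opts) do
    pid = spawn_link(__MODULE__, :init, [ref, transport, opts])
    {:ok, pid}
  end

  def init(ref, transport, _opts) do
    # handshake/1 completes the accept and returns the connected socket.
    {:ok, socket} = :ranch.handshake(ref)
    loop(socket, transport)
  end

  defp loop(socket, transport) do
    case transport.recv(socket, 0, 60_000) do
      {:ok, data} ->
        transport.send(socket, data) # echo; put your binary protocol here
        loop(socket, transport)

      {:error, _} ->
        transport.close(socket)
    end
  end
end

# With ranch as a dependency, a listener would be started like:
#
# {:ok, _} =
#   :ranch.start_listener(:game, :ranch_tcp,
#                         %{socket_opts: [port: 5555], num_acceptors: 100},
#                         GameProtocol, [])
```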


Thanks guys for all your answers!

@sasajuric I’ll definitely choose ranch for my project, you convinced me :slight_smile:

@Crowdhailer Thanks for sharing! Some pieces of the code are pretty interesting, I’ll keep them in mind when I code ^^

@idi527 Thanks for the resources definitely useful :smiley:

@sasajuric I tried to google some of the stuff you mentioned but failed: too many unrelated links (probably I am using the wrong query).

I don’t really know what you are talking about (or maybe I do, but the names confused me), like acceptor pools and that type of stuff you mentioned.

Could you post some links to the details for learning about it?

Right now I am trying to figure out why this person https://opencode.space/implementing-a-peer-to-peer-network-in-elixir-part-1 went with ranch_tcp instead of gen_tcp, and whether I need ranch or not, because this person https://github.com/robinmonjo/coincoin/blob/master/apps/blockchain/lib/blockchain/p2p/server.ex used gen_tcp just fine.

So I am not sure what I should use for a P2P server.

If you’re unsure what to use, then I’d say it’s better to use ranch, because otherwise you’ll end up reinventing some of its features. Given that ranch is built with a strong focus on efficiently accepting TCP connections, your implementation will likely be inferior :slight_smile:

For example, in order to accept an incoming connection, someone (some process) has to, well, accept it :slight_smile: This is done with :gen_tcp.accept/1. In a simple implementation, this can be done by a single process. A process can accept a connection, then spawn another process which talks to the client, and go back to accept another connection.

This is what the solution from your second link is doing here. It is already a lightweight reinvention of what ranch does for you, and it’s arguably an inferior version. The problem is that in this approach there is a window of time between accepting one connection and being ready to accept the next. This might become a bottleneck under some conditions.

A better approach is to have a pool of processes which are available to accept a connection. Then, as soon as one process accepts, the others are immediately available to accept the next connection. This is precisely what ranch does. By default it starts 100 acceptor processes, but you can easily change that number.
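A toy version of that acceptor-pool idea in plain Elixir, just to illustrate the mechanism (this is not how ranch is implemented internally; the module name and echo logic are made up):

```elixir
defmodule AcceptorPool do
  # N processes all block in :gen_tcp.accept/1 on the same listen socket,
  # which gen_tcp supports; each incoming connection wakes exactly one of
  # them, so accepts can overlap.
  def start(port, num_acceptors) do
    {:ok, listen_socket} =
      :gen_tcp.listen(port, [:binary, active: false, reuseaddr: true])

    for _ <- 1..num_acceptors do
      spawn_link(fn -> acceptor(listen_socket) end)
    end

    {:ok, listen_socket}
  end

  defp acceptor(listen_socket) do
    {:ok, socket} = :gen_tcp.accept(listen_socket)
    handle(socket)          # serve the client in this process...
    acceptor(listen_socket) # ...then go back to accepting
  end

  defp handle(socket) do
    # Echo one message, then close; stands in for real game logic.
    case :gen_tcp.recv(socket, 0, 5_000) do
      {:ok, data} -> :gen_tcp.send(socket, data)
      {:error, _} -> :ok
    end

    :gen_tcp.close(socket)
  end
end
```

While one acceptor is busy in `handle/1`, the remaining `num_acceptors - 1` processes are still sitting in `accept`, which is the whole point of the pool.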

Once you have a pool of acceptors, you need to properly supervise it. This is also something that ranch takes care of for you. Going further, ranch also allows you to set the limit on the maximum number of concurrent connections.

You can of course develop all of this yourself using plain gen_tcp. After all, ranch is implemented in plain Erlang, and uses gen_tcp (and/or ssl) itself. My opinion is that in real production you likely want at least some of these features, and given that ranch is stable and battle-tested, it’s better to go with the proven solution than to reinvent that wheel.