Authentication and authorization demystified by example and experience

For now, storing IPs in sessions is irritating; on my phone my IP changes a lot, even mid-transaction.

Cachex for Elixir is even faster. Redis is useless unless multiple languages are using it.


Where would you store the IPs?

Thanks for mentioning Cachex.

I wouldn’t store IPs at all except for logging purposes. :slightly_smiling_face:

For blocking users that are not logged in, would you use a service?

What do you mean by “not logged in”?

For users that are on the website but have no account, no user session, and no JWT token.

That’s just an Anon user, do whatever you would normally do. :slightly_smiling_face:

To block an anonymous user, you can only block them via the IP, right?

No? Why would you block by IP? If they don’t have a session or a token or anything then they are Anon. That will be handled by your authorization system already.
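A minimal sketch of what “handled by your authorization system already” can look like: a hypothetical RequireAuth plug that halts anonymous requests on protected routes.

defmodule RequireAuth do
  import Plug.Conn

  def init(opts), do: opts

  # an authenticated user was assigned earlier in the pipeline
  def call(%Plug.Conn{assigns: %{user: user}} = conn, _opts) when user not in [nil, false] do
    conn
  end

  # anonymous (no session/token): reject the request
  def call(conn, _opts) do
    conn
    |> send_resp(401, "unauthorized")
    |> halt()
  end
end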

If an anonymous user does some nasty stuff, like a DDoS attack or other attacks, how would you stop it?

Definitely not on the Elixir side! You want to kill that stuff at the load balancer; that’s part of its job!


Thanks


Usually I start from this:

defmodule Repo.Migrations.CreateUsersTable do
  use Ecto.Migration

  def change do
    create table(:users) do
      add :email,              :string, null: false
      add :temp_email,         :string # for changing emails until confirmed
      
      add :password_hash,      :string, null: false
      add :temp_password,      :string # for changing passwords until confirmed
      
      add :name,               :string # or divided into first, last, etc
      add :profile_image,      :string

      add :verification_token, :string # for signup and email verification - this could easily be another table just for verifications
      add :verified,           :boolean, default: false
      add :locked,             :boolean, default: false

      #add :last_login,         :utc_datetime
      #add :logins,             :map, default: %{}, or {:array, :map}, default []

      timestamps(type: :utc_datetime)
    end

    create unique_index(:users, ["(lower(email))"], name: "users_email_index") # lowercased to prevent duplicate emails with different casing; same name as the regular index on the field would have
  end
end

Then I add other relevant bits depending on what is needed.

The schema itself has at least two additional virtual fields if using password-based accounts:

defmodule User do
  use Ecto.Schema

  schema "users" do
    field :email,                 DowncasedString # this would probably be more resilient if done at the db level but if it's only you working with the db through the elixir app and changesets it's enough
    field :temp_email,            DowncasedString
    
    field :password_hash,         :string
    field :temp_password,         :string
    
    field :password,              :string, virtual: true
    field :password_confirmation, :string, virtual: true

    field :verification_token,    :string

    field :name,                  :string
    field :profile_image,         :string

    field :verified,              :boolean
    field :locked,                :boolean


    timestamps(type: :utc_datetime)
  end
end
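
DowncasedString is not a built-in Ecto type; a minimal sketch of such a custom Ecto.Type (assuming all it needs to do is downcase on cast) could look like this:

defmodule DowncasedString do
  use Ecto.Type

  def type, do: :string

  # downcase user input before it reaches the changeset/db
  def cast(value) when is_binary(value), do: {:ok, String.downcase(value)}
  def cast(_), do: :error

  # values coming from the db are plain strings already
  def load(value) when is_binary(value), do: {:ok, value}
  def load(_), do: :error

  # values going to the db must be strings
  def dump(value) when is_binary(value), do: {:ok, value}
  def dump(_), do: :error
end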

The password and password_confirmation virtual fields make changeset handling easier, and the forms/payload can simply include those two fields in them.
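
A minimal registration changeset sketch, for illustration, showing how those virtual fields can be used inside the User module (put_password_hash is a hypothetical helper name; Bcrypt.hash_pwd_salt/1 comes from the bcrypt_elixir library):

  import Ecto.Changeset

  def registration_changeset(user, attrs) do
    user
    |> cast(attrs, [:email, :name, :password, :password_confirmation])
    |> validate_required([:email, :password, :password_confirmation])
    |> validate_confirmation(:password)
    |> put_password_hash()
  end

  # only the hash is persisted; the virtual fields never reach the db
  defp put_password_hash(%Ecto.Changeset{valid?: true, changes: %{password: password}} = changeset) do
    put_change(changeset, :password_hash, Bcrypt.hash_pwd_salt(password))
  end

  defp put_password_hash(changeset), do: changeset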

For managing sessions I’ve played with mnesia, and this works for having multiple nodes connected and sharing a store of valid and invalid tokens. I’m not sure if it scales, though, or if it’s the best approach, but I wanted to do some testing with it. This is for a game; usually I wouldn’t implement it for more normal web apps.

defmodule SessionsManager do
  use GenServer
  require Logger

  alias Core.Runtime

  @cleanup_interval 120_000

  def start_link(_) do
    GenServer.start_link(__MODULE__, %{}, name: __MODULE__)
  end

  def init(_) do
    Process.send_after(self(), :cleanup, @cleanup_interval)
    {:ok, %{}}
  end

  def add_sibling_tokens(login_tuple) do
    GenServer.cast(__MODULE__, {:add_sibling_tokens, login_tuple})
  end

  def add_invalid_tokens(token, socket_token) do
    GenServer.cast(__MODULE__, {:add_invalid_tokens, token, socket_token})
  end

  def handle_info(:cleanup, state) do
    Task.start(&clean_up/0)
    Process.send_after(self(), :cleanup, @cleanup_interval)
    {:noreply, state}
  end


  def handle_cast({:add_invalid_tokens, token, socket_token}, state) do
    invalidate_tokens(token, socket_token)
    {:noreply, state}
  end

  def handle_cast({:add_sibling_tokens, {%{id: id}, token, socket_token}}, state) do
    fun = fn() ->
      now = :erlang.system_time(:second)
      :mnesia.write({:sessions_siblings, id, {token, socket_token}, now})
      :mnesia.write({:sessions_siblings, token, id, now})
      :mnesia.write({:sessions_siblings, socket_token, id, now})
    end

    case :mnesia.transaction(fun) do
      {:atomic, _} -> :ok
      {:aborted, reason} ->
        Logger.error("SessionsManager Error adding sibling tokens: #{inspect reason}")
    end
    {:noreply, state}
  end

  @spec invalidate_tokens(String.t(), String.t()) :: :ok
  def invalidate_tokens(token, socket_token) do
    fun = fn() ->
      now = :erlang.system_time(:second)
      :mnesia.write({:sessions_invalid, token, now})
      :mnesia.write({:sessions_invalid, socket_token, now})
      # rows keyed by token look like {:sessions_siblings, token, id, inserted_at}
      case :mnesia.wread({:sessions_siblings, token}) do
        [] -> false
        [{_, _, id, _}] ->
          :mnesia.delete({:sessions_siblings, id})
          :mnesia.delete({:sessions_siblings, token})
          :mnesia.delete({:sessions_siblings, socket_token})
      end
    end

    case :mnesia.transaction(fun) do
      {:atomic, _} -> :ok
      {:aborted, reason} ->
        Logger.error("SessionsManager Error invalidating tokens: #{inspect reason}")
    end
  end

  def clean_up do
    case GenServer.whereis(SessionsManager) do
      nil -> :ok

      _pid ->
        fun = fn() ->
          case :mnesia.select(:sessions_invalid, match_spec(:invalid)) do
            [] -> :ok
            
            tokens ->
                Enum.each(:lists.flatten(tokens), fn(token) ->
                  :mnesia.delete({:sessions_invalid, token})
                end)
          end
          case :mnesia.select(:sessions_siblings, match_spec(:siblings)) do
            [] -> :ok
              
            tokens ->
                Enum.each(tokens, fn([id, {token, socket_token}]) ->
                  :mnesia.delete({:sessions_siblings, id})
                  :mnesia.delete({:sessions_siblings, token})
                  :mnesia.delete({:sessions_siblings, socket_token})
                end)
          end
        end

        case :mnesia.transaction(fun) do
          {:atomic, _} -> :ok
          {:aborted, reason} ->
            Logger.error("SessionsManager Error cleaning up tokens: #{inspect reason}")
        end
    end
  end

  # selects tokens from {:sessions_invalid, token, inserted_at} rows whose
  # timestamp is older than the token validity window
  def match_spec(:invalid) do
    token_life = Runtime.token_validity()
    ttl_threshold = :erlang.system_time(:second) - token_life 
    [{{:_, :"$2", :"$3"}, [{:"=<", :"$3", ttl_threshold}], [:"$2"]}]
  end

  # selects [id, {token, socket_token}] from the id-keyed rows of :sessions_siblings;
  # the {:is_tuple, :"$2"} guard skips the reverse-lookup rows keyed by token
  def match_spec(:siblings) do
    token_life = Runtime.token_validity() + 20
    ttl_threshold = :erlang.system_time(:second) - token_life
    [{{:_, :"$1", :"$2", :"$3"}, [{:"=<", :"$3", ttl_threshold}, {:is_tuple, :"$2"}], [[:"$1", :"$2"]]}]
  end
    
end

And I have a basic plug for it; in this case it’s more convoluted because of the session token housekeeping, usually I just use Phoenix.Token and an authorization header.

defmodule Authorize.Plug do
  import Plug.Conn, only: [get_req_header: 2, assign: 3]

  alias Authorize.Helpers
  alias Core.Runtime
  
  def init(opts) do
    opts
  end

  @spec call(%Plug.Conn{}, any()) :: %Plug.Conn{}
  def call(conn, _) do
    case get_req_header(conn, "authorization") do
      [] -> assign(conn, :user, false)
      ["Bearer " <> header] ->
        case valid_token(header) && Phoenix.Token.verify(conn, Helpers.salt(), header, max_age: Runtime.token_validity()) do
          {:ok, user} -> assign_valid_tokens(conn, user)
          {:error, code} -> assign_valid_tokens(conn, code)
          false -> assign_valid_tokens(conn, :invalid)
        end
      # anything else (malformed header) is treated as anonymous
      _ -> assign(conn, :user, false)
    end
  end

  @spec assign_valid_tokens(%Plug.Conn{}, {:admin | :player, integer(), String.t()} | :expired | :invalid | false) :: %Plug.Conn{}
  def assign_valid_tokens(conn, {type, id, username}) do
    case get_sibling_tokens(id) do
      :invalid -> assign_valid_tokens(conn, :invalid)
      {token, socket_token} ->
        conn
        |> assign(:user, {type, id, username})
        |> assign(:token, token)
        |> assign(:socket_token, socket_token)
    end
  end

  def assign_valid_tokens(conn, reason) do
    conn |> assign(:user, reason) |> assign(:token, reason) |> assign(:socket_token, reason)
  end

  @spec valid_token(String.t()) :: boolean()
  def valid_token(token) do
    case :mnesia.dirty_read({:sessions_invalid, token}) do
      [] -> true
      _ -> false
    end
  end

  @spec get_sibling_tokens(integer()) :: :invalid | {String.t(), String.t()}
  def get_sibling_tokens(id) do
    case :mnesia.dirty_read({:sessions_siblings, id}) do
      [] -> :invalid
      [{_, _, {token, socket_token}, _}] -> {token, socket_token}
    end
  end

end

Then both login and logout make calls to the sessions manager GenServer. Tokens that a user hasn’t explicitly logged out of through the interface still end up being rejected once their TTL expires.
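
Roughly, and just as an illustration (hypothetical controller-level helpers, not the actual game code; user.name is assumed here), the login/logout side could look like this:

  def login(conn, user) do
    token = Phoenix.Token.sign(conn, Helpers.salt(), {:player, user.id, user.name})
    socket_token = Phoenix.Token.sign(conn, Helpers.salt(), {:socket, user.id})

    # register the pair so the plug can look them up by user id
    SessionsManager.add_sibling_tokens({user, token, socket_token})
    {token, socket_token}
  end

  def logout(token, socket_token) do
    # mark both tokens invalid; they get cleaned out of mnesia once their TTL passes
    SessionsManager.add_invalid_tokens(token, socket_token)
  end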

For the mnesia part I now always create a “bootstrapper” app in the umbrella. It is the only app started on the release; it sets up everything and connects the nodes, and only then starts the remaining parts of the actual “application” (which are set to be :loaded, but not started, in the release definition). In fact I’ve started doing that even when not using mnesia, as it allows controlling the startup flow.
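
For reference, a rough sketch (not the actual bootstrap code) of creating the two mnesia tables the SessionsManager expects, on a set of already-connected nodes:

  def setup_mnesia(nodes) do
    # ram-only tables; add :mnesia.create_schema/1 and disc_copies if persistence is needed
    :rpc.multicall(nodes, :mnesia, :start, [])

    # rows look like {:sessions_invalid, token, inserted_at}
    :mnesia.create_table(:sessions_invalid,
      attributes: [:token, :inserted_at],
      ram_copies: nodes
    )

    # rows look like {:sessions_siblings, key, value, inserted_at},
    # keyed either by user id or by token
    :mnesia.create_table(:sessions_siblings,
      attributes: [:key, :value, :inserted_at],
      ram_copies: nodes
    )
  end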

As I said, I’m not sure how production-ready the token storing is, but everything else has pretty much worked fine and is quite simple.

(this was copy-pasted and slightly changed, so some things might not be 100% correct)


Thanks for bringing your experience to this discussion, and also for the use of GenServers.

But if I may ask, why not use Agents in this case for the state of the token expirations?

I usually default to GenServers; to be honest I don’t use Agents that much, or gen_statem unless it’s going to have a public “server”-like interface but then run complex flows behind it. Agents are designed to work on their own state: you provide a function that takes the state and returns the new one, and in this case the state is in mnesia. At the current stage of the code I posted, it wouldn’t even need to be encapsulated in a process; it could just be plain function calls, since mnesia deals with locks, etc. But I was planning to add some more functionality to it.
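
For contrast, a minimal Agent sketch: the Agent owns its own state and you pass it functions over that state, whereas in the SessionsManager above the real state lives in mnesia, so the process wrapper adds little:

  {:ok, _pid} = Agent.start_link(fn -> %{} end, name: :token_expirations)

  # update: the function receives the current state and returns the new one
  Agent.update(:token_expirations, fn state ->
    Map.put(state, "some-token", :erlang.system_time(:second))
  end)

  # read: the function receives the state and returns whatever you need from it
  Agent.get(:token_expirations, &Map.get(&1, "some-token"))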


Thank you for the explanation and also for sharing your code with me.

Will you make it an auth library in the future?


Hmm, there are already a few and they seem to work fine. Plus, authorization is a bit of a custom thing and the best approach depends on a lot of factors; with Phoenix tokens, plug parsers, the bcrypt lib, and Ecto, I found it straightforward to implement it once and then re-use it with the tweaks I need. Writing a library that handles all those tweaks is considerably more complex though. If there’s interest I would gladly write a blog post on it, bearing in mind that the way I do it is mostly used in the context of an Elixir backend talking to a decoupled frontend, so the frontend just needs to store the necessary token(s) (and share them across browser tabs) and include an “authorization” header in all requests; that way the backend can easily see whether the user is logged in.


I would most definitely be interested in a blog post on this and on any other:

  • OTP
  • GenServer implementation in a Phoenix application.

My current interest is because of the following topics and technologies:

  • Absinthe

  • Dataloader

  • PWA

  • SSR

  • SEO

So if you could make a detailed guide on how the back-end and front-end exchange data in your auth example, I would really appreciate it.

Thanks for the follow-up on my comment.