Best practices to avoid Blind SQL Injection

Recently we passed a security audit for a new customer that used some security analysis tools and they pointed out some Blind SQL Injection issues (at least in theory).

What do you use to mitigate those risks in a Phoenix application?
All data goes through an API that receives params like this:

  def kpi_data(conn, params) do
    # the "subdomain" and "target" param keys are assumed from context
    subdomain = params["subdomain"]

    target =
      params["target"]
      |> String.to_atom()

    items = apply(KPI, target, [subdomain, params])
    API.render_items(conn, items)
  end

We do evaluate and discard any extra params, and we try to limit the data in some parameters, but others are more open in nature, or at least can receive a lot of entries (like items, or domains).
Our controller and functions clean them and load them into the WHERE clause of a SELECT statement.
If any param is utterly wrong, it throws an error and sends an HTTP 500 status (and we follow “let it crash…”).

PS: We used Sobelow and it doesn’t show any errors on those functions.

What are the best practices to avoid those risks?
Is there any other common functions / apps do you recommend?

Perhaps the “blind” here has meaning I haven’t heard before but SQL injection is not possible with the Ecto querying API. Ecto always uses parameterized queries, so values never end up in the SQL string at all.
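To make that concrete, here is a minimal sketch of a parameterized Ecto query (the schema-less `"items"` table, the column names, and the module name are illustrative, not from the original app). Values bound with the `^` pin operator are sent as bind parameters, never spliced into the SQL text:

```elixir
defmodule KPIQueries do
  import Ecto.Query

  # ^subdomain becomes a bind parameter ($1 in Postgres); the
  # generated SQL never contains the user-supplied string itself.
  def items_query(subdomain) do
    from(i in "items",
      where: i.subdomain == ^subdomain,
      select: i.name
    )
  end
end

# Even a hostile value is just data here, never SQL text:
# Repo.all(KPIQueries.items_query("x' OR '1'='1"))
```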

You do have an issue with |> String.to_atom() since this lets a hostile API user use arbitrary amounts of memory, so that’s not great, but I’m not clear on how there’s an SQL issue.

EDIT: After reading it doesn’t look like you have to worry about this at all. These kinds of attacks are only possible if you interpolate user values into the SQL string, which Ecto never does.


You might want to use String.to_existing_atom/1 or whitelist what params["target"] can be.
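A quick sketch of the difference (the unknown string below is assumed never to have been created as an atom):

```elixir
# :ok already exists, so no new atom is allocated:
String.to_existing_atom("ok")
# => :ok

# An unseen string raises instead of growing the atom table:
try do
  String.to_existing_atom("string_never_seen_before_xyz")
rescue
  ArgumentError -> :rejected
end
# => :rejected
```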


Thanks @idi527 and @benwilson512.

I don’t use Ecto queries yet — this is a different topic, but the learning curve was very steep for me at the time (I’ve created some extra functions inside Repo and use a lot of Postgres JSONB functions).

So I’ve improved overall security by enhancing these parts:


Added the NGINX 444 (“No Response”) HTTP status code:

  config :plug, :statuses, %{
    444 => "No Response"
  }

checked allowed params

  def allowed(allowed_params, params) do
    allowed_params
    |> Enum.map(fn x -> {x, Map.get(params, x)} end)
    |> Enum.filter(fn {_x, y} -> y != nil end)
    |> Enum.into(%{})
  end
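As an aside, the standard library’s `Map.take/2` does essentially the same whitelisting in one call (the example map is made up); the one difference is that it keeps a whitelisted key even when its value is `nil`:

```elixir
params = %{"domain" => "example.com", "injected" => "1; DROP TABLE users"}

# Only keys present in both the map and the whitelist survive:
Map.take(params, ["domain", "items"])
# => %{"domain" => "example.com"}
```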

check allowed params values

  def params_values(params) when is_map(params) do
    # placeholder pattern; the actual regex was elided in this post
    regex = ~r/^[\w\s.,-]+$/

    params
    |> Enum.map(fn {_x, y} -> Regex.match?(regex, y) end)
    |> Enum.member?(false)
    |> Kernel.not()
  end
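The same check reads a bit more directly with `Enum.all?/2`; the regex below is a stand-in, since the original pattern isn’t shown:

```elixir
defmodule ParamCheck do
  # Stand-in pattern (letters, digits, underscore, dot, dash);
  # the real application would use its own stricter regex.
  def params_values(params) when is_map(params) do
    safe = ~r/^[\w.-]+$/

    Enum.all?(params, fn {_key, value} ->
      is_binary(value) and Regex.match?(safe, value)
    end)
  end
end

ParamCheck.params_values(%{"domain" => "example.com"})
# => true

ParamCheck.params_values(%{"domain" => "a' OR '1'='1"})
# => false
```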

try execute query

  try do
    items = apply(KPI, target, [subdomain, params])
    API.render_items(conn, items)
  rescue
    # the original rescue body was elided; render_halt/1 below fits here
    _ -> render_halt(conn)
  end

Render 444 response

  def render_halt(conn) do
    conn
    |> Conn.put_status(444)
    |> render_error(nil)
  end

  def render_error(conn, status) do
    Changes.api_audit(conn, status)
    |> put_view(MyApp.ErrorsView)
    |> render("error.json", status: status)
    |> halt()
  end

I confess I’m a little confused about what you’re concerned about here. You’re talking about SQL attacks, but you aren’t showing any SQL; you’re showing HTTP parameter validation, which, while useful, is the LEAST reliable way to protect yourself from SQL attacks. If you aren’t using Ecto, how are you running SQL queries?

Also you still have a memory leak here that a hostile party could use to make your system use arbitrary amounts of memory:

    target =
      params["target"]
      |> String.to_atom()

I don’t use Ecto syntax for selects, just straight SQL queries like this one:

  query = """
    ...
    FROM #{table1} AS t1
    ...
    #{AppCommon.filter_events("t1", "t3", "t4")}
    """

  _results =
    query
    |> Repo.execute_all([])

  def execute(sql, params) do
    SQL.query!(__MODULE__, sql, params)
  end

Also you still have a memory leak here that a hostile party could use to make your system use arbitrary amounts of memory:

    target =
      params["target"]
      |> String.to_atom()

Only allowed params are converted to atoms:

  def kpi_allowed_target(params) do
    # example whitelist; the actual list was elided in this post
    allowed_targets = ["kpi_a", "kpi_b"]
    target = params["target"]

    allowed_targets
    |> Enum.member?(target)
    |> case do
      true ->
        String.to_atom(target)

      false ->
        # fallback elided in the original
        nil
    end
  end

As a minor note, the execute function you have there seems equivalent to the built-in query! function, i.e. you can just do Repo.query!(sql, parameters).

What you have is either entirely fine or super dangerous depending on where the interpolation is coming from. If you make sure to NEVER interpolate user data then you’re safe. If you interpolate any user data you’re vulnerable to attack.

This is why the Ecto query DSL is nice, it can make sure at compile time that it’s impossible to interpolate user values.
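To make the distinction concrete, here is a sketch with plain SQL strings (the table and column names are illustrative):

```elixir
domain = "x' OR '1'='1"

# UNSAFE: interpolation makes the payload part of the SQL text,
# e.g. when passed to Repo.query!/1:
unsafe_sql = "SELECT * FROM events WHERE domain = '#{domain}'"
IO.puts(unsafe_sql)
# SELECT * FROM events WHERE domain = 'x' OR '1'='1'

# SAFE: the statement is a fixed string and the value travels
# separately as bind parameter $1:
safe_sql = "SELECT * FROM events WHERE domain = $1"
# Repo.query!(safe_sql, [domain])
```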


@benwilson512 - could you unpack it a little more?

Atoms are not GC’d, and therefore the BEAM puts a limit on how many are allowed to exist. This limit is 2**20 (1,048,576) by default.

Therefore using String.to_atom/1 at runtime is considered a flaw.

Instead String.to_existing_atom/1 is preferred or even better an explicit conversion function built from a whitelist.
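Such an explicit conversion function can be as simple as one clause per allowed value (the target names and module here are made up):

```elixir
defmodule Targets do
  # One clause per allowed value; unknown input maps to nil,
  # so user data can never allocate a new atom.
  def to_target("revenue"), do: :revenue
  def to_target("visits"), do: :visits
  def to_target(_other), do: nil
end

Targets.to_target("revenue")
# => :revenue

Targets.to_target("'; DROP TABLE users; --")
# => nil
```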


I understand that in the original context, that would theoretically allow a third party to “bombard” the endpoint with random strings until the limit is reached, right? What happens then? Does the machine handle it somehow “gracefully”, or does it just quit with a kind of “LimitReachedError”?

The BEAM will do a hard exit.

Everything will crash. There is no chance to recover or to gracefully shut down anything.

From the view of your application it’s not different from a kill -9.

I see. Out of curiosity - is there a way to monitor the usage of “atoms quota” from within the application?


If you haven’t used Erlang before, its syntax is that identifiers starting with an uppercase letter are variables, while those starting with a lowercase letter are atoms. But anyway, the count of atoms in the table (and its size) are buried in an obscure string, rather than documented here:

(I mention the erlang system_info just to point you to the fact that you can easily get at a huge amount of runtime stats…)
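For the monitoring question specifically: since OTP 20 the runtime also exposes the numbers directly, so you don’t have to parse that string:

```elixir
# Current number of atoms and the configured ceiling
# (1_048_576 by default):
count = :erlang.system_info(:atom_count)
limit = :erlang.system_info(:atom_limit)

# A simple health check: warn once the table is 90% full.
if count / limit > 0.9 do
  IO.puts("atom table nearly full: #{count}/#{limit}")
end
```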


@sribe - Truly valuable responses and links, thank you.