Getting Primary Key as NULL after creating a new user?

I was working with Absinthe on my new side project, but I am getting the primary key back as null when I register a user. When I return all users with another query, the id is there. I have no idea how to fix this.

This is my Absinthe schema:

defmodule FlickrWeb.Schema do
  use Absinthe.Schema

  alias FlickrWeb.Resolvers
  import_types(FlickrWeb.Schema.Types.Usertype)

  ## Reddit Part

  query do
    @desc "Just for testing"
    field :test, :string do
      resolve(fn _args, _resolution -> {:ok, "Test passed"} end)
    end

    @desc "All users"
    field :allusers, list_of(:user_type) do
      # resolver function names assumed; they were not in the original post
      resolve(&Resolvers.User.all_users/2)
    end
  end

  mutation do
    @desc "Register a new User"
    field :registeruser, :user_type do
      arg(:input, non_null(:register_input_type))
      resolve(&Resolvers.User.register_user/2)
    end
  end
end

These are my types:

defmodule FlickrWeb.Schema.Types.Usertype do
  use Absinthe.Schema.Notation

  object :user_type do
    field :id, :id
    field :first_name, :string
    field :last_name, :string
    field :email, :string
    field :username, :string
  end

  input_object :user_input_type do
    field :first_name, non_null(:string)
    field :last_name, non_null(:string)
    field :email, non_null(:string)
    field :password, non_null(:string)
    field :username, non_null(:string)
  end

  input_object :register_input_type do
    field :first_name, non_null(:string)
    field :last_name, non_null(:string)
    field :email, non_null(:string)
    field :password, non_null(:string)
    field :username, non_null(:string)
  end
end


And this is my User schema:

defmodule Flickr.Schemas.User do
  use Ecto.Schema
  import Ecto.Changeset

  @type t :: %__MODULE__{
          id: Ecto.UUID.t(),
          first_name: String.t(),
          last_name: String.t(),
          password: String.t(),
          email: String.t(),
          username: String.t(),
          avatarUrl: String.t()
        }

  @primary_key {:id, :binary_id, []}
  schema "users" do
    field :first_name, :string
    field :last_name, :string
    field :email, :string
    field :username, :string
    field :password, :string
    field :avatarUrl, :string, default: ""
  end

  def insert_changeset(user, attrs) do
    user
    |> cast(attrs, [:first_name, :last_name, :email, :username, :password, :avatarUrl])
    |> validate_required([:first_name, :last_name, :email, :username, :password, :avatarUrl])
    |> update_change(:email, &String.downcase(&1))
    |> unique_constraint(:email)
    |> unique_constraint(:username)
    # the regex was truncated in the original post; a minimal "contains @" check assumed
    |> validate_format(:email, ~r/@/)
    |> validate_length(:username, min: 6, max: 30)
    |> validate_length(:password, min: 6, max: 20)
    |> hash_password()
  end

  defp hash_password(changeset) do
    case changeset do
      %Ecto.Changeset{valid?: true, changes: %{password: password}} ->
        put_change(changeset, :password, Pbkdf2.hash_pwd_salt(password))

      _ ->
        changeset
    end
  end
end

This is my create-new-user function:

defmodule Flickr.Mutations.User do
  import Ecto.Query, warn: false

  alias Flickr.Repo
  alias Flickr.Schemas.User

  def create_new_user(attrs \\ %{}) do
    %User{}
    |> User.insert_changeset(attrs)
    |> Repo.insert()
  end
end


What database are you using? What does the database’s SQL shell show?

I am using PostgreSQL. The id is generated automatically and is visible in the shell, but it is not returned after the registration mutation; when I use a separate query I do get the id. Not sure what is going wrong.

Is the ID returned when you run the create_new_user function in iex?

Seems to be the case. Here the id is returned as nil, but it is not nil in my users table.

[rewriting my original response since it was originally poorly written]

Looking things over, it looks like your database is generating the id value (which is fine, I do it that way myself). Assuming that is true, try Repo.insert(returning: true) rather than Repo.insert().

When you insert a value into the database, the database typically doesn’t return anything to the application aside from some simple diagnostics (like how many rows were inserted). If you look at the INSERT query in your screenshot, you can see that 1) the insert isn’t providing the id value, so it has to be generated in the database, and 2) there is no RETURNING clause at the end of the INSERT, so only those diagnostic values are sent back to the application, not any of the inserted data. That RETURNING clause is the only way the application could learn the new id value.
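Applied to the create_new_user function from the question, that suggestion is a one-line change (a sketch; module and function names as in the thread):

```elixir
# In Flickr.Mutations.User — ask Postgres to send the
# database-generated id (and other defaults) back to Ecto.
def create_new_user(attrs \\ %{}) do
  %User{}
  |> User.insert_changeset(attrs)
  # `returning: true` appends a RETURNING clause to the INSERT,
  # so the struct Ecto returns carries the generated id.
  |> Repo.insert(returning: true)
end
```

The :returning option is PostgreSQL-adapter specific; it can also take a list of fields if you only need some of them back.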

Thanks a lot, it’s working now.
I just had a question, though: I have used this same method in a couple of other projects and it worked; I never faced any issues, and I don’t know why it didn’t work for this one.
Do you have any idea what the real problem could be?

How are you generating your ids? It seems that you have disabled the autogenerate option.

e.g. @primary_key {:id, :binary_id, autogenerate: true}


This is a very useful article about some unexpected id behaviour.


As @hlx points out, somewhere along the way in your projects you likely have a difference in the autogenerate settings in this project vs. your other projects.

In your other projects where you’re not passing returning: true, you may still have database table definitions that default the id to a generated UUID (or other type) value, but those are only defaults. If Ecto is configured to generate the id value itself, the database will never try to generate its own default, since you’ve provided a value through the application… and Ecto already has that id value, obviating the need for returning: true in the insert/2 call.
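The two configurations can be sketched side by side (field name and type from the thread; read_after_writes is a field option I’m adding as the per-field alternative to passing returning: true on every insert):

```elixir
# Option 1: Ecto generates the UUID itself before the INSERT,
# so it already knows the id and nothing needs to come back.
@primary_key {:id, :binary_id, autogenerate: true}

# Option 2: the database generates the id; tell Ecto to read the
# column back after writes (roughly equivalent to passing
# `returning: true` to Repo.insert/2 on each call).
@primary_key {:id, :binary_id, read_after_writes: true}
```

Either option works; the key is that the application and the database must agree on who generates the id.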

Just put this directly above each schema ... statement in your project and you should be fine:

  @primary_key {:id, :binary_id, autogenerate: true}
  @foreign_key_type :binary_id
  schema "users" do
    # ...
  end
Just learned that PostgreSQL can auto-generate UUID primary keys now. Does a UUID primary key serve any purpose other than avoiding conflicts in case you scale to multiple databases?

In case the ID is somewhere public, it avoids the sequence being predictable. It also hides how many records already exist.
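To make the unpredictability point concrete, here is a minimal version-4 UUID generator using only :crypto — a sketch; in practice Ecto.UUID.generate/0 does this for you. 122 of the 128 bits are random, so consecutive ids share nothing and cannot be enumerated the way sequential integers can:

```elixir
defmodule UUIDSketch do
  # Builds a random (version 4) UUID string: 16 random bytes with the
  # version nibble forced to 4 and the variant bits forced to binary 10.
  def generate do
    <<a::48, _::4, b::12, _::2, c::62>> = :crypto.strong_rand_bytes(16)
    <<u::128>> = <<a::48, 4::4, b::12, 2::2, c::62>>

    u
    |> Integer.to_string(16)
    |> String.downcase()
    |> String.pad_leading(32, "0")
    |> dashed()
  end

  # Inserts dashes in the canonical 8-4-4-4-12 layout.
  defp dashed(<<p1::binary-size(8), p2::binary-size(4), p3::binary-size(4),
                p4::binary-size(4), p5::binary-size(12)>>) do
    Enum.join([p1, p2, p3, p4, p5], "-")
  end
end
```
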

I do this by using hashids on an integer primary id. Cheaper to implement.
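A minimal sketch of that approach, assuming the hashids hex package (salt and minimum length here are illustrative):

```elixir
# The integer primary key stays in the database; only the
# URL-facing representation is obfuscated.
h = Hashids.new(salt: "my app secret", min_len: 8)

# Encode the database id into an opaque slug for URLs,
# and decode it back when handling a request.
slug = h |> Hashids.encode(1234) |> to_string()
{:ok, [1234]} = Hashids.decode(h, slug)
```

As noted further down the thread, this only obscures the sequence; it is obfuscation, not real encryption.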


Yep, PostgreSQL can default a column value either to a literal value or to the output of any function that returns the correct type; of course, in the case of a primary key or other unique column, the return value of the function must be unique or you’ll get some flavor of unique violation. The commonly used, though now old-fashioned, serial/bigserial column types are really pseudo-types/syntactic sugar: they set the column type to integer or bigint, create a sequence, and set the default value to the return value of the nextval(&lt;sequence regclass&gt;) function call.
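Roughly, that expansion looks like this (a sketch in SQL; table and sequence names are illustrative):

```sql
-- "CREATE TABLE t (id serial PRIMARY KEY);" is roughly sugar for:
CREATE SEQUENCE t_id_seq;
CREATE TABLE t (
    id integer PRIMARY KEY DEFAULT nextval('t_id_seq')
);
ALTER SEQUENCE t_id_seq OWNED BY t.id;

-- A UUID key instead defaults to a generating function
-- (gen_random_uuid() is built in since PostgreSQL 13):
CREATE TABLE u (
    id uuid PRIMARY KEY DEFAULT gen_random_uuid()
);
```
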

Assuming a surrogate key as primary key, I have a number of reasons for preferring UUIDs over incrementing integers. One is the case you cite, but consider a case where you’re writing SQL by hand. I can’t tell you how many bugs I’ve seen of the type:

CREATE TABLE pohead
    (pohead_id bigserial PRIMARY KEY,
     <columns>);

CREATE TABLE podetail
    (podetail_id bigserial PRIMARY KEY,
     podetail_pohead_id bigint REFERENCES pohead (pohead_id),
     <columns>);

SELECT <columns>
FROM pohead JOIN podetail ON pohead_id = podetail_id;  -- bug: should be podetail_pohead_id

When using integers for primary keys, the bug above may actually produce a result… not the right result, but a result. When you do get a result from the bug above, it may be just close enough to the correct answer that the bug can slip through quite a few operations, disconnected in time, making it all that much harder to track down. With UUIDs that class of bug becomes pretty obvious pretty early, because the process where the bug sits simply won’t produce any result where a result is expected.

That is fair; however, in practice:

  • for commonly used joins, I just create a view ahead of time and use the view in my code
  • in the context of Elixir, Ecto can effectively prevent this kind of bug.

I think UUID is an eye sore but it could be just me.

FWIW I agree, but f.ex. @Exadra37 argued at length in favour of UUIDs, mostly citing enumeration attacks: if the attacker knows you have /profiles/1/edit URLs, they will just make a script looping over all integers and attempt to find an authorization bug in your app that would allow them to edit a user profile.

And even if that fails every time for them they can DoS your app that way.

I’m semi-apathetic to the former problem and sympathetic to the latter. But I’d still put flood protection in front of my app and some ban rules.

50/50 though. I can clearly see the argument in favor of opaque and unpredictable IDs but I’m still skeptical how prevalent the prevented problem is in practice out there. Can’t hurt to protect yourself just in case, I suppose.


I’ve seen this in practice. Our company was sourcing human resources management systems and one of the new/hot/disruptive/etc. SaaS companies was on the table. We noticed URLs of exactly that sort and also noticed the IDs sequentially incrementing. We trivially demonstrated that you could get data you weren’t supposed to get just by manually manipulating the IDs in the URLs. Luckily, human resources data isn’t so sensitive… we didn’t buy that system :-).


hashids can make an integer id opaque, but it is a very weak kind of encryption. A determined attacker with access to enough hashids can recover the salt in use and carry out the enumeration attack anyway.

I agree with you that flood protection should be the first line of defense. A UUID key can prevent the enumeration but not the DDoS.


It’s not just you. They are an eyesore. But it’s about trade-offs.

Strong/static typing can be a pain in the ass to work with compared to weak/dynamic typing. Working with something like Rust’s borrow checker is more annoying than, say, working in a more permissive environment such as C++ or C.

However, certain classes of errors are largely eliminated by working in the more constraining environment, at the cost of some ergonomic loss. Sometimes those trade-offs are worth it: JavaScript, Python, PHP, … Elixir & Erlang… etc. do have their place and time compared to less forgiving languages like Rust.

I find that UUIDs can strongly prevent certain classes of data management & maintenance problems at the cost of being harder to deal with on an intuitive basis. In applications with less demanding requirements of data persistence… sure, use the integer keys. Some classes of applications aren’t as conducive to simpler remediations, like canned views, in enough cases to avoid the issues of integer based keys and the trade off swings to a UUID class of identifier. I am usually working in environments where data retention is complex and the required certainty around data management is sufficiently justified to warrant the cost of UUIDs (or similar).

I’ll get off the soapbox here, we’ve derailed the conversation a bit, but I do think it’s useful to think about these things.