Is there a way to load a complex index from the database into redis?

Hello, everyone,

I need to implement the generation of unique meetups. I want to replace

Enum.find(tail, &check_unique_pair(head, &1))

but I don’t know what’s best.

----migration----------

create unique_index(:meetups, [:meetuper1, :meetuper2])

-------model---------------------------------

  schema "meetups" do
    field :met_at, :naive_datetime
    field :meetuper1_rating, :integer
    field :meetuper2_rating, :integer
    field :average_rating, :float
    field :is_resolved, :boolean

    timestamps()

    belongs_to :user1, User, foreign_key: :meetuper1
    belongs_to :user2, User, foreign_key: :meetuper2
  end

  def active_users_query do
    from(u in User,
      where: u.is_status == true,
      select: u.id,
      order_by: u.id
    )
  end
  
  def unresolved_users_meetups_query() do
    # Users that appear as the first participant of an unresolved meetup.
    # (The join condition already pins u.id to m1.meetuper1, so the extra
    # or-clause from before was redundant. Trailing commas before the closing
    # paren were also a syntax error.)
    first_meetupers = from(u in User,
      join: m1 in Meetup,
      on: u.id == m1.meetuper1,
      where: m1.is_resolved == false,
      select: u.id
    )

    # Users that appear as the second participant of an unresolved meetup.
    second_meetupers = from(u in User,
      join: m2 in Meetup,
      on: u.id == m2.meetuper2,
      where: m2.is_resolved == false,
      select: u.id
    )

    # UNION (unlike UNION ALL) removes duplicate ids by itself.
    first_meetupers |> union(^second_meetupers)
  end

  def users_meetup_query(meetuper1, meetuper2) do
    # Match the pair in either order.
    from(m in Meetup,
      where:
        (m.meetuper1 == ^meetuper1 and m.meetuper2 == ^meetuper2) or
          (m.meetuper1 == ^meetuper2 and m.meetuper2 == ^meetuper1)
    )
  end

----context-------------------

  def generate_meetups() do
    active_users = 
      Meetup.active_users_query() 
      |> Repo.all()

    unresolved_users_meetups = 
      Meetup.unresolved_users_meetups_query()
      |> Repo.all()

    correct_users = active_users -- unresolved_users_meetups
    
    unique_pair = get_unique_pair(correct_users, [])
    create_unique_meetups(unique_pair)
  end

  def get_unique_pair([head | tail] = users, meetups_list) do
    case Enum.find(tail, &check_unique_pair(head, &1)) do
      nil ->
        # No unmatched partner for head; keep pairing the rest.
        get_unique_pair(tail, meetups_list)

      meetuper2 ->
        # Record the pair and drop both users from the pool.
        meetups_list = meetups_list ++ [head, meetuper2]
        users = users -- [head, meetuper2]
        get_unique_pair(users, meetups_list)
    end
  end

  def get_unique_pair([], meetups_list), do: meetups_list

  def check_unique_pair(meetuper1, meetuper2) do
    # The pair is unique if no meetup between these two users exists yet.
    query = Meetup.users_meetup_query(meetuper1, meetuper2)
    is_nil(Repo.one(query))
  end

  def create_unique_meetups([meetuper1 | tail]) do
    # Schedule every meetup for 14:00 UTC today.
    now = NaiveDateTime.utc_now()
    {:ok, met_at} = NaiveDateTime.new(now.year, now.month, now.day, 14, 0, 0)

    # The list is flat ([user1, user2, user3, user4, ...]), so take the
    # partner with `|`. The previous `[meetuper2, tail2] = tail` only
    # matched a two-element list and crashed on anything longer.
    [meetuper2 | rest] = tail

    create_meetup([meetuper1, meetuper2], %{met_at: met_at})
    create_unique_meetups(rest)
  end

  def create_unique_meetups(_single_meetuper_or_empty) do
    {:ok, "all meetups created"}
  end

Indices belong to the database; you usually gain nothing by copying the index itself elsewhere.

Most databases don’t even let you access the actual index.

Maybe I’m getting you wrong, though.

Caching of individual rows should be totally doable.

Maybe it’s easier to create a new table or a materialized view that acts like a more efficient index. Also, Postgres supports partial indexes.
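In case it helps, here’s a minimal sketch of the partial-index idea as an Ecto migration. It assumes the meetups schema from the post; the migration module and index name are made up:

  defmodule MyApp.Repo.Migrations.AddUnresolvedMeetupsIndex do
    use Ecto.Migration

    # Partial index (Postgres): only unresolved meetups are indexed, which
    # keeps the index small if most rows eventually get resolved.
    def change do
      create index(:meetups, [:meetuper1, :meetuper2],
               where: "is_resolved = false",
               name: :unresolved_meetups_pair_index
             )
    end
  end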

Anyway, it would be easier to help if you told us more about the problem.

Meta comment: it’s interesting how there are so many questions lately about caching database values in Redis. I wonder if they are related (same project? Same task?). If so, it would be great to have more context, because without knowing the goal, these architectures sound quite over-engineered, as many people have already noted.

I think people on the forum could provide better help if we knew what the ultimate goal of this task is.

Also, if the questions are related, it might be better to track them in a single thread (or fewer ones), to avoid confusion and “noise” on the forum.


I updated my post.

Which query or operation needs to be optimized? Is it the retrieval of meetups by participants?

Before you consider caching, make sure that your data model is well structured, and that you have the right DB indexes in place. Caching can be hard to implement right, and can often be a premature optimization.
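As one concrete example of “the right DB indexes”: the users_meetup_query above checks the pair in both orders, so on Postgres a unique expression index over the normalized pair can serve both lookups and enforce uniqueness regardless of order. A hypothetical migration sketch (the module and index name are mine, not from the thread):

  defmodule MyApp.Repo.Migrations.AddNormalizedPairIndex do
    use Ecto.Migration

    # Hypothetical: LEAST/GREATEST normalize the pair so that (a, b) and
    # (b, a) hit the same index entry. Postgres-specific SQL, hence execute/2.
    def change do
      execute(
        "CREATE UNIQUE INDEX meetups_normalized_pair_index ON meetups (LEAST(meetuper1, meetuper2), GREATEST(meetuper1, meetuper2))",
        "DROP INDEX meetups_normalized_pair_index"
      )
    end
  end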

I don’t get all the code, but it seems solvable with a more efficient query. How do you query this?
You should not query ALL meetups and then throw Elixir at the result to find the unique ones.
Use SELECT DISTINCT or something like that.
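For what it’s worth, the two unioned queries in the post don’t need to join through the users table at all; the ids are already on the meetup rows, and UNION (unlike UNION ALL) deduplicates by itself, which covers the DISTINCT concern. A sketch, assuming the Meetup schema above:

  import Ecto.Query

  # Collect the ids of users with an unresolved meetup in one round trip.
  def unresolved_meetuper_ids_query do
    firsts = from(m in Meetup, where: m.is_resolved == false, select: m.meetuper1)
    seconds = from(m in Meetup, where: m.is_resolved == false, select: m.meetuper2)
    union(firsts, ^seconds)
  end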


I updated my post.

I actually don’t like your data model, and I think it will cause (more) problems down the line.

If you have meetups, then I think you should also have a meetupers table that references the meetups table.

Then you can do a more performant query. Maybe something like:

SELECT * FROM meetups
INNER JOIN meetupers ON meetups.id = meetupers.meetups_id AND meetupers.meetuper = ^meetuper1
WHERE EXISTS (SELECT 1 FROM meetupers m2 WHERE meetups.id = m2.meetups_id AND m2.meetuper = ^meetuper2)
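If it helps, a rough Ecto translation of that SQL might look like the sketch below. It’s a sketch only: the Meetuper schema and its :meetups_id and :meetuper fields are assumptions, and exists/1 with parent_as/1 needs a reasonably recent Ecto (3.8+):

  import Ecto.Query

  # Find a meetup both users participate in, via the suggested join table.
  # Meetuper, :meetups_id and :meetuper are hypothetical names.
  def meetup_between_query(meetuper1, meetuper2) do
    second_participant = from(p2 in Meetuper,
      where: p2.meetups_id == parent_as(:meetup).id and p2.meetuper == ^meetuper2)

    from(m in Meetup, as: :meetup,
      join: p1 in Meetuper,
      on: p1.meetups_id == m.id and p1.meetuper == ^meetuper1,
      where: exists(subquery(second_participant))
    )
  end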