Want to know the count of current users reading “EACH” post

I have some kind of dashboard that shows me all the jobs listed on my app. I want to implement a count of the anonymous users/viewers who are currently reading each job.

I also want to implement a count of views. How do I distinguish between users or readers? I want to know how many users/viewers saw or read each job. I have a views column for each job in my database, but how do I increment it? Should I assign some random user_id to everyone who comes to my app and check whether that user_id has visited the page before, to decide whether or not to count the view?

And by anonymous I meant that they are not registered in my app, so I don’t have a way to get their user_id or anything similar. That is not the issue, though; the issue is how I can subscribe to many jobs’ events at the same time. The screenshot is from the job_live/index.ex LiveView. Each job has its own job_live/show.ex LiveView, and when a user reads a job they are served by the show.ex LiveView. I did manage to show the currently-reading count for each job on its own, as shown below:

I am sorry if my questions are naive, but I have been trying to solve these issues for some time now and have searched a lot, but couldn’t manage to solve them.

Thanks a lot.

You may want to look at Presence, which is built on top of Channels. They are not simple to grasp, but the time spent studying them will be well worth it.


It is like implementing a like feature…

You just need join tables

with viewer_id, post_id
and user_id, post_id

usually with a unique composite key on both ids.

And if You want to read fast, You can also implement a counter cache for this.
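
For example, a minimal migration sketch for such a join table; the table and column names (:post_views, :viewer_id, :post_id) are hypothetical, not taken from the thread:

defmodule MyApp.Repo.Migrations.CreatePostViews do
  use Ecto.Migration

  def change do
    create table(:post_views) do
      add :viewer_id, :uuid, null: false
      add :post_id, references(:posts, on_delete: :delete_all), null: false

      timestamps()
    end

    # The unique composite key mentioned above: at most one row per viewer/post pair
    create unique_index(:post_views, [:viewer_id, :post_id])
  end
end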

Yes, I used it to implement the currently-reading count per post. The problem is how to make the index.ex LiveView, which shows all jobs, get notified whenever a user reads or explores any job shown there. Say I have 4 jobs: one has 4 users exploring it, one has 10, and the others have different numbers. How do I show all of these counts at the same time in index.ex? I would need index.ex to subscribe to several topics at once, where each topic is the one its show.ex LiveView broadcasts to, so that all these broadcasts from the different show.ex views end up in index.ex, which then shows all the reader counts.

Publish the events on PubSub, and subscribe from the LiveView.

I think you didn’t get the problem. The problem is that these jobs are dynamic: I delete one, create another, and so on. To publish events on PubSub I need a topic to publish to, and I think this topic must be different for each job so that I get the right count for each one. If I put all jobs on the same topic, the reading count would be the same across all jobs, which is not right. I need a way to use, for example, the job_id to create a unique topic for each job, and to make index.ex, which lists all jobs, subscribe to each of these unique topics automatically whenever a job is created. I hope that clarifies it better.

I think I did… because I have already done this 🙂


OK, then could you please explain in a little more detail how you managed to solve that?

You say that if you put all jobs on the same topic the reading count will be the same, but this is not necessarily true. When you publish the PubSub message, it can have a payload which includes the job_id. Then, when you handle the message, you use the job_id to adjust the appropriate count within the assigns/cache.
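
For illustration, a hedged sketch of that single-topic approach; the "jobs" topic, MyApp.PubSub, and the :reading_counts assign are assumptions, not actual code from this thread:

# In show.ex: publish on one shared topic, with the job_id in the payload
Phoenix.PubSub.broadcast(MyApp.PubSub, "jobs", {:reader_joined, job.id})

# In index.ex: subscribe once, then route on the job_id from the payload
def mount(_params, _session, socket) do
  if connected?(socket), do: Phoenix.PubSub.subscribe(MyApp.PubSub, "jobs")
  {:ok, assign(socket, :reading_counts, %{})}
end

def handle_info({:reader_joined, job_id}, socket) do
  counts = Map.update(socket.assigns.reading_counts, job_id, 1, &(&1 + 1))
  {:noreply, assign(socket, :reading_counts, counts)}
end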


I don’t have a viewer_id or user_id; it is a public website where any user can browse and read about the listed jobs.

Aaah… interesting. I didn’t know about passing messages on PubSub. I will read more about this; if that works, it can really solve the problem.

This is the kind of message I send…

# Snippet from inside a capture: &1 is the record being broadcast
%{
  topic: "schedule",
  message: %{type: :recording_started, id: &1.id}
}

It contains the type and the id…

# Likes an event and, inside the same transaction, bumps the
# counter cache (requires `import Ecto.Query` for `from`)
user
|> Accounts.base_change_user()
|> Ecto.Changeset.put_assoc(:liked_events, [event | user.liked_events])
|> Ecto.Changeset.prepare_changes(fn changeset ->
  query = from Event, where: [id: ^event.id]
  changeset.repo.update_all(query, inc: [likes_count: 1])
  changeset
end)

This is the kind of query I use to like an event and update the counter cache.

Then use a session_id that You can generate and use… When an anonymous user comes in, I add a UUID to the session, and maybe persist it in a token.

At least this will avoid incrementing the views when people reload their page.


Really appreciate your reply and its clarity. Thanks a lot, I will give it a try.

This is a plug to ensure a session has an id…

defmodule KokoWeb.Plugs.EnsureSessionId do
  @moduledoc """
  Add a session_id to conn and session
  """
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    case get_session(conn, :session_id) do
      nil ->
        session_id = Ecto.UUID.generate()

        conn
        |> put_session(:session_id, session_id)
        |> assign(:session_id, session_id)

      session_id ->
        conn
        |> assign(:session_id, session_id)
    end
  end
end
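
To use it, the plug would go into the browser pipeline of a standard Phoenix router; a sketch, since pipeline contents vary by app:

# In router.ex: run the plug on every browser request
pipeline :browser do
  plug :fetch_session
  plug KokoWeb.Plugs.EnsureSessionId
end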

Actually I use Presence, not PubSub, to show the current viewers/readers for each job; sorry, I misread your answer. That’s why I was puzzled when you said “message”: I meant that I didn’t know about passing messages to Presence, but I wrote PubSub. In general I will try what is mentioned in the replies here and see what I come up with.

I use Presence like so:

def maybe_count_job_viewer(socket, job) do
  topic = "job:#{job.id}"

  # Viewers already tracked on this job's topic
  initial_count =
    topic
    |> Presence.list()
    |> map_size()

  if connected?(socket) do
    Endpoint.subscribe(topic)
    Presence.track(self(), topic, socket.id, %{})
  end

  # Assign on both the connected and the static render,
  # so the template can always read @viewer_count
  assign(socket, :viewer_count, initial_count)
end

and in show.ex I handle presence like so:

def handle_info(
      %{event: "presence_diff", payload: %{joins: joins, leaves: leaves}},
      %{assigns: %{viewer_count: count}} = socket
    ) do
  # Each presence_diff carries maps of joins and leaves keyed by presence key
  current_viewers_count = count + map_size(joins) - map_size(leaves)
  {:noreply, assign(socket, viewer_count: current_viewers_count)}
end
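
Building on the replies above, one way index.ex might combine the two ideas is to subscribe to every job's Presence topic on mount and fold the presence_diff messages into one per-job map. A sketch under assumptions: it reuses the "job:#{id}" topics from above, while Jobs.list_jobs/0 and the :viewer_counts assign are hypothetical names:

def mount(_params, _session, socket) do
  jobs = Jobs.list_jobs()

  # Subscribe to each job's topic and seed its current viewer count
  viewer_counts =
    for job <- jobs, into: %{} do
      topic = "job:#{job.id}"
      if connected?(socket), do: Endpoint.subscribe(topic)
      {job.id, map_size(Presence.list(topic))}
    end

  {:ok, assign(socket, jobs: jobs, viewer_counts: viewer_counts)}
end

# The broadcast's topic tells us which job the diff belongs to
def handle_info(
      %{topic: "job:" <> job_id, event: "presence_diff", payload: %{joins: joins, leaves: leaves}},
      socket
    ) do
  delta = map_size(joins) - map_size(leaves)
  job_id = String.to_integer(job_id)  # assumes integer job ids

  viewer_counts = Map.update(socket.assigns.viewer_counts, job_id, delta, &(&1 + delta))
  {:noreply, assign(socket, :viewer_counts, viewer_counts)}
end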