Mnesia: load records one by one, or lazily like Stream?

Hi, I have an Mnesia table in my project, and a node in my OTP application sends a request every 24 hours to clean out expired data. But the table is more than 1.8 GB on disc, and I don't want to use the select function, because the table is big and I need something lazy, like Stream.map.

1 - How can I delete expired data one by one after a timeout in my GenServer's handle_info? Is this the right way?
Or what would you suggest for deleting records lazily, like Stream?

2 - How can I count all the records in a disc-based Mnesia table? Is there a lazy way to count?

Thanks

The code I am using, which I think is not lazy:

# assumes `alias :mnesia, as: Mnesia`
case Mnesia.transaction(fn ->
       Mnesia.select(Token, [{{Token, :"$1", :"$2", :"$3", :"$4", :"$5", :"$6"}, [], [:"$$"]}])
     end) do
  {:atomic, []} ->
    "no token"

  {:atomic, data} ->
    # Enum.each, not Enum.map: this is run only for the delete side effect
    Enum.each(data, fn [id, _user_id, _token, access_expires_in, _create_time, _os] ->
      if access_expires_in <= System.system_time(:second) do
        Mnesia.dirty_delete(Token, id)
      end
    end)

  _ ->
    "no token"
end

Is there a solution? :sob:
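One way to keep `select` but stay lazy is the four-argument `:mnesia.select/4`, which returns one chunk of matches at a time together with a continuation. A rough sketch (the `ExpiredTokens` module name and the chunk size of 100 are my own inventions; the match spec assumes the record layout from the code above, with the expiry timestamp in the fourth field):

```elixir
defmodule ExpiredTokens do
  @chunk 100

  # Delete all rows whose expiry timestamp is <= `now`, in chunks of
  # @chunk, so the whole result set is never held in memory at once.
  def delete_expired(now \\ System.system_time(:second)) do
    # Match spec: bind id to $1 and access_expires_in to $2,
    # keep rows where $2 =< now, and return only the id.
    spec = [{{Token, :"$1", :_, :_, :"$2", :_, :_}, [{:"=<", :"$2", now}], [:"$1"]}]

    :mnesia.transaction(fn ->
      delete_chunk(:mnesia.select(Token, spec, @chunk, :read))
    end)
  end

  defp delete_chunk(:"$end_of_table"), do: :ok

  defp delete_chunk({ids, cont}) do
    Enum.each(ids, &:mnesia.delete({Token, &1}))
    # Fetch the next chunk via the continuation and keep going.
    delete_chunk(:mnesia.select(cont))
  end
end
```

Each step touches at most `@chunk` rows, so memory stays bounded regardless of table size.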

Have you checked this out? GitHub - Nebo15/ecto_mnesia: Ecto adapter for Mnesia Erlang term database.

But I am not seeing support for Repo.stream, not at first glance anyway, so it likely won't help you. :frowning:

I don’t know anything about Mnesia. Does it support any streaming?

Blindly shooting in the dark here, but you could also check the Mnesia GitHub topic?


Regarding your first question, you might want to use the Mnesia functions first/1 and next/2 (or dirty_first/1 and dirty_next/2 if you don’t want to use a transaction).
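A sketch of that idea in Elixir (the `TokenSweeper` module name is made up, and it assumes the record layout from the original post, with the expiry timestamp in the fourth field): wrapping `dirty_first/1` and `dirty_next/2` in `Stream.unfold/2` gives a lazy key cursor, so only one key is held at a time.

```elixir
defmodule TokenSweeper do
  # Emit every key in the table lazily, one at a time, by walking
  # dirty_first/1 and dirty_next/2 inside Stream.unfold/2.
  def key_stream(table) do
    Stream.unfold(:mnesia.dirty_first(table), fn
      :"$end_of_table" -> nil
      key -> {key, :mnesia.dirty_next(table, key)}
    end)
  end

  # Delete records whose expiry timestamp (fourth field, per the
  # original post's layout) is in the past, one record at a time.
  def sweep(table \\ Token, now \\ System.system_time(:second)) do
    table
    |> key_stream()
    |> Enum.each(fn key ->
      case :mnesia.dirty_read(table, key) do
        [{^table, _id, _user_id, _token, expires, _ctime, _os}] when expires <= now ->
          :mnesia.dirty_delete(table, key)

        _ ->
          :ok
      end
    end)
  end
end
```

Note that `Stream.unfold/2` fetches the next key before the consumer deletes the current one, so the cursor stays valid while rows are being removed.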

Alternatively, foldr/3 and foldl/3 can also be used to traverse a whole table.
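For the counting question, a sketch using `:mnesia.foldl/3`, which visits one record at a time instead of loading the whole table (the `TokenStats` module name is made up):

```elixir
defmodule TokenStats do
  # Count records by folding over the table one record at a time.
  # Returns {:atomic, count} on success.
  def count(table \\ Token) do
    :mnesia.transaction(fn ->
      :mnesia.foldl(fn _record, acc -> acc + 1 end, 0, table)
    end)
  end
end
```

That said, if you only need the count, `:mnesia.table_info(Token, :size)` returns the number of records straight from table metadata, with no traversal at all.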
