Hi @tduccuong,
Yes, as you said, the recommended way to create a secondary index is to create a second collection in the same database, keyed by the indexed field and using the “primary keys” as values.
The main collection and its index can be updated together in atomic transactions (I am typing this from a phone, so I will definitely make some mistakes, but hopefully it gives you the gist of it):
```elixir
# This implements:
#
#   - a collection of users by ID, with keys like {:users, id} and %User{}
#     structs as values
#
#   - a collection implementing the secondary index: users by name, with keys
#     like {:users_name_idx, name} and lists of user IDs as values

def insert_user(db, user) do
  CubDB.transaction(db, fn tx ->
    tx = CubDB.Tx.put(tx, {:users, user.id}, user)
    ids = CubDB.Tx.get(tx, {:users_name_idx, user.name}, [])
    tx = CubDB.Tx.put(tx, {:users_name_idx, user.name}, [user.id | ids])
    {:commit, tx, user}
  end)
end

def get_user_by_id(db, id) do
  CubDB.get(db, {:users, id})
end

def get_users_by_name(db, name) do
  # Read the index entry and the user entries from the same snapshot, so the
  # two reads are consistent with each other.
  CubDB.with_snapshot(db, fn snap ->
    ids = CubDB.Snapshot.get(snap, {:users_name_idx, name}, [])
    keys = Enum.map(ids, fn id -> {:users, id} end)
    CubDB.Snapshot.get_multi(snap, keys) |> Map.values()
  end)
end
```
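Deleting a user works along the same lines: remove the user entry and update (or drop) the corresponding index entry within the same atomic transaction. Here is a sketch of how that might look (again untested, so treat it as a starting point):

```elixir
# Sketch: remove a user and keep the name index in sync, atomically.
def delete_user(db, id) do
  CubDB.transaction(db, fn tx ->
    case CubDB.Tx.get(tx, {:users, id}) do
      nil ->
        {:cancel, :not_found}

      user ->
        tx = CubDB.Tx.delete(tx, {:users, id})
        ids = CubDB.Tx.get(tx, {:users_name_idx, user.name}, []) |> List.delete(id)

        # Drop the index entry when no user with that name is left.
        tx =
          if ids == [] do
            CubDB.Tx.delete(tx, {:users_name_idx, user.name})
          else
            CubDB.Tx.put(tx, {:users_name_idx, user.name}, ids)
          end

        {:commit, tx, user}
    end
  end)
end
```

Assuming the functions above live in some module (I'll call it `UserStore` here, purely for the sake of the example) and that a `%User{}` struct with `:id` and `:name` fields is defined elsewhere, usage would look roughly like this:

```elixir
# The module name and data directory are made up for this example.
{:ok, db} = CubDB.start_link(data_dir: "tmp/users_db")

UserStore.insert_user(db, %User{id: 1, name: "Alice"})
UserStore.insert_user(db, %User{id: 2, name: "Alice"})
UserStore.insert_user(db, %User{id: 3, name: "Bob"})

UserStore.get_user_by_id(db, 3)
#=> %User{id: 3, name: "Bob"}

UserStore.get_users_by_name(db, "Alice")
#=> [%User{id: 1, name: "Alice"}, %User{id: 2, name: "Alice"}] (order not guaranteed)
```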
At the moment there is no plan to implement secondary indexes as a first-class concept in CubDB. That’s because CubDB strives to be, first of all, a simple, minimal, and versatile building block for higher-level products.
That said, I would be lying if I said that I am not hoping to find the time to build a higher-level library on top of it, providing facilities like “tables” and indices. But if you really depend on that level of abstraction, it’s probably better to use an SQL solution like SQLite.