Yes, you are right, for small datasets it is doable. On the other hand, since they are comparing it with Cassandra, I just assumed they have “a lot” of data.
In my experience the dets backend is not feasible for large amounts of data. A single dets file is limited to 2GB, so the small amount of data each table can hold, combined with the overhead of managing table partitioning, is not really worth it. Assume you have a dataset of 1TB: that in itself means at least 500 fragments, and to cater for growth you want more than that. And because fragmentation performs best when the number of fragments is a power of two, the next step up is 1024 fragments, all of which you need to manage.

The time it takes to change the number of fragments is also not very optimized. The process uses temporary ETS tables of type bag or duplicate_bag with O(N) behaviour, so if you have more than 10-20K keys it will take forever to add a single fragment. (Actually, I have not tested this with dets, only the disc_copies backend, so if it differs there I may be wrong.)
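To make the management overhead concrete, here is a rough sketch of what a fragmented Mnesia table at that scale looks like. The table name, record, and single-node pool are hypothetical; it assumes Mnesia is already started with a disc schema:

```erlang
%% Sketch only: a fragmented disc_copies table sized for the 1TB example.
%% Table/record/node names are made up for illustration.

-record(kv, {key, value}).

create() ->
    mnesia:create_table(kv,
        [{attributes, record_info(fields, kv)},
         {frag_properties,
             [{n_fragments, 512},        %% power of two, per the advice above
              {n_disc_copies, 1},
              {node_pool, [node()]}]}]).

%% Growing towards 1024 fragments means one change_table_frag call per
%% new fragment; each call rehashes keys out of the existing fragments,
%% which is where the O(N) cost shows up.
add_fragment() ->
    mnesia:change_table_frag(kv, {add_frag, [node()]}).

%% All access has to go through the mnesia_frag activity module so that
%% keys hash to the right fragment.
write(Key, Value) ->
    mnesia:activity(transaction,
                    fun() -> mnesia:write(#kv{key = Key, value = Value}) end,
                    [], mnesia_frag).
```

Note that growing from 512 to 1024 fragments is 512 separate `add_frag` operations, each paying that rehashing cost, which is the management burden described above.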