I have recently been asked to add some tracking functionality for changes made in an Ecto database. At first, I planned to use an Ecto.Multi to wrap the Ecto functions, but that would require lots of changes to the code base. Then I found ex_audit, which I tried in a demo environment and which seems to work. My concern is that it replaces the original Repo functions insert, update, delete… I'm not sure how long that will last before a change in Ecto breaks the library, and I'm not sure it is still being maintained. I also found a post where @josevalim suggests someone check whether the prepare_changes function would work for their use case, and I think it might work for mine.
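For anyone curious what the prepare_changes approach looks like, here is a minimal sketch. `Ecto.Changeset.prepare_changes/2` is a real Ecto function that runs the given callback inside the Repo transaction, with `changeset.repo` populated; the `MyApp.AuditEntry` schema and its fields below are hypothetical:

```elixir
defmodule MyApp.Accounts.User do
  use Ecto.Schema
  import Ecto.Changeset

  schema "users" do
    field :email, :string
    timestamps()
  end

  def changeset(user, attrs) do
    user
    |> cast(attrs, [:email])
    |> prepare_changes(fn changeset ->
      # changeset.repo is the Repo running the transaction, so the audit
      # row is written atomically with the user change.
      changeset.repo.insert!(%MyApp.AuditEntry{
        action: to_string(changeset.action),
        schema: "users",
        changes: changeset.changes
      })

      changeset
    end)
  end
end
```

The callback must return the changeset, and it only fires for operations that go through this changeset, which is the main limitation compared to trigger-based tools.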
Having said that, and just before I decide which way to go, I would like to ask you guys with more experience how you have solved this issue.
I don’t want to speak for José and his team, but Dashbit’s Bytepack includes an AuditLog struct and plugs, which you may find useful:
and Hex.pm also has an implementation that is similar but different:
We’ve been using an adapted version of Bytepack’s AuditLog for about a year now, and it’s been working great for us. We don’t query the data unless needed, but Hex.pm’s version also has indexes on the JSONB column to speed that up.
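A rough sketch of what such a table can look like (the field names here are illustrative, not Bytepack's or Hex.pm's actual schema):

```elixir
defmodule MyApp.AuditLog do
  use Ecto.Schema

  schema "audit_logs" do
    field :user_id, :integer
    field :action, :string      # e.g. "accounts.update_user"
    field :ip_address, :string
    field :params, :map         # stored as JSONB on Postgres
    timestamps(updated_at: false)
  end
end

# In a migration, a GIN index makes querying the JSONB column practical:
#
#     create index(:audit_logs, [:params], using: "GIN")
```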
I think the first main question you need to ask yourself is: Are you auditing changes to rows in tables, or are you auditing actions performed by users?
The distinction is this: when auditing users, my experience is that you are better off thinking in terms of the logical operations (often at the API / controller layer) that users perform and the parameters they pass in, and including information like IP address, request ID, and authorization information. If you are auditing rows in tables, then yes, you'll probably want something Repo- or trigger-based.
In my experience, most of the time you want an audit log you're really trying to audit users, and that isn't well handled by the trigger- or Repo-based approaches. A single user operation will often touch multiple rows in multiple tables, and as your application develops you may need to use functions like insert_all or update_all, which are more difficult to track from a Repo perspective.
Yeah, for user-level operation auditing I'd just store an audit trail in a separate DB table from inside the context functions. Everything that needs to interact with persistent storage should only go through the context functions anyway.
So you just append a few lines of code in there and you’re set.
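For example, those few lines can be an extra step in an Ecto.Multi, so the domain write and the audit write commit in one transaction. `MyApp.Repo`, `MyApp.AuditLog`, and the `audit_meta` argument are placeholders for whatever your app uses:

```elixir
defmodule MyApp.Accounts do
  alias Ecto.Multi
  alias MyApp.{Repo, AuditLog}

  def update_user(user, attrs, audit_meta) do
    Multi.new()
    |> Multi.update(:user, MyApp.Accounts.User.changeset(user, attrs))
    |> Multi.insert(:audit, fn %{user: user} ->
      # The audit row rolls back together with the user update on failure.
      %AuditLog{
        user_id: audit_meta.actor_id,
        action: "accounts.update_user",
        params: %{"user_id" => user.id, "attrs" => attrs}
      }
    end)
    |> Repo.transaction()
  end
end
```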
Those are good questions for framing the problem. My answer is a bit of both: we need someone to blame (user actions) and to know what they did (changes to the rows). I could propose giving the end user only limited information, but knowing the users, they always want more, so I am planning for the whole enchilada.
You are raising a very important issue: what happens when an action/transaction touches multiple tables. I'm still not sure how I'm going to address this use case, but carbonite seems appealing.
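Carbonite handles the multi-table case with database triggers: every change inside a transaction is captured, and you attach metadata once per transaction. A sketch based on my reading of Carbonite's README (check the current docs for the exact API; `order_changeset`, `item_rows`, and the meta map are placeholders):

```elixir
Ecto.Multi.new()
|> Carbonite.Multi.insert_transaction(%{meta: %{type: "order_checkout", user_id: user.id}})
|> Ecto.Multi.update(:order, order_changeset)
|> Ecto.Multi.insert_all(:items, MyApp.OrderItem, item_rows)
|> MyApp.Repo.transaction()
```

Because the triggers fire at the database level, even insert_all/update_all calls get captured, which addresses the concern raised above about Repo-based approaches.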