On a server my DB will be shared with another application, which is written in Rust. Some tables will probably be used exclusively by the Rust application. Some, such as the users table, will be used by both Rust and Elixir.
What's more, the users table will have some columns that are needed only by one app, and others needed only by the other. Meaning, the Rust or Elixir app will work with the columns it needs whilst ignoring the rest, and vice versa.
So, could there arise any issues with all this jazz?
Anything that has to do with the client code in Elixir, with my Ecto models?
Of course there might be issues. At the very least you're looking at stale data if one app updates shared table columns while the other read their values just a millisecond earlier and then tries to update them.
But if they’re never updating the same table columns then the blast radius is limited to only the possibility of one app seeing slightly stale data.
Another potential issue is DB utilisation. If the server can take 200 connections at a time but your Elixir app holds e.g. 100 of them and the Rust app tries to reserve 150, then that's a problem. Though that can be tuned manually pretty easily.
In general, if your apps don’t assume too much about the database then it can work just fine. So they should probably not cache much from it (or not cache at all).
Database servers (generically speaking) don’t care what connects to them: they don’t differentiate between Elixir applications and Rust applications (or any other kind of application). From the database server’s perspective it’s all just client connections where the clients connect and run queries.
To reinforce and build on what @dimitarvp has said, there are database usage issues to consider regardless of what application or technology is connecting as a client. Most of this is about dealing with concurrent database accesses and the possibility of concurrent data changes. These issues can exist even within the same application: if you only had a single Elixir application, but it allowed different users in different sessions to modify the same record… you could end up with the “dirty write” scenario @dimitarvp spoke of if you didn’t manage such changes correctly in the application.

The only thing that makes multiple applications accessing the same database/data any different is that managing the data access practices across multiple applications is inherently more difficult… maybe the different applications are developed/maintained by different developers or teams, etc. It isn’t a technology problem nearly so much as it’s a management problem: ensuring consistent definitions of what data access/change requirements need to be supported across the portfolio, and then implementing them correctly.
If it’s any comfort, in enterprise scenarios, it’s not at all uncommon for a number of different applications and integration tools to access the same database and same data. Everything above applies: you have to manage it correctly.
I forgot to address this. Assuming that you’re using a very simple architecture of a single database with multiple clients accessing it, rolling out migrations and deploying application code need to be coordinated.
Naturally, if the migrations originate in the Elixir application and change tables and such that the Rust application knows about… if those changes are “breaking”, updates to the Rust application will need to be rolled out at basically the same time the migrations roll out. If you’re trying to do online updates: first, the database will tend to be pretty resilient about handling those even while the other application is accessing data… even if that data is in a table that’s changing (assuming PostgreSQL here). But with concurrent database changes and data accesses you do start to get into murkier territory, where a good understanding of the database’s locking semantics becomes important. Beyond that, zero-downtime updates get kinda complex, and there are different techniques and approaches which are very dependent on the details (thinking in general and not necessarily in the context of Elixir/Ecto migrations).
To be very specific regarding migrations: you should be fine as long as you only add columns to the users table and don’t modify any of the existing columns used by both applications.
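A minimal sketch of what such an additive-only migration might look like (the column names here are made up for illustration):

```elixir
defmodule MyApp.Repo.Migrations.AddElixirOnlyColumnsToUsers do
  use Ecto.Migration

  def change do
    alter table(:users) do
      # New, nullable (or defaulted) columns are safe: the running Rust
      # app simply never selects them. Renaming or dropping columns the
      # other app reads would be a breaking change requiring a
      # coordinated deploy.
      add :newsletter_opt_in, :boolean, default: false
      add :locale, :string
    end
  end
end
```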
Others have already pointed out connection pooling/utilization and potentially weird consistency issues if you’re not careful about how you’re using caching/read-after-write/stale reads. App A locks out a user, but App B has a cache of all users… etc. If you do a select-then-update, that could be a concurrency issue on a row if it’s not locked properly. Etc, etc.
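For the select-then-update case, one common fix (sketched here with Ecto; `Repo`, `User`, `user_id`, and `amount` are placeholder names) is to take a row lock inside a transaction so the read and the write are atomic:

```elixir
import Ecto.Query

# SELECT ... FOR UPDATE blocks other transactions (from Elixir *or* Rust)
# from modifying the row until this transaction commits or rolls back.
Repo.transaction(fn ->
  user =
    from(u in User, where: u.id == ^user_id, lock: "FOR UPDATE")
    |> Repo.one!()

  user
  |> Ecto.Changeset.change(balance: user.balance - amount)
  |> Repo.update!()
end)
```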
Could you just have three different databases? One for Rust, one for Elixir, and one for shared data like the users table? Of course that means you can’t have foreign keys to shared tables, but maybe it’s worth it to avoid connection limits and such. If you are using PostgreSQL you can also use advisory locks to lock specific data, by hashing the data to get a lock ID. We do this in our app by locking on two 32-bit numbers: the first 32-bit number is the type of lock, and the second is a MurmurHash v3 value minus 2147483648 to get it into the 32-bit integer range.
Edit: by hash I mean a hash of something unique, like a user ID. If you are using 32-bit integers for IDs you can also use the ID directly. And you have to have the discipline to use the advisory lock identically from both Rust and Elixir. If you are locking multiple things, you need to always lock them in the same order to avoid deadlocks.
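A sketch of that scheme in Elixir. The post above uses MurmurHash v3; since murmur isn’t in the Elixir standard library, `:erlang.phash2/2` stands in as the hash here — in practice you’d pick a murmur3 implementation available to both Rust and Elixir, because both sides must compute the exact same key. The module, lock-type numbers, and `with_lock/4` helper are all made up for illustration:

```elixir
defmodule AdvisoryLock do
  # Lock "types" shared by convention between the Rust and Elixir apps.
  @lock_types %{user: 1, order: 2}

  # phash2/2 with 2^32 yields 0..2^32-1; subtracting 2_147_483_648
  # shifts the result into the signed 32-bit range Postgres expects.
  def key(type, unique_term) do
    {@lock_types[type],
     :erlang.phash2(unique_term, 4_294_967_296) - 2_147_483_648}
  end

  # Takes a transaction-scoped advisory lock on (type, hash); it is
  # released automatically at commit or rollback.
  def with_lock(repo, type, unique_term, fun) do
    {type_id, hash} = key(type, unique_term)

    repo.transaction(fn ->
      repo.query!("SELECT pg_advisory_xact_lock($1, $2)", [type_id, hash])
      fun.()
    end)
  end
end
```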
I’m not sure which database server you are using, but some do allow listening to events. So both the Rust app and the Elixir app could listen to those events when something changes in the users table, for example, and invalidate their caches. I haven’t used them with PostgreSQL, but from looking at it, it supports NOTIFY and LISTEN, which you could use to achieve this.
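With PostgreSQL that could look roughly like this, assuming Postgrex (which ships with Ecto’s Postgres adapter); the channel, trigger, and cache names are made up. A trigger NOTIFYs on changes to users, and each app LISTENs and drops its cached entry:

```elixir
# SQL side (run once, e.g. in a migration):
#   CREATE FUNCTION notify_users_changed() RETURNS trigger AS $$
#   BEGIN
#     PERFORM pg_notify('users_changed', COALESCE(NEW.id, OLD.id)::text);
#     RETURN NULL;
#   END $$ LANGUAGE plpgsql;
#
#   CREATE TRIGGER users_changed AFTER INSERT OR UPDATE OR DELETE ON users
#     FOR EACH ROW EXECUTE FUNCTION notify_users_changed();

# Elixir side: a long-lived listener process.
{:ok, pid} = Postgrex.Notifications.start_link(MyApp.Repo.config())
{:ok, _ref} = Postgrex.Notifications.listen(pid, "users_changed")

receive do
  {:notification, ^pid, _ref, "users_changed", user_id} ->
    # Invalidate whatever this app cached for that user.
    Cache.delete({:user, user_id})
end
```

The Rust side would do the equivalent with its Postgres client’s LISTEN support, so both caches stay honest about the shared table.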
I think there’s a fair amount I disagree with regarding your recommendations. I will assume PostgreSQL in this discussion as phrases like “different databases” can have substantially different ramifications and meanings for different RDBMS products.
I actually think this suggestion increases the complexity significantly rather than reducing it. If you can split the Elixir & Rust databases into independent stack-dependent silos… you already don’t have any significant data-related contention to worry about between the Elixir and Rust applications. At best you may relieve available connection contention, but you may not even do that (more on this later). And you now have other complexities to support the split:

- You may now need to prepare and deploy independent database migrations across the different databases.
- You do lose database integrity constraints like foreign keys… but you also lose the other consistency guarantees, such as being able to perform database transactions arbitrarily across the data set, meaning the applications now have to implement any cross-database data integrity themselves.
- Some backup/restore/replication scenarios may become more complex, depending on what we mean by “multiple databases”.
- Finally, the third database, which both the Elixir and Rust applications access, still requires you to solve the same concurrency issues you have with a single database.

In effect, the only issue this suggestion can solve is the connection count issue.
PostgreSQL has historically had issues with client database connections; and to be clear, many other database products don’t suffer from connections as a bottleneck the way PostgreSQL does. In PostgreSQL connections are resource intensive, so the problem is really one of scalability. First, recent versions of PostgreSQL have made significant progress on connection scalability (for example: Improving Postgres Connection Scalability: Snapshots - Microsoft Community Hub). Second, if this really becomes an issue, there are mature methods for dealing with the limitation (https://www.pgbouncer.org/). Third, remember that these limits are per server, not per database; the only way multiple databases alleviate the connection limit issues is if you have multiple database servers, one for each database. Needless to say, this ups the complexity a lot, because backup/restore/replication issues that can still be kept reasonably consistent on a single server become relatively intractable: those PostgreSQL clusters are fully independent, including in their failure and recovery modes.
I’m going to bet that the application in question isn’t going to even approach the real connection constraints of a reasonably specced-out PostgreSQL server (big assumption on my part). If it were, the questioner would already have to be considering ways to manage the problem with things like PgBouncer and the like. This means the real issue @dimitarvp was getting at with connection limits is 1) a PostgreSQL configuration problem, since the number of allowed connections is configurable; and 2) making sure that the Ecto connection pool size on the Elixir side and the equivalent setting on the Rust side aren’t together consuming the available connections too aggressively. This should be a fairly easy problem to solve. On the Elixir side the biggest issue (I think) is that the standard Elixir connection pools take all the configured connections at the get-go, without any dynamic management tied to the actual load of the application: you say you need 100 connections, you’re going to make 100 connections whether anyone is using the application or not. Not sure on the Rust side… been quite a while since I’ve looked there.
This is something that I agree with. If you have multiple clients and it’s important to, for example, only allow a single client application to update the data at any time… you can have that application tell the database: “hey, I’m going to work with the data… let anybody else that might be interested know of my intent.” It’s a solid technique, but just like anything dealing with concurrent access patterns, it needs to be used diligently. You can do things like hashing IDs and such, but I’ve also seen it used where just the data type and a simple integer ID associated with that type are used: many viable roads to make advisory locks work well. The biggest issue in an enterprisy world is that many of the applications and tools that might concurrently access the data won’t know about the advisory locks, and they are just advisory… in those cases you’d have to use other guarantees.
Also, depending on access patterns, you could use the Ecto “optimistic locking” feature (Ecto.Changeset — Ecto v3.10.3) where you’re recording a row version on the record to recognize potentially dirty writes.
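Sketched below, assuming both apps agree to bump the same version column (the schema fields here are placeholders; `lock_version` is Ecto’s default column name for this feature):

```elixir
defmodule User do
  use Ecto.Schema

  schema "users" do
    field :email, :string
    field :lock_version, :integer, default: 1
  end
end

# On update, Ecto adds `WHERE lock_version = <value read>` and bumps the
# version; it raises Ecto.StaleEntryError if another writer (Rust or
# Elixir) modified the row since we read it.
user
|> Ecto.Changeset.cast(%{email: "new@example.com"}, [:email])
|> Ecto.Changeset.optimistic_lock(:lock_version)
|> Repo.update()
```

Note the Rust side gets no help from Ecto here: it has to implement the same compare-and-bump convention by hand for the scheme to hold.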
Yes, you absolutely have concurrency issues within a single application. This is part of the reason I think the Elixir vs. Rust part of this discussion isn’t really relevant or important; you can realize all concurrent data access issues from within the same application. If I were describing the scenario from the PostgreSQL perspective (and being a bit pedantic), I think I would say, “The Ecto application is making 10 client connections to the database.” But that implies just what you said. From the PostgreSQL perspective each connection is effectively a client with all the concurrency issues that 10 applications each consuming a single connection would present.
Actually, the quote you pulled is from the advisory locks part of the discussion… really, the only time advisory locks are viable is within the same application, or when a common development organization has control of every other application that will access the database and can mandate that advisory locks be observed. Advisory locks are just that: advisory, not mandatory like the more traditional kinds of database locks.
Who knows… such contention might even be worse in an Ecto based application because enhanced concurrency is one of the big sells of Erlang/Elixir… the application may be more likely to have concurrent access patterns compared to something where concurrent application flow is more difficult to achieve.
Thinking about this a little more deeply… The number of client connections is a red herring, too. Many of the kinds of race conditions you need to be wary of when dealing with data access are because of concurrency in the application alone and can be realized even if your application only has a single database connection. This is discussed by others above where issues of caching are mentioned, but it’s worth reiterating here that many if not most data access race conditions are the result of in-application concurrent data access (in the context of this discussion) rather than anything to do with the database, database connections, or multiple clients specifically.
To be fair there are absolutely database concurrency issues unique to the database, but I don’t really think that’s the major concern in what we’ve been discussing.
Caching of data goes beyond database access; we cannot classify it as a race condition in terms of database access, because the cache is a second storage source. If we have one connection, we are more or less guaranteed serial writes/reads in theory, assuming a Postgres connection doesn’t have any concurrency built in.
If we talk in terms of Rust and Elixir each having one connection to the database, then concurrent access is handled at the DBMS level and has to respect its rules; the exact same applies when Ecto opens 10 connections to the database (assuming no cache is involved). So I would say you can treat those Rust connections just as you treat the Ecto ones; there are no exceptions in this case.
Sharing a database means your programs effectively have a giant, sparsely-documented, shared API between them.
This is usually fine if things aren’t changing frequently, or if the team is small enough that knowledge about changes propagates efficiently.
Scaling on either of those dimensions will lead to the usual sorts of headaches, where a change to support application A breaks an assumption that application B depends on.
The general advice regarding this situation is “avoid it if you can”; many applications fall into this pattern because “this is easier than building an explicit API, and it’s just a temporary thing” turns into “now our applications are inseparably coupled together”.
You can configure the apps to use a certain number of connections and configure the DB to allow (at least) the sum of those two numbers.
If you’re using Ecto, you’d set the :pool_size config value to, e.g., 100. I don’t know how you’d do it in Rust with whatever library you might be using. Configuring your DB depends on what engine (and what version) you’re using, or the service providing it, but I’d be very surprised if that wasn’t documented!
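For example (assuming Ecto with the Postgres adapter; the app and repo names are placeholders), the pool size lives in the repo config, and the server-side cap in postgresql.conf:

```elixir
# config/runtime.exs — hypothetical app/repo names
config :my_app, MyApp.Repo,
  pool_size: 100   # the Elixir side will open up to 100 connections

# On the PostgreSQL side (postgresql.conf), allow at least the sum of
# both apps' pools plus headroom for admin/monitoring sessions, e.g.
# with a Rust pool of 100:
#   max_connections = 250
```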