Multi-tenant DB architecture will come down to their use case, as you've hinted at. There are plenty of papers and articles out there that discuss the tradeoffs of the alternative approaches in terms of isolation, maintainability, migrations, schema changes, development pain, and other factors.
Given that, I'll try to focus my answer on questions 1 and 2: We certainly have customers on both Aurora and RDS Postgres with 10k databases. Some have orders of magnitude more. It works. However, they'd certainly want to think about their future plans for scaling. There is a point where even just the sheer number of files on disk causes Postgres (or the OS) grief. If their schema is complex, with many relations per database, they could run into issues much sooner. It's tough to give solid numbers, but at 10k the approach is feasible enough to be worth a real evaluation against their own schema and workload (see the rough estimate sketched below).
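As a back-of-envelope illustration of the "number of files" concern, here's a minimal Python sketch (psycopg2; the endpoint, credentials, database name, and tenant count are placeholders/assumptions) that counts relations in one representative tenant database and extrapolates, since every table, index, TOAST table, etc. is at least one file on disk:

```python
# Rough estimate only: (tenant databases) x (relations per database) approximates
# the number of files Postgres and the OS have to manage.
import psycopg2

TENANT_DATABASES = 10_000  # assumed tenant count

conn = psycopg2.connect(
    host="your-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",  # placeholder endpoint
    dbname="representative_tenant",                                # placeholder database
    user="readonly_user",                                          # placeholder credentials
    password="...",
)
with conn, conn.cursor() as cur:
    # Tables, indexes, TOAST tables, materialized views -- each is at least one file.
    cur.execute("SELECT count(*) FROM pg_class WHERE relkind IN ('r', 'i', 't', 'm')")
    relations_per_db = cur.fetchone()[0]

print(f"~{TENANT_DATABASES * relations_per_db:,} files across all tenant databases")
```

If that number lands in the millions, that's a signal to test carefully before committing to database-per-tenant at this scale.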
Another potential sticking point is connection management. If their workload calls for frequent access to many customers/databases, it's worth noting that each Postgres connection is bound to a single database. Even at 10k, they won't realistically be able to keep a warm connection pool for every customer, so creating and tearing down connections will introduce overhead they wouldn't encounter with something like table-per-customer or schema-per-customer.
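To make that concrete, here's a minimal sketch (Python with psycopg2; the `tenant_<id>` database naming, host, credentials, and pool sizes are assumptions, not a prescribed design) of pooling only the most recently active tenants, since pooling all 10k isn't practical:

```python
# Bounded LRU of per-tenant connection pools: only the hottest tenants keep warm
# connections; everyone else pays connection setup cost on access.
from collections import OrderedDict
from psycopg2.pool import SimpleConnectionPool

MAX_POOLED_TENANTS = 200  # assumed cap; tune to your connection budget

class TenantPools:
    def __init__(self, host, user, password):
        self._dsn = dict(host=host, user=user, password=password)
        self._pools = OrderedDict()  # tenant_id -> SimpleConnectionPool

    def _pool_for(self, tenant_id):
        if tenant_id in self._pools:
            self._pools.move_to_end(tenant_id)          # mark as recently used
        else:
            if len(self._pools) >= MAX_POOLED_TENANTS:  # evict the coldest tenant
                _, old_pool = self._pools.popitem(last=False)
                old_pool.closeall()
            self._pools[tenant_id] = SimpleConnectionPool(
                minconn=0, maxconn=2, dbname=f"tenant_{tenant_id}", **self._dsn
            )
        return self._pools[tenant_id]

    def query(self, tenant_id, sql, params=None):
        pool = self._pool_for(tenant_id)
        conn = pool.getconn()
        try:
            with conn, conn.cursor() as cur:
                cur.execute(sql, params)
                return cur.fetchall()
        finally:
            pool.putconn(conn)
```

The point of the sketch is the tradeoff itself: any cap on pooled tenants means the long tail of customers hits fresh connection setup, which simply doesn't exist when all tenants share one database.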
There are a lot of other things to consider, as I mentioned, but yes, people do run 10k databases. As always, they should test with something representative of their specific workload.