Question on overall design

0

Hello all, we would like some guidance on whether the design below looks okay for our customer use case, and whether someone has already implemented something similar.

We currently have financial-systems transaction data streaming into an on-premises Oracle Exadata (X9) database. This database supports processing of 400 million transactions per day. A single transaction for us is a combination of 7-8 inserts into different transaction tables, with indexes, unique constraints, etc. defined on them. Transactions are processed/committed in batches (~1000 batch size) in the database, and the system persists data for ~6 months. We also have all sorts of OLAP applications (daily/monthly batch reports) running on the same database, along with some user-facing UI applications showing customer transactions. So it currently serves a hybrid workload and is a one-stop solution for all use cases.
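To make the write pattern concrete, the batching described above can be sketched roughly as follows (a minimal illustration only; the `flush` stub stands in for the real multi-row INSERTs and COMMIT our loaders issue):

```python
from itertools import islice

BATCH_SIZE = 1000        # ~1000 business transactions per commit
INSERTS_PER_TXN = 8      # each transaction fans out to ~8 table inserts

def batches(iterable, size):
    """Yield successive chunks of `size` items from any iterable."""
    it = iter(iterable)
    while chunk := list(islice(it, size)):
        yield chunk

def flush(batch):
    """Placeholder for one DB round trip: in the real system this is a
    multi-row INSERT (or array bind) per target table, then one COMMIT."""
    return len(batch) * INSERTS_PER_TXN  # rows written in this commit

transactions = range(4_500)              # pretend incoming stream
rows = sum(flush(b) for b in batches(transactions, BATCH_SIZE))
print(rows)  # 4500 txns * 8 inserts = 36000 rows across 5 commits
```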

Many of the applications are moving from on-premises to AWS as part of a modernization journey, with AWS the chosen cloud partner. The product is also expected to expand across more regions, and this system is expected to serve an increase in transaction volume. In addition, we have a requirement to persist transaction data for ~10 years so it is available for analytics/data-science use cases.

So the team is thinking of splitting it into two parts: 1) An OLTP-type use case in which we persist/write the transaction data quickly and show it to the UI-related apps in near real time, and this database will store at most 60-90 days of transaction data. I am not sure there is an Oracle Exadata equivalent on AWS, so the team is planning to experiment with Aurora PostgreSQL. Please correct me if there are other options we should consider.

2) Then move the data older than ~90 days into another database or object storage (S3), which will keep it for ~10 years and make it queryable through the necessary APIs. That is meant to cater to the OLAP/analytics/data-science use cases.

Is the above design okay? Also, regarding the second point, i.e. persisting the historical data in a queryable state, should we go for a database like Snowflake, or just keep it on S3 as-is and make it queryable through APIs? Please advise.

Veem
asked 3 months ago • 133 views
2 Answers
1

Overall, using Aurora for a highly transactional OLTP system is the right approach. You can leverage your AWS account team, who can bring in additional resources as needed from within AWS as well as professional services such as certified partners. Your AWS team can work with you on an initial design for your overall architecture, and subsequently conduct technical deep dives via workshops and Immersion Days so you gain a better understanding of particular AWS services. A PoC can also go a long way toward validating the capabilities of a service using your own data, or even a pilot/prototype where you pick an end-to-end use case and not only design but also build it out.

As far as historical data goes, I suggest you leverage the S3 capabilities for data archival and do not narrow down to a singular service like Snowflake just yet. Having data on S3 in open formats, be it file formats like Parquet or table formats like Iceberg, gives you the freedom to choose any compute of your choice for querying the data. You can choose Athena, EMR, Redshift or even partner services like Snowflake - pick whichever service meets your price-performance needs and feel free to swap one out for another if required.
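One reason open formats keep your compute options open is that the on-S3 layout itself is engine-neutral. A sketch of a Hive-style, date-partitioned layout (the bucket and prefix names here are made up for illustration) that Athena, EMR/Spark, Redshift Spectrum, and Snowflake external tables can all prune by date:

```python
from datetime import date, timedelta

BUCKET = "my-archive-bucket"   # hypothetical bucket name
PREFIX = "transactions"        # hypothetical table prefix

def partition_key(d: date) -> str:
    """Hive-style partition path (dt=YYYY-MM-DD); query engines can skip
    entire partitions when a query filters on the partition column."""
    return f"s3://{BUCKET}/{PREFIX}/dt={d.isoformat()}/part-0000.parquet"

# e.g. one object per day for the archival window beyond 90 days
start = date(2024, 1, 1)
keys = [partition_key(start + timedelta(days=i)) for i in range(3)]
for k in keys:
    print(k)
```

Table formats like Iceberg add a metadata layer on top of exactly this kind of layout, giving you schema evolution and transactional semantics without locking you into one engine.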

You can also leverage public resources to familiarize yourself with strategies, guides, and patterns that help accelerate your cloud migration, modernization, and optimization projects:

  1. AWS Prescriptive Guidance (https://aws.amazon.com/prescriptive-guidance/)
  2. Large migrations to the AWS Cloud (https://aws.amazon.com/prescriptive-guidance/large-migrations)
  3. Migration tools (https://aws.amazon.com/prescriptive-guidance/migration-tools/)
AWS
answered 3 months ago
  • Thank you so much. I have heard that vacuuming can be a problem in a highly transactional PostgreSQL system. Is that still a problem in Aurora PostgreSQL, or has it been taken care of? Also, a high number of partitions can cause issues with query parse times, as highlighted in this blog: https://www.kylehailey.com/post/postgres-partition-pains-lockmanager-waits

    What about just using a Redis-like cache DB on top of a Snowflake database, rather than having a separate OLTP database like PostgreSQL for the OLTP use case?

  • Amazon Aurora is used by customers big and small (see https://aws.amazon.com/rds/aurora/customers/) to successfully implement highly transactional systems. You can connect with an AWS specialist via https://aws.amazon.com/contact-us/sales-support-rds/ to learn more about Amazon Aurora.

    Cache databases are intended for caching data, and their cost per GB is very different from that of OLTP databases. AWS purpose-built services like Amazon Aurora are specialized for the OLTP use case.

0

We also want to understand monitoring. In Oracle we used the data dictionary views (the AWR views) to see current and historical performance statistics such as CPU, I/O, and memory usage, and object-level contention. Do we have similar views available in Aurora PostgreSQL (apart from the Performance Insights UI tool), so we can manually fetch performance data and get an idea of how well a load test is going and whether capacity is available or we are saturating it?
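For what it's worth, Aurora PostgreSQL exposes the standard PostgreSQL statistics views (`pg_stat_activity`, `pg_stat_database`, and, once the extension is created, `pg_stat_statements`), which cover much of what AWR provides. A sketch of the kind of snapshot you might poll during a load test (the `snapshot` helper and cursor handling are illustrative; run the SQL with psql or any driver):

```python
# Statistics views available in Aurora PostgreSQL (same as community
# PostgreSQL); pg_stat_statements requires CREATE EXTENSION first.
QUERIES = {
    # top SQL by total time -- rough analogue of AWR "SQL ordered by elapsed time"
    "top_sql": """
        SELECT query, calls, total_exec_time, mean_exec_time
        FROM pg_stat_statements
        ORDER BY total_exec_time DESC
        LIMIT 10;
    """,
    # commits/rollbacks and buffer hits/reads per database
    "db_activity": """
        SELECT datname, xact_commit, xact_rollback, blks_hit, blks_read
        FROM pg_stat_database;
    """,
    # what active sessions are currently waiting on
    "waits": """
        SELECT wait_event_type, wait_event, count(*)
        FROM pg_stat_activity
        WHERE state = 'active'
        GROUP BY 1, 2;
    """,
}

def snapshot(cursor):
    """Run every monitoring query and collect the rows per metric name."""
    results = {}
    for name, sql in QUERIES.items():
        cursor.execute(sql)
        results[name] = cursor.fetchall()
    return results
```

Diffing two snapshots taken before and after a load-test run gives you deltas comparable to an AWR report interval.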

How do we prove that Aurora PostgreSQL will serve the OLTP requirement here, similar to the current on-premises Oracle Exadata? We are expecting ~15K TPS of writes and ~2K TPS of reads, and UI queries are expected to respond within seconds. But yes, as mentioned, individual transactions will be batched and then written to the database, so this should reduce resource consumption and contention.
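To put those targets in perspective, here is the back-of-the-envelope row-level arithmetic they imply (assuming the ~8 inserts per transaction and ~1000-transaction batches stated earlier):

```python
WRITE_TPS = 15_000        # business transactions per second (target)
INSERTS_PER_TXN = 8       # row inserts per business transaction
BATCH_SIZE = 1_000        # transactions grouped into one DB commit

rows_per_second = WRITE_TPS * INSERTS_PER_TXN
commits_per_second = WRITE_TPS / BATCH_SIZE
rows_per_commit = BATCH_SIZE * INSERTS_PER_TXN

print(rows_per_second)     # 120000 rows/s across the transaction tables
print(commits_per_second)  # only ~15 commits/s thanks to batching
print(rows_per_commit)     # 8000 rows per commit
```

So the benchmark really needs to demonstrate sustained bulk-row ingest (~120K rows/s) rather than 15K individual commits per second, which is a very different (and easier) profile for the WAL/log path.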

To test whether Aurora PostgreSQL can meet the above needs (the expected performance at a reasonable cost), how should we test it? Since we won't be able to test everything right away, should we benchmark basic read and write performance to gain some confidence and then go ahead with development?

Say, for example, one transaction consists of ~8 inserts. We can create sample target tables in Aurora PostgreSQL with the required indexes/constraints and run those inserts from multiple threads (for concurrency) using BlazeMeter, then see/compare the response time, CPU, memory, etc. Similarly, for read performance, we can run multiple SELECT queries from BlazeMeter and compare response times.
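The multi-threaded test described above can also be scripted directly. A minimal harness sketch: `run_transaction` is a stub standing in for the ~8 INSERTs a real driver (e.g. psycopg2) would execute, and BlazeMeter would replace the thread pool, but the latency-percentile reporting is the same idea:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def run_transaction():
    """Stub for one business transaction (~8 INSERTs + commit).
    Replace the sleep with real driver calls against Aurora."""
    start = time.perf_counter()
    time.sleep(0.001)  # simulated server round trip
    return time.perf_counter() - start

def benchmark(threads=16, txns=200):
    """Run `txns` transactions across `threads` workers; report latency
    percentiles and overall throughput."""
    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=threads) as pool:
        latencies = sorted(pool.map(lambda _: run_transaction(), range(txns)))
    wall = time.perf_counter() - t0
    return {
        "p50_ms": statistics.median(latencies) * 1000,
        "p99_ms": latencies[max(int(len(latencies) * 0.99) - 1, 0)] * 1000,
        "tps": txns / wall,
    }

print(benchmark())
```

Whatever tool drives the load, report p99 (not just average) latency, and capture CloudWatch/Performance Insights metrics for the same window so response times can be correlated with CPU, IOPS, and wait events.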

Is the above the correct approach for deciding on the database here?

Another question that comes to mind: I have read in the past that vacuum can be a problem in PostgreSQL. Is it going to cause trouble in Aurora PostgreSQL too, for such a highly transactional read/write system? How do we validate that?
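One way to validate vacuum behaviour during the load test itself is to watch dead-tuple accumulation in `pg_stat_user_tables` (a standard PostgreSQL view, also present in Aurora PostgreSQL). A sketch, with the threshold logic as plain Python so it is easy to wire into any monitoring loop:

```python
# Query to run periodically during the load test (via psql or any driver):
DEAD_TUPLE_SQL = """
    SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC;
"""

def bloat_ratio(n_live: int, n_dead: int) -> float:
    """Fraction of dead tuples; a value that keeps climbing under steady
    load means autovacuum is not keeping up and needs tuning."""
    total = n_live + n_dead
    return n_dead / total if total else 0.0

# e.g. 5M live rows vs 1M dead rows -> ~0.17 dead fraction
print(round(bloat_ratio(5_000_000, 1_000_000), 2))  # 0.17
```

If the ratio stabilizes and `last_autovacuum` advances during the run, vacuum is keeping up; if not, per-table autovacuum settings (scale factor, cost limit) are the usual knobs to adjust.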

Veem
answered 3 months ago
