Saturday, September 19, 2020

Online Transaction History - Database Design Strategies

In today's world of technology, one of the common occurrences in financial services is the concept of omnichannel. The basic premise is that customers can access information related to their accounts (checking/savings/credit/debit/mortgage) through various channels such as:

1. Financial Centers
2. Online Banking
3. Mobile/Phone Applications
4. Statements related to accounts (Mailed)
5. SMS/Email (where applicable)

When account information is presented through different channels like the ones above, it is critical that the customer experience stays consistent. Looking at the technologies used to create such experiences, APIs have made a tremendous amount of penetration. The API layer has succeeded in making customer requests from client applications and phone apps very seamless. These APIs must have very good response times: for example, if I am looking at my account balance through a phone banking app, the results need to come back quickly. Slow response times lead to a bad customer experience. It is essential that the data services behind these APIs are very efficient, which in turn translates to a very good database design (the databases can be on-prem or in the cloud). A lot of the time, when we use these applications or visit financial centers, we take these response times for granted.

Recently I had the opportunity to work on designing a solution for an online/mobile banking channel to display transaction/statement information.
The data was going to be accessed by the client applications through API/web service calls, and it resided on an Oracle Exadata platform.

The transaction information came from a vendor feed that gets ingested into the Exadata database. To provide the information to the client, a process had to run on the production database to aggregate the transaction data. The challenge: while these processes are running, if a client tries to access their transaction information, how do you make sure the service call does not break or return distorted results? Information still needed to be served to the customer without any time lag. To achieve this we had two options:

1. Perform a SYNONYM SWAP as part of the Oracle procedure that aggregates the information. In this scenario, client queries read through a synonym that is repointed to a freshly loaded table; see the example in this link: https://dba.stackexchange.com/questions/177959/how-do-i-swap-tables-atomically-in-oracle (a minimal sketch also appears after this list). We initially went with this option, reloading the data every day, but we started to see service call failures, although only at the moment the synonym swap happened.
2. Perform delta processing every day and merge only the changed/new records into the main table, using batch sizes during the final merge so that records are ingested into the main table in small chunks, which minimizes resource contention. No synonym swap is performed. With this option the job took a little longer to run to completion, but there was no disruption of the service, and the SLA was well within what the customer expected. To identify the accounts that had changed, we maintained a control table that lists the tables involved in the processing and captures the accounts that changed in them (a sketch of this approach follows the first one below). This is the option we ultimately used.
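
To make option 1 concrete, here is a minimal sketch of one common way to implement a synonym swap. Every object and column name here (txn_history, txn_history_a/txn_history_b, txn_staging) is a made-up placeholder, not our production schema:

-- Two identical tables hold the aggregated data; a synonym points
-- client queries at whichever copy is currently live.
CREATE TABLE txn_history_a (
  account_id NUMBER,
  txn_id     NUMBER,
  amount     NUMBER,
  descr      VARCHAR2(200)
);
CREATE TABLE txn_history_b AS SELECT * FROM txn_history_a WHERE 1 = 0;

-- Client/API queries always go through the synonym.
CREATE OR REPLACE SYNONYM txn_history FOR txn_history_a;

-- Daily reload: refresh the idle copy while the other serves reads...
TRUNCATE TABLE txn_history_b;
INSERT /*+ APPEND */ INTO txn_history_b
  SELECT account_id, txn_id, amount, descr FROM txn_staging;
COMMIT;

-- ...then repoint the synonym. The swap is near-instant, but the DDL
-- invalidates dependent cursors, which lines up with the brief
-- service call failures we saw at swap time.
CREATE OR REPLACE SYNONYM txn_history FOR txn_history_b;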

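Here is a similar sketch of option 2's batched delta merge, with the same caveat: the control table (changed_accounts), staging table, column names, and batch size are all illustrative assumptions:

-- Control table: records which accounts changed in the day's vendor
-- load, so the merge touches only those accounts.
CREATE TABLE changed_accounts (
  account_id   NUMBER,
  source_table VARCHAR2(30),
  process_date DATE
);

-- Batched delta merge: small chunks keep locking and undo modest, so
-- concurrent API reads against txn_history are not disrupted.
DECLARE
  c_batch_size CONSTANT PLS_INTEGER := 1000;  -- assumed; tune per workload
  CURSOR c_changed IS
    SELECT DISTINCT account_id
    FROM   changed_accounts
    WHERE  process_date = TRUNC(SYSDATE);
  TYPE t_acct_tab IS TABLE OF changed_accounts.account_id%TYPE;
  l_accts t_acct_tab;
BEGIN
  OPEN c_changed;
  LOOP
    FETCH c_changed BULK COLLECT INTO l_accts LIMIT c_batch_size;
    EXIT WHEN l_accts.COUNT = 0;

    -- Upsert one batch of changed accounts from staging into the main table.
    FORALL i IN 1 .. l_accts.COUNT
      MERGE INTO txn_history t
      USING (SELECT account_id, txn_id, amount, descr
             FROM   txn_staging
             WHERE  account_id = l_accts(i)) s
      ON (t.account_id = s.account_id AND t.txn_id = s.txn_id)
      WHEN MATCHED THEN
        UPDATE SET t.amount = s.amount, t.descr = s.descr
      WHEN NOT MATCHED THEN
        INSERT (account_id, txn_id, amount, descr)
        VALUES (s.account_id, s.txn_id, s.amount, s.descr);

    COMMIT;  -- commit per batch to keep any contention window short
  END LOOP;
  CLOSE c_changed;
END;
/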
These were a couple of options out of the several we experimented with, and we went with option 2. It is very critical to design your database according to the expectations of the online/mobile applications.
If you happen to read this post on my blog and have any other suggestions, please leave a comment and I will definitely look into it.

