
Saturday, September 19, 2020

Online Transaction History - Database Design Strategies

In today's world of technology, one of the common concepts in financial services is the omni-channel experience. The basic premise is that customers can access information related to their accounts (checking, savings, credit, debit, mortgage) through various channels such as:

1. Financial Centers
2. Online Banking
3. Mobile/Phone Applications
4. Statements related to accounts (Mailed)
5. SMS/Email (where applicable)

When account information is presented through the different channels above, it is critical that the customer experience stays consistent. Looking at the technologies used to create such experiences, APIs have made a tremendous amount of penetration. The API layer has succeeded in making customer requests from client applications and phone apps very seamless. These APIs must have very good response times: for example, if I am looking at my account balance through a phone banking app, the results need to come back quickly. Slow response times lead to a bad customer experience. It is therefore essential that the data services behind these APIs are very efficient, which in turn translates to having a very good database design (the databases can be on-prem or in the cloud). A lot of the time, when we use these applications or visit financial centers, we take these response times for granted.
Recently I had the opportunity to design a solution for an online/mobile banking channel to display transaction and statement information. The data was accessed by client applications through API/web service calls, and it resided on an Oracle Exadata platform.

The transaction information came from a vendor feed that was ingested into the Exadata database. In order to provide the information to the client, a process had to run on the production database to aggregate the transaction data. The challenge was: while these processes are running, if a client tries to access their transaction information, how does one make sure there is no distortion or breaking of the service call? The information still needed to be served to the customer without a time lag. To achieve this we had two options:

1. Perform a SYNONYM SWAP as part of the Oracle procedure that aggregates the information. In this scenario, the data is loaded into a shadow table and the synonym is repointed at the end; see the example in this link: https://dba.stackexchange.com/questions/177959/how-do-i-swap-tables-atomically-in-oracle
We initially tried this option, reloading the data every day, but we started to see service call failures at exactly the moment the synonym swap happened.
2. Perform delta processing of records every day and merge the changes into the main table, using batch sizes during the final merge so that records are ingested into the main table in small chunks, minimizing any contention for resources. In this option we processed only new/changed records and did not perform any synonym swap. Although the job took a little longer to complete, there was no distortion of the service, and the SLA was well within what the customer expected. To identify the accounts that had changed, we maintained a control table listing the tables involved in the processing and capturing the accounts that changed in them.
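The synonym swap in Option 1 can be sketched as follows. This is a minimal illustration in the spirit of the linked Stack Exchange answer; the table and synonym names (`txn_summary`, `txn_summary_a`, `txn_summary_b`) are hypothetical, not the names from the actual project:

```sql
-- Two physical copies of the aggregated data; clients only ever
-- query the synonym, never the underlying tables directly.
CREATE TABLE txn_summary_a (account_id NUMBER, txn_id NUMBER, amount NUMBER);
CREATE TABLE txn_summary_b (account_id NUMBER, txn_id NUMBER, amount NUMBER);
CREATE OR REPLACE SYNONYM txn_summary FOR txn_summary_a;

-- The nightly aggregation job truncates and reloads the INACTIVE copy
-- (here txn_summary_b), then repoints the synonym in a single DDL step:
CREATE OR REPLACE SYNONYM txn_summary FOR txn_summary_b;
```

Note that `CREATE OR REPLACE SYNONYM` is DDL, so it invalidates dependent cursors at the moment of the swap, which is consistent with the brief service call failures we observed with this option.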

These were the two options we experimented with and narrowed down to, and we went with Option 2. It is very critical to design your database according to the expectations of the online/mobile applications.
If you happen to read this post and have any other suggestions, please leave a comment and I will definitely look into them.
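A batched delta merge along the lines of Option 2 could be sketched in PL/SQL like this. All names here (`changed_accounts`, `txn_summary_delta`, `txn_summary`) and the batch size are hypothetical stand-ins, not the actual project's objects:

```sql
-- changed_accounts: control table recording which accounts the day's
-- vendor feed touched. txn_summary_delta: staged new/changed rows.
DECLARE
  CURSOR c_changed IS SELECT account_id FROM changed_accounts;
  TYPE t_ids IS TABLE OF changed_accounts.account_id%TYPE;
  l_ids t_ids;
BEGIN
  OPEN c_changed;
  LOOP
    FETCH c_changed BULK COLLECT INTO l_ids LIMIT 1000;  -- batch size
    EXIT WHEN l_ids.COUNT = 0;
    FORALL i IN 1 .. l_ids.COUNT
      MERGE INTO txn_summary t
      USING (SELECT d.account_id, d.txn_id, d.amount
               FROM txn_summary_delta d
              WHERE d.account_id = l_ids(i)) s
         ON (t.account_id = s.account_id AND t.txn_id = s.txn_id)
       WHEN MATCHED THEN
         UPDATE SET t.amount = s.amount
       WHEN NOT MATCHED THEN
         INSERT (account_id, txn_id, amount)
         VALUES (s.account_id, s.txn_id, s.amount);
    COMMIT;  -- commit per batch: short transactions keep lock
             -- contention low for concurrent API readers
  END LOOP;
  CLOSE c_changed;
END;
/
```

Because each batch commits independently, readers never contend with one long-running transaction, which is why this option avoided the service distortion seen with the synonym swap.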


Thursday, September 20, 2012

Database Modelling...ER Studio...

One of the important aspects of database design is modelling the database. There are two types of models: the logical model and the physical model. It is important to model the data that needs to be analysed or reported. A model provides an abstract view of the tables present in the database; in modelling terms we talk about entities and relationships. Good modelling helps one understand the value of the data beneath it and also provides a better view of the business that we are trying to capture in terms of data. Most of the time, ER models are reverse engineered; rarely is the model created first and the database designed from it, although sometimes a model gives one a fair idea of the entities that will be present in the database. I recently started working on a data sourcing project, and one of the requirements was to have an ER model of the data we are receiving from the source. To generate the ER model I decided to use ER/Studio Data Architect Version 8.5.3 from Embarcadero Technologies.
http://www.embarcadero.com/.
The install was pretty straightforward. Once I launched ER Studio, I realised that there are a lot of options available in this tool. There are three different options for creating a model:
Draw a new data model: There are two options, Relational and Dimensional.
Reverse Engineer an existing Database: Here one connects to a database and chooses the tables that need to be modeled. This is one of the most used options.
Import Model From: This can be done from an ERX file, a SQL file, or external metadata.


I chose the option of reverse engineering an existing database. This is done by connecting to a SQL Server database, which is set up using the ODBC setup option in the Tools menu on the main screen of ER Studio.

Once connected to the database, I chose the tables I needed to model from the list, and ER Studio takes you through a couple of screens before finishing by generating the ER model. In these screens there are options to choose relationships based on foreign keys. Overall, there are a lot of features available in ER Studio, which makes it a full-fledged modeling tool.