Tuesday, December 15, 2020

AI/ML Framework

There is a lot of focus on Artificial Intelligence and Machine Learning in various organizations, and they want to get better insights and value out of such efforts. Significant investments are made in terms of time and money to get AI projects started and delivered. One key aspect to keep in mind: the success of such projects depends on sourcing the right data, having the right data governance, and making sure such efforts align with the business goals. Given such an environment, it is imperative that there is a general framework in place for how AI projects are executed, so that truly value-added deliverables are achieved. In the data world we have the following broad roles trying to make an AI project a success.

Possible personas in a data project:

1. Data Engineers
2. Data Analysts
3. Data Scientists
4. AI/ML Developers, Model Creators
5. End users/Reviewers of Model for Compliance and regulations

Each of the above personas is involved in different stages of a data project and would be utilizing different tools to achieve the end goal. When we talk about different stages, the following steps would be present in any data project:

1. Sourcing data - sample tools: IBM DataStage, Informatica; structured databases: Oracle, SQL Server, Teradata; unstructured data: Instabase
2. Organizing data / metadata management / lineage analysis / cataloging - Trifacta/Alteryx for data wrangling/prep; Atlan/Collibra for data catalog/governance
3. Building models / experiments / analysis - SAS, Jupyter (Python), R, H2O
4. Quality checks and deployment of ML models - model frameworks such as H2O, Python, R
5. Consumption of data by end users - Tableau, Cognos, MicroStrategy

All of the above require a storage and consumption component; these could be Hadoop/Spark, AWS/Snowflake, Azure, or Google Cloud.
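The stages, personas, and tools above can be sketched as a simple structure. This is purely illustrative; the stage names and tool lists are taken from the post, while the persona assignments per stage are my own assumption:

```python
# Hypothetical mapping of pipeline stages to personas and tools.
PIPELINE_STAGES = [
    {"stage": "Sourcing data",
     "personas": ["Data Engineers"],
     "tools": ["IBM DataStage", "Informatica", "Oracle", "SQL Server", "Teradata", "Instabase"]},
    {"stage": "Organize data / cataloging",
     "personas": ["Data Engineers", "Data Analysts"],
     "tools": ["Trifacta", "Alteryx", "Atlan", "Collibra"]},
    {"stage": "Build model / experiments",
     "personas": ["Data Scientists", "AI/ML Developers"],
     "tools": ["SAS", "Jupyter (Python)", "R", "H2O"]},
    {"stage": "QC and deployment of ML models",
     "personas": ["AI/ML Developers"],
     "tools": ["H2O", "Python", "R"]},
    {"stage": "Consumption by end users",
     "personas": ["End users/Reviewers"],
     "tools": ["Tableau", "Cognos", "MicroStrategy"]},
]

def tools_for_persona(persona):
    """Return every tool a given persona might touch across the pipeline."""
    return sorted({t for s in PIPELINE_STAGES
                   if persona in s["personas"] for t in s["tools"]})
```

Even a lightweight inventory like this makes it easier to see which personas depend on which tools at each stage.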

Organizing the different tool sets and personas provides an overview of what is available and how AI projects could be structured to deliver effective value. For example, if there is a strong need in a financial institution to consume unstructured data and derive data from it, one could look at leveraging the Instabase platform. Quoting from the site: "The biggest opportunity for efficiency within a business is stifled by document processes that only humans can do. With Instabase, you can build workflows to access information in complex documents and automate business processes." https://about.instabase.com/
Providing a platform where all of these tools can be used, so that different personas have access and can move data from one point to another, would be of great help for any data project. This would allow consistency of operation, help in tracing data, provide better model validation, and make sure any audit/compliance requirements are met.
Successful data projects that include AI/ML have the above ingredients in the right mix, well tracked/cataloged; they also take care of changes in data over time and closely align with the business objectives.
Happy Holidays and a very happy, pandemic-free New Year 2021. Stay safe, everyone.

Monday, December 7, 2020

Snowflake - UI Components

Snowflake is fast becoming a very important component of cloud migration strategies in the business/technology world today. In my discussions with different leaders and participation in different conferences, there is a lot of interest in Snowflake. One good thing is that there is a trial period for Snowflake which one can sign up for on the Snowflake web site. The sign-up process for the trial is very straightforward, and once you have it set up, you are provided with an option to go through the introductory material related to Snowflake; there is some very good documentation covering the different areas of the product. Once you sign in to Snowflake you are presented with the main interface. There are different components listed there; a lot of it looks very similar to the SQL Server Management Studio layout, especially the object explorer. The main components in the UI are:

1. Databases - Lists the databases in the Snowflake instance.
2. Shares - This is related to data sharing within your organization.
3. Data Marketplace - This option allows one to look at the Snowflake Data Marketplace, which lists public data sources available under different categories like Government, Financial, Health, and Sports.
4. Warehouses - Lists the warehouses available on the Snowflake instance.
5. Worksheets - Where one can write SQL queries. There is a sample database called DEMO_DB which has a list of tables that can be used for querying. These queries can also be used for building out the dashboards available in the Data Marketplace option.
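As a sketch of what a worksheet query might look like, here is one wrapped in Python. The table and column names are hypothetical (they are not taken from the trial's actual sample tables), and the connection helper assumes the snowflake-connector-python library, which needs account credentials and so is not exercised here:

```python
# Illustrative worksheet query; DEMO_DB.PUBLIC.CUSTOMER and its columns
# are assumptions, not the trial database's actual schema.
SAMPLE_QUERY = """
SELECT c_custkey, c_name, c_acctbal
FROM DEMO_DB.PUBLIC.CUSTOMER
ORDER BY c_acctbal DESC
LIMIT 10
"""

def run_query(conn, sql):
    """Execute a query on an open Snowflake connection and return all rows.
    `conn` would come from snowflake.connector.connect(...), so it is not
    created in this sketch."""
    with conn.cursor() as cur:
        cur.execute(sql)
        return cur.fetchall()
```

The same SQL text can be pasted directly into a worksheet in the UI.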

One can also load data into Snowflake based on the different data loading strategies available, which have been discussed in an earlier blog post of mine: https://www.blogger.com/blog/post/edit/2437651727370625818/3834832172078982453

Wednesday, November 18, 2020

Data Cloud

One of the more commonly used buzzwords these days is Data Cloud; it has been used as a marketing term mainly in the cloud domain across different businesses/organizations. There is a concept underneath the term: it is mainly aimed at having data available in, or migrated to, public cloud offerings such as Amazon, Azure, and Google Cloud. One of the key projects undertaken by lots of organizations across different businesses is how to have data available in the cloud without compromising on the security aspect of data. I had the opportunity to listen to the Data Cloud Summit 2020 organized by Snowflake. It was a virtual event where features of Snowflake were discussed in different sessions. There were also use cases presented by different customers and vendor partners on how they are utilizing Snowflake for their data projects and how much impact this product has had on their business. There were some interesting points I picked up from different sessions; I am listing them below. They cover a variety of topics related to data.

1. Compute/Storage: Snowflake separates compute from storage. This is one of the main concepts in the product, highlighted by a lot of customers in terms of how it helps them in their daily data operations and business.
2. Scalability - The ability to ingest multiple workloads; this is a common requirement across all customers.
3. Simplify: Simplification of the data pipeline. How can one take raw data and turn it into actionable insights quickly, reducing the elapsed time? One of the questions raised: does all the transformation of the data have to happen during the early hours of the morning? Can it be spread out or done in real time?
4. Data Silos: Breaking down data silos is a significant effort being undertaken by different organizations. Data silos have a direct/indirect negative impact on cost and efficiency. One of the reasons for using a product like Snowflake is to break down the data silos by having data in one place. This allows better understandability and searchability of the data in an organization.
5. Proof of Value: Data cloud products or cloud offerings need to provide proof of value. It has to be tangible for the business: how does the investment in cloud provide better results for the business?
6. Orchestration: Since the movement to cloud infrastructure is taking place at different paces, there needs to be better orchestration across multiple cloud installations. This can lead to better abstraction, and this is the challenge a lot of companies are facing today.
7. Data is an Asset: Data can be monetized in the following ways: generating value for the business and reducing costs.
8. Support: Snowflake provides good support tools and is cost effective. Some of the customers explained how the uptime of Snowflake has been very good in spite of the huge data loads coming into the system.
9. Data: What type of information needs to be sent out/provisioned? One of the guests mentioned there are two important aspects with respect to data: 1. information that a person needs to know, and 2. how the information will affect you.

Overall, a lot of information for a single-day event; I am sure each one of the aspects mentioned above can lead to deeper discussions and/or projects. The event provided an overall perspective of where things are headed in the data space and how companies are planning their work in the coming years.

Monday, November 9, 2020

Product Manager - Thoughts/Observations

One of the professions/roles that is talked about, discussed, and in demand, especially in the technology world, is the role of a Product Manager. There are a lot of inquiries and need for Product Managers; the recent COVID crisis has also challenged businesses, and hence a lot of Product Managers lost jobs. An interesting trend I have noticed is that product managers range from folks with a couple of years of experience to folks with 10 years or more of work experience. It is a very broad spectrum, and hence a lot of questions are raised around who can be a good Product Manager. I have also noticed that some of the folks who want to become a Product Manager are the ones who do not want to code in certain areas of business. Let me try to take a deeper look and pen down my observations. In some cases Product Manager roles have become glamorous, in the sense that it feels nice to say one is a PM.

A Product Manager, in my discussions with colleagues/professionals, is a very important and crucial role in an organization. The role is at the intersection of the following:

1. Business
2. Technology
3. Customers

So a person approaching a PM role needs to understand the dynamics of the above three components and how they work together: first, what is the primary business of the organization and what products does it have for customers; secondly, what type of technology is used to build the products; thirdly, who are your customers. In summary, one needs to understand the high-level picture and also the details behind what is being delivered.

Let us dwell a little deeper into each of the components:
  • Business - Understand the business strategy for the company and the line of business. Get a gauge on the stakeholders; understand the budget/resources that could be available for your product in terms of development/research/maintenance; know the interfacing units and dependencies; and understand how the company is performing financially and what its target markets are.
  • Technology - Understand the tools being used to develop the product: what type of vendor lock-in is there, or is it based on an open-source architecture with less vendor lock-in? In terms of data, what are the data sources; are they very disparate or well integrated; are there opportunities to streamline the data? One important aspect being experimented with today is whether product management can be totally data driven, with decisions justified by data. This is going to be even more important in a data/information-filled world.
  • Customers - Get continuous feedback from customers by conducting surveys and talking to them about product usage and the issues they face. Conduct usability studies and feed the results back into the product backlog. Adopt an agile approach to building the product and collaborate with customers to get proper engagement.
In summary, Product Manager is an exciting but challenging role, and it is imperative that one has the proper grooming/mentoring to get to a PM role. There is a lot of temptation to cut corners (like "I won't do certain things...") to achieve it, but the consequences can be devastating and could erode self-confidence. It would be best to have a plan of action and a set of goals, and to work with a mentor to achieve the results.

Sunday, October 25, 2020

Unlocking Insights-Data Quality

One of the main buzzwords we constantly hear about is insights, or unlocking insights from data. This has been one of the main selling points when it comes to selling technology to the business. The sophistication of tools is a welcome feature for unearthing insights; at the same time, what are the critical components needed to get meaningful insights? One fundamental requirement is to have a solid data pipeline end to end. In order to have this, the following need to be in place:

1. Data Governance/Lineage.
2. Metadata Quality/Entities.
3. Valid Domain Value Chain.
4. Customer Data (Profile/Accounts/Interactions via different Channels)
5. Data Quality including Workflow/Validation/Data Test Beds/Deployment.
6. Track Data Assets Related to a domain.
7. Business Data Owner - A person or a group of people who can help identify the business purpose/meaning of all the data points in the domain.
8. Ability to Handle Technical Debt - How to systematically handle technical debt; a very common scenario in organizations grown by mergers and acquisitions.
9. Scale, Share and Speed - Can the available architecture and infrastructure handle the frequency/speed of data requests by the business?

The elements mentioned above are very important; a good interplay among them is needed in order to generate valid insights. For insights there are two main components:
1. Insight Rules - Rules which are executed when certain events happen and certain business conditions are met.
2. Insight Triggers - Capture data points when certain events happen; for example, a credit card transaction was made at Lowe's or Home Depot, someone paid an SAT entrance exam fee, or a mobile deposit was made. As part of this process there are also selection criteria around how the transactions are picked, including whether the insights are going to be triggered on a daily, weekly, or monthly basis.
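The two components can be sketched in a few lines of code. This is a toy model of the idea, assuming a hypothetical transaction shape and rule names; it is not any particular insight platform's API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class InsightRule:
    """A rule executed when an event happens and a business condition is met."""
    name: str
    condition: Callable[[dict], bool]  # the business condition to evaluate
    message: str

def insight_trigger(transaction, rules):
    """Capture a data point for an event and return the insights whose
    business conditions are satisfied by it."""
    return [r.message for r in rules if r.condition(transaction)]

# Hypothetical rule: flag home-improvement spending.
home_improvement = InsightRule(
    name="home_improvement_spend",
    condition=lambda t: t.get("merchant") in {"Lowe's", "Home Depot"},
    message="Customer may be starting a home project",
)

insights = insight_trigger({"merchant": "Home Depot", "amount": 240.0},
                           [home_improvement])
```

A daily, weekly, or monthly schedule would simply decide how often `insight_trigger` runs over the selected transactions.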

The combination of the above two components can help generate insights, assuming the elements mentioned earlier are satisfied or in place. It would also be advisable to categorize the insights based on the domain so that it is easier to track and maintain them. Constant mining of data is done in order to generate accurate insights.
AI and ML are used very heavily when generating insights, and their effectiveness becomes more apparent if the underlying data infrastructure is really solid.
The purpose of this blog post is to highlight the importance of the solid data foundations needed to generate valuable insights for business and customers.

Wednesday, October 21, 2020

AI in Mortgage

AI has been permeating different aspects of life, business, and technology, and more sophisticated implementations of AI are seeing the light of day. Gains have been made with AI in terms of value-added propositions across different types of business. One of the areas where there has been a lot of discussion and debate about the use of AI is the field of mortgage. There have been a lot of automated tools and chatbots, such as Quicken's Rocket Mortgage, and companies have been trying to implement their own versions of a digital experience in the mortgage space. One of the challenges in mortgage is that the processes are still complex; traditional methods are still being followed, and there are a lot of dependencies given the wide range of information needed for a mortgage. Three components need to come together in order to implement AI in mortgage: people, process, and technology. In mortgage processes, when you apply for a loan or refinance one, a lot of documents are usually needed. The processes for handling these range from sluggish to pretty decent, and they do take quite a bit of time. Apps like Rocket Mortgage and other bank offerings do seem to alleviate some of the pain points in this process. The other approach being utilized to improve process efficiency is moving to cloud platforms, hopefully streamlining the data available from different data sources.

There are a couple of ways to bring AI methods into the mortgage space. One is to develop in-house methods using AI and ML techniques to automate the mortgage process. The other option is to use an API available in an API marketplace and enhance the process. Given the recent developments in AI, Google has come up with an API called Lending DocAI, which is meant to help mortgage companies speed up the process of evaluating a borrower's income and asset documents, using specialized machine learning models to automate routine document reviews. It is mentioned here: https://techcrunch.com/2020/10/19/google-cloud-launches-lending-docai-its-first-dedicated-mortgage-industry-tool/. More details on the API are available here: https://cloud.google.com/solutions/lending-doc-ai. It is good to see companies like Google coming up with industry-specific API offerings which can help improve efficiencies; I expect to see more along the same lines from other tech companies looking to solve business problems.

Friday, October 16, 2020

Workflow, Data Masking - Data Ops

DataOps is becoming more prevalent in today's data-driven projects, due to the speed at which these projects need to be executed while remaining meaningful at the same time. There are tools in the DataOps space that provide a lot of different features; companies like Atlan and Zaloni are very popular here, and in fact Atlan was named in the Gartner 2020 DataOps vendors list. Coming to the different features needed in these tools, two concepts are becoming very important: data masking and workflows. It is very well known that in data-driven projects, testing with valid subsets of data is critical. One of the biggest challenges faced today in data projects is the availability of test data at the right time in order to test functionality; usually it takes a whole process to get test beds ready.

With DataOps tools, one of the features promised is data masking/obfuscation, which means production data can be obfuscated and made available quickly for testing. In the data masking process there is the concept of identifying data elements categorized as NPI or confidential and obfuscating those elements. DataOps tools provide mechanisms where masking can be done very quickly, which really helps the process of testing in test environments. The impact becomes more visible when one is working on major projects where testing has to be done through multiple cycles, and also if one is in an agile environment. One of the leading data analytics experts, Sol Rashidi, mentions the 3 S's - Speed, Scale, and Shareability - as what is expected from data projects, apart from providing business value. In order to satisfy these requirements, data masking being made available in DataOps tools is very welcome indeed.
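A minimal sketch of column-level masking might look like the following. It assumes the NPI columns have already been identified (for example, by a catalog's classifications); the column names, salt, and record shape are all hypothetical, and a real DataOps tool would apply this at much larger scale:

```python
import hashlib

# Columns assumed to have been classified as NPI/confidential.
NPI_COLUMNS = {"ssn", "account_number", "email"}

def mask_value(value: str, salt: str = "per-env-secret") -> str:
    """Deterministically obfuscate a value: joins across masked tables still
    line up, but the original NPI value is not recoverable in test."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

def mask_record(record: dict) -> dict:
    """Copy a production record, obfuscating only the NPI columns."""
    return {k: mask_value(str(v)) if k in NPI_COLUMNS else v
            for k, v in record.items()}

prod_row = {"customer_id": 42, "ssn": "123-45-6789", "balance": 1500.25}
test_row = mask_record(prod_row)
```

Deterministic hashing (rather than random substitution) is one common design choice because referential integrity between masked tables is preserved.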

The other concept I wanted to discuss here is workflows in DataOps. When we look at data flow in general, there are source systems, data is collected into a hub/data warehouse, and then data is provisioned out to different applications/consumers. To achieve this, typically a lot of time is spent developing ETL flows, moving data into different databases, and curating the data to be provisioned. This involves a lot of time, cost, and infrastructure. To alleviate these challenges, DataOps tools today introduce a concept called workflows. The main idea is to automate the flow of data from source to target and, in addition, execute data quality rules, profile the data, and prepare the data for consumption by various systems. Workflows emphasize the importance of data quality checks, which are much more than data validations; these can be customized to verify the type of data that needs to be present in each data attribute. When performing data quality checks in the workflow, the tools also provide the ability to set up custom DQ rules and send alerts to the teams who provide the data. A couple of vendors offer the workflow functionality: Zaloni in its Arena product, and Atlan in its trial offering, hopefully in production soon. Working with quality is fundamental for any data project, and building a good framework with DataOps tools provides the necessary governance and guardrails. Such concepts will go a long way in setting up quality data platforms, which are essential for AI and machine learning initiatives.
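The workflow idea, with custom DQ rules and alerts, can be illustrated with a toy sketch. The rule names, record shapes, and alert mechanism are all made up; real products implement this declaratively rather than in hand-written loops:

```python
def dq_not_null(field):
    """A custom DQ rule: the given attribute must be present and non-null."""
    return lambda row: row.get(field) is not None

def run_workflow(rows, dq_rules, alert):
    """Move rows from source toward target, applying DQ rules on the way.
    Rows that fail any rule are rejected and an alert is raised for the
    team that provided the data, instead of silently loading bad rows."""
    loaded, rejected = [], []
    for row in rows:
        failures = [name for name, rule in dq_rules.items() if not rule(row)]
        if failures:
            rejected.append(row)
            alert(f"DQ failure {failures} on {row}")
        else:
            loaded.append(row)
    return loaded, rejected

alerts = []
rules = {"account_id_present": dq_not_null("account_id")}
loaded, rejected = run_workflow(
    [{"account_id": 1, "amount": 10}, {"amount": 99}],
    rules, alerts.append)
```

The key property is that every row is accounted for: it either lands in the target or in a rejected set with an alert attached.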


Tuesday, October 13, 2020

Data Driven Culture/Product Management

There are two topics I see discussed heavily today in my connections/network and in summits/round tables. One is implementing a data-driven culture: how to generate valuable insights using data, applying AI and machine learning. The other is product management: there are a lot of sessions/talks about this topic, and also a lot of people wanting to become Product Managers. In a sense it seems like Data Analyst, Data Scientist, and Product Manager are very glamorous titles to have. They are very responsible positions, and care needs to be taken to make sure that one develops the needed skills for these jobs. I would like to dwell a little further on these positions.

A data-driven culture is more easily said than done; it requires a combination of top-down and bottom-up approaches. There has to be a complete embrace of the ideology by leadership, business, and technology. Everyone needs to understand what needs to be done with the data and the end state of data projects, and, most importantly, have the willingness to collaborate. Such a culture enables better architecting of the infrastructure, good data governance/management, and the ability to choose the right infrastructure and platform. The focus needs to be on the value add rather than just simple cost cutting; there are going to be times where certain transitions cost money but have an eventual payoff later. This also brings up the point of being able to use AI in a responsible manner.

Since there is a lot of emphasis on data, it also feeds into product management. Data can be used very effectively to build products and get feedback on them. Data can be a strong asset for improving customer experience and also for providing value behind the features. The type of data represented in the product, or used to build it, indicates the importance of data. Data can help with quantifiable measures, which help in gauging how well the product is doing. There are different ways of getting feedback, like user surveys and hackathons combined with interviews, which can be very useful for product management. Being aware of such techniques helps in grooming oneself for product management. It is a very important role at the intersection of business, customers, and stakeholders.

Product management and DataOps/data-driven culture will increasingly co-exist in the future, so focus on deriving valuable insights from data and make sure the data culture is built to facilitate such initiatives.

Monday, October 5, 2020

Dataops - What is Data Ops...

We live in a world of metaphors; new terms and metaphors are heard every day, and with that come a lot of confusion, pressure, and some amount of chaos. It is important to filter out the noise and focus on the needs of the business, customers, and stakeholders. There are continuous attempts to streamline data projects, the reason being there are a lot of unwanted costs, project delays, and failed implementations. The whole purpose of data projects should be focused on adding value for the business, improving customer experiences, and better integration of systems. In the agile world we have heard of DevOps as a way to provide continuous integration and continuous deployment; similarly, DataOps has emerged. What is DataOps?

As defined by Dataops manifesto: https://www.dataopsmanifesto.org/:
Through firsthand experience working with data across organizations, tools, and industries we have uncovered a better way to develop and deliver analytics that we call DataOps. 
Very similar to the agile manifesto, there are principles around DataOps. To facilitate DataOps, there are tools available in the market today that try to tackle its different aspects. Some of the major areas in DataOps include:

Data Quality - Very important: the ability to perform simple to complex data quality checks at the time of data ingestion. Data quality needs to be implemented as part of workflows, wherein the data engineer can track the records that were imported successfully and remediate the records that failed.
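Ingestion-time checks that go beyond null validation can be sketched as a per-attribute schema. The attribute names and expected formats below are illustrative assumptions, not any specific tool's configuration:

```python
from datetime import datetime

# Each attribute declares the shape of data it expects (illustrative).
SCHEMA = {
    "trade_date": lambda v: bool(datetime.strptime(v, "%Y-%m-%d")),
    "amount":     lambda v: float(v) >= 0,
    "currency":   lambda v: v in {"USD", "EUR", "GBP"},
}

def ingest(records):
    """Split a batch into successfully imported records and failures that a
    data engineer can then track and remediate."""
    ok, failed = [], []
    for rec in records:
        try:
            if all(check(rec[col]) for col, check in SCHEMA.items()):
                ok.append(rec)
            else:
                failed.append(rec)
        except (KeyError, ValueError):
            failed.append(rec)
    return ok, failed
```

Keeping the failed records (rather than dropping them) is what makes remediation possible later.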

Workflows - Ability to track data from sourcing to provisioning including the ability to profile, apply DQ Checks. Workflows need to be persisted.

Data Lineage - Ability to track how data points are connected from source systems all the way to provisioning systems.

Metadata Management - Categorizing all the different business and logical entities within a value chain, and also having a horizontal view across the enterprise.

Data Insights - Based on the aspects mentioned above, the ability to generate valuable insights and provide business value for customers/stakeholders.

Self Service - DataOps also relies on building platforms wherein different types of personas/users are able to handle their requests in an efficient manner.

Handle the 3 D's - Technical Debt, Data Debt, and Brain Debt. I would like to thank Data Engineer/Cloud Consultant Bobby Allen for sharing this concept with me. It is extremely important to handle these while taking up data projects.

Ability to Build and Dispose of Environments - Data projects rely heavily on data; the ability to build environments for data projects and quickly dismantle them for newer projects is key.

It is very important to implement DataOps in terms of the value it adds for the business and how data will improve the customer experience.

There are tools that implement DataOps; some of the tools already in the market are Atlan and Amazon Athena.

Sunday, September 27, 2020

Data Discovery Tools

In today's world, data is the new asset, or as some say, the new oil. Whether it is an asset or the new oil depends on how much valid information/insight is derived from the data assets. In order to do a viable data project, or for the data to be useful to the business, it is extremely important to understand the data. This is where data discovery comes in; in the past few years there have been significant developments in this domain. Earlier, doing data discovery was a lot of grunt work with very manual processes, and updating metadata information was very time consuming. One of the data discovery products that I have been looking at and closely following is Atlan, which I briefly mentioned in my earlier blog; the link is http://www.atlan.com. I signed up for an onboarding trial with Atlan, and the whole onboarding process was very smooth; folks from Atlan guided me through it. I was very excited to see what the product has to offer, given the pain points we have in our current process.

Once I logged in, I was presented with a Google-like search interface, with options for Discover, Glossary, Classification, and Access on the left side of the home page. In the search bar you type in the data asset that you want to search for; one critical step here is that you must have connected Atlan to a public cloud provider like Amazon or Azure - in my case it was connected to a Snowflake DB/warehouse. When you click the search button, all the data assets related to the search term are pulled up. The first thing I noticed is that it provides a snapshot of the row count and the number of columns.

When you click on a table, you are presented with a preview window with data and column information on the right; below that you have classification, with owner and SME information. Seeing all of this information in one window provides a lot of efficiency and helps one start getting some context around the data. In the column list there is also a description for each column, which can be edited and updated. As an analyst/business user, this feature is extremely useful. Above the data preview window you are provided with Query/Lineage/Profile/Settings options; each of these has deeper functionality when you click on it. The interface flows very logically and is set up in such a way that all operations related to data discovery and analysis can be done in this tool. I will write a follow-up blog post as I explore the lineage aspect of the tool much more.

One of the key aspects of ensuring a solid foundation for a data project is to have a very good metadata/glossary of the data points. This would contain business entities, logical entities, and relationships, along with lineage. In Atlan, this is accomplished by using the Glossary option available on the left pane of the dashboard. As part of the Glossary one can add Categories and Terms. The Categories can be used for setting up business value chains, business/logical entities, sourcing, API, and provisioning, which in turn provide context around the data. The Terms are useful for identifying individual data elements and can also be linked back to the actual tables/columns; the link feature is also available for Categories. Atlan also provides a method to bulk load glossary items based on a template that can be downloaded for Categories and Terms.

More coming as I dig deeper into some of the use cases... Keep learning, keep growing.

Monday, September 21, 2020

Snowflake - Data Loading Strategies

Snowflake is a key player in the cloud database offering space, along with Redshift, which is an Amazon offering. Interestingly, Snowflake uses Amazon S3 for storage as part of the Amazon cloud offering, while Amazon continues to promote Redshift. It is going to be interesting to see how the relationship between Amazon and Snowflake pans out; there is also another competitor in the mix, the vendor Cloudera. More on these dynamics later; now let us move forward with data loading strategies in Snowflake.

At a very high level, Snowflake supports the following file locations:
1. Local environment (files in a local folder) - In such instances the files are first moved to a Snowflake stage area and then loaded into a table in the Snowflake DB.
2. Amazon S3 - Files are loaded from a user-supplied S3 bucket.
3. Microsoft Azure - Files are loaded from a user-defined Azure container.
4. Google Cloud Storage - Files are loaded from a user-supplied cloud storage container.

In addition to the above, the supported file formats are CSV, JSON, Avro, ORC, and Parquet; XML is a preview feature at this point. There are different ways of loading data into Snowflake; the method I would like to highlight in this blog post is bulk loading using the COPY command.
The bulk load using COPY steps are a little different for each of the file locations mentioned above.

In the situation where data has to be copied from a local file system, the data is first copied to a Snowflake stage using the PUT command and then moved to a Snowflake table. There are different types of stages available in Snowflake: 1. user stages, 2. table stages, 3. internal named stages. A user stage is useful when files are copied to multiple tables but accessed by a single user. A table stage is used when all the files are copied to a single table but used by multiple users. An internal named stage provides the maximum flexibility in terms of data loading: based on privileges, the data can be loaded into any table, and this is recommended for regular data loads involving multiple users and tables.

Once you have decided on the type of stage needed, you create the stage, copy the files into it using the PUT command, and then use the COPY command to move the data into the Snowflake table. The steps could vary slightly based on the location of the files: for Amazon S3 storage you would use AWS tools to move the files to the stage area and then COPY into the Snowflake DB; for Google and Microsoft Azure, use the similar tools available in each cloud platform to move the files into the stage area in Snowflake. For all the detailed information and support, please refer to the Snowflake documentation.
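The local-file path described above can be sketched as three SQL statements. The stage, table, and file names (`my_stage`, `trades`, `/tmp/trades.csv`) are hypothetical; in practice the statements would be executed through a Snowflake session (for example via the snowflake-connector-python library), and the exact FILE_FORMAT options depend on your data:

```python
# Build the create-stage / PUT / COPY statement sequence for a local file.
def bulk_load_statements(stage, table, local_path,
                         file_format="(TYPE = CSV SKIP_HEADER = 1)"):
    return [
        # Internal named stage: usable across tables, based on privileges.
        f"CREATE STAGE IF NOT EXISTS {stage} FILE_FORMAT = {file_format}",
        # PUT uploads (and by default compresses) the local file to the stage.
        f"PUT file://{local_path} @{stage}",
        # COPY loads the staged files into the target table.
        f"COPY INTO {table} FROM @{stage}",
    ]

stmts = bulk_load_statements("my_stage", "trades", "/tmp/trades.csv")
```

For S3, Azure, or Google Cloud sources, the PUT step is replaced by each platform's own upload tooling, and COPY reads from an external stage instead.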

Loading data into the Snowflake database is the first step in exploring the features and the power of this cloud database offering, where one can test out the columnar database features.

Saturday, September 19, 2020

Online Transaction History - Database Design Strategies

In today's world of technology, one of the common occurrences in financial services is the concept of omni-channel. The basic premise is that customers can access information related to their accounts (checking/savings/credit/debit/mortgage) through various channels such as:

1. Financial Centers
2. Online Banking
3. Mobile/Phone Applications
4. Statements related to accounts (Mailed)
5. SMS/Email (where applicable)

When information related to accounts is presented via different channels like the above, it is critical that the customer experience be consistent. Looking at the technologies used to create such experiences, APIs have made a tremendous amount of penetration. The API layer has succeeded in making customer requests from client applications and phone apps very seamless. These APIs have to have very good response times; for example, if I am looking at my account balance through a phone banking app, the results need to come back quickly. Slow response times lead to a bad customer experience. It is essential that the data services behind these APIs are very efficient, which in turn translates to having a very good database design (the databases can be on premises or in the cloud). A lot of the time, when we use these applications or go to financial centers, we tend to take these response times for granted. Recently I had the opportunity to work on designing a solution for an online/mobile banking channel to display transaction/statement information.
The data was going to be accessed by the client applications through API/web service calls. The data resided on an Oracle Exadata platform.

The information needed for providing transaction data came from a vendor and was ingested into the Exadata database. In order to provide the information to the client, a process had to run on the production database to aggregate the transaction information. The challenge was this: while these processes are running, if a client tries to access their transaction information, how does one make sure there is no distortion or breaking of the service call? Information still needs to be provided to the customer, and there cannot be a time lag. In order to achieve this we had two options:

1. Perform a SYNONYM swap as part of the Oracle procedure that aggregates the information. In this scenario (see the example in this link: https://dba.stackexchange.com/questions/177959/how-do-i-swap-tables-atomically-in-oracle), the data was reloaded every day. We tried this option first, but we started to see service call failures at the moment the synonym swap happened.
2. Perform delta processing of records every day and merge the changes into the main table, using batch sizes during the final merge so that records are ingested into the main table in small chunks, minimizing contention for resources. In this option we processed only changed/new records and did not perform any synonym swap. Though the job took a little longer to complete, there was no distortion of the service, and the SLA was well within what the customer expected. In order to identify the accounts that had changed, we used a control table to track the tables involved in the processing and to capture the accounts that changed in them.
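The synonym swap in option 1 can be sketched in Oracle SQL as follows (the table and synonym names are hypothetical; the service reads only through the synonym):

```sql
-- Two identical copies of the aggregate; TXN_CURRENT is the synonym
-- the API queries. Reload the inactive copy, then repoint.
TRUNCATE TABLE txn_agg_b;

INSERT /*+ APPEND */ INTO txn_agg_b
  SELECT acct_id, txn_date, SUM(amount) AS total_amount
  FROM   txn_detail
  GROUP  BY acct_id, txn_date;
COMMIT;

-- Repoint readers to the freshly loaded copy
CREATE OR REPLACE SYNONYM txn_current FOR txn_agg_b;
```

The brief failures we saw line up with the fact that replacing a synonym invalidates dependent cursors, so calls in flight at that instant can error out.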

These were a couple of options we experimented with, and we went with option 2. It is very critical to design your database according to the expectations of the online/mobile applications; we experimented with multiple options and narrowed them down to the two mentioned above.
If you happen to read this post and have any other suggestions, please leave a comment and I will definitely look into them.
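The delta merge in option 2 can be sketched in Oracle SQL as follows (table and column names are hypothetical; in practice the merge ran in small batches, for example by ranges of accounts, with a commit per batch):

```sql
-- txn_delta holds only the changed/new accounts captured for the day;
-- txn_summary is the main table the online service reads.
MERGE INTO txn_summary t
USING txn_delta d
ON (t.acct_id = d.acct_id AND t.txn_date = d.txn_date)
WHEN MATCHED THEN
  UPDATE SET t.total_amount = d.total_amount
WHEN NOT MATCHED THEN
  INSERT (acct_id, txn_date, total_amount)
  VALUES (d.acct_id, d.txn_date, d.total_amount);
COMMIT;
```

Because readers always see committed rows of the one main table, there is no window where the service can hit a missing or half-swapped object.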

Monday, September 14, 2020

Snowflake - Cloud Database/Datawarehouse

With the advent of public clouds like AWS, Google Cloud and Azure, and the adoption of these public cloud services by various businesses, companies and organizations, the main talking points are how data can be stored in the cloud, security concerns and architecture. In certain organizations the move to the cloud has been very quick; in certain sectors the adoption has been pretty slow, primarily due to security concerns, though these challenges are steadily being overcome. In terms of data services, one of the cloud platforms that has been very popular for the last few years, and is also getting ready for its IPO, is Snowflake. The link for the company is www.snowflake.com. Snowflake is a global platform for all your data services, data lakes and data science applications. Snowflake is not a traditional relational database but supports basic SQL operations: DDL, DML, UDFs and stored procedures. Snowflake uses Amazon S3, and now Azure, as the public cloud platform for providing its data services. In terms of the database, Snowflake's architecture uses columnar storage to enable faster processing of queries. Data is loaded through files into user areas on Amazon S3 and then moved into Snowflake schemas/databases to enable queries. Snowflake takes advantage of Amazon S3's storage power and uses its own columnar and other data warehouse features for computation. Please refer to the Snowflake company website for additional information on the architecture, blogs and other kits that are available for one to check out all the features. One can also refer to YouTube for additional details on the Snowflake architecture; here is a link that can be used: https://www.youtube.com/watch?v=dxrEHqMFUWI&t=14s.

Thursday, September 10, 2020

AI, Machine Learning, Data Governance

Artificial Intelligence and Machine Learning have continued to penetrate all walks of life, and technology has undergone a tremendous amount of change. It is being said that data is the new oil, which has propelled AI and ML to greater heights. In order to use AI and ML more effectively in business today, it is imperative that all the stakeholders, consumers and technologists understand the importance of data. There should be very good collaboration between all the parties involved to make good use of data and take it forward to use AI and ML effectively. For data to be used effectively in an organization, we need proper guardrails to source the data, clean it, remove unwanted data, and store and provision data to various users. Here is where data governance comes in: there has to be an enterprise-wide appreciation for having such processes and standards. It should not come off as process-heavy or bureaucratic, but as something that is efficient and at the same time able to manage data effectively. As organizations grow, there is going to be both a vertical and a horizontal implementation of data governance, and the two need to be in sync. This is essential for AI and ML efforts because it makes the outcomes more meaningful to the organization. In addition, better contexts will be defined, which will make AI and ML projects more viable, reduce inefficiencies and provide cost benefits.

One of the important steps in achieving the above is to have very good data cataloguing measures in place: persist all the logical and business entities and the lineage of all the data being sourced. The data also needs to be classified as NPI or non-NPI depending on the business context. In today's world the majority of this work is manual, and a lot of time is spent trying to get SME inputs and approvals. This causes time delays and increases project costs, which can be alleviated by using the data discovery tools available today. There are quite a few tools available, but the one whose capabilities I have started to look into more is the tool from Atlan: https://atlan.com/. Atlan provides an excellent platform for performing data discovery, lineage, profiling, governance and exploration. From what I have seen of the tool and the demo provided to me, the whole data life cycle has been very nicely captured. The user interface is very intuitive, and the tool helps the user navigate through the different screens without any technical input needed. The search is very Google-like in terms of looking up the different data assets that are available. I will be doing some more use cases and a deeper dive into the tool in the next couple of weeks and will provide more updates.