Sunday, October 25, 2020

Unlocking Insights - Data Quality

One of the buzzwords we constantly hear is insights, or unlocking insights from data. It has been one of the main selling points when pitching technology to the business. Sophisticated tools are welcome for unearthing insights, but what are the critical components needed to make those insights meaningful? A fundamental requirement for meaningful insights is a solid end-to-end data pipeline, which in turn requires the following to be in place:

1. Data Governance/Lineage.
2. Metadata Quality/Entities.
3. Valid Domain Value Chain.
4. Customer Data (Profile/Accounts/Interactions via different Channels).
5. Data Quality including Workflow/Validation/Data Test Beds/Deployment.
6. Track Data Assets Related to a domain.
7. Business Data Owner - A person or group of people who can help identify the business purpose/meaning of all the data points in the domain.
8. Ability to Handle Technical Debt - How to systematically handle technical debt, a very common scenario in organizations that have grown through mergers and acquisitions.
9. Scale, Share and Speed - Can the available architecture and infrastructure handle the frequency/speed of data requests from the business?

The elements mentioned above are very important, and a good interplay among them is needed to generate valid insights. For insights there are two main components:
1. Insight Rules - Rules which are executed when certain events happen and certain business conditions are met.
2. Insight Triggers - Capture data points when certain events happen, for example a credit card transaction made at Lowe's or Home Depot, an SAT exam fee payment, or a mobile deposit. This process also includes selection criteria around how transactions are picked and whether the insights are triggered on a daily, weekly or monthly basis.

The combination of these two components can help generate insights, assuming the nine elements listed earlier are in place. It is also advisable to categorize insights by domain so that they are easier to track and maintain. Data is mined constantly in order to generate accurate insights; a small illustrative sketch of how a trigger and a rule might fit together follows below.
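To make this concrete, here is a minimal Python sketch of an insight trigger feeding an insight rule. The transaction structure, merchant list and spending threshold are hypothetical, purely for illustration, not a description of any vendor's engine.

```python
# A minimal, hypothetical sketch of insight triggers and rules.
# Transaction fields, merchants and thresholds are illustrative only.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Transaction:
    account_id: str
    merchant: str
    amount: float
    channel: str  # e.g. "card", "mobile_deposit"

HOME_IMPROVEMENT = {"LOWES", "HOME DEPOT"}

def trigger_home_improvement(txn: Transaction) -> bool:
    """Insight trigger: capture card purchases at home-improvement merchants."""
    return txn.channel == "card" and txn.merchant.upper() in HOME_IMPROVEMENT

def rule_home_project(txns: List[Transaction]) -> Optional[str]:
    """Insight rule: fire when the triggered spend crosses a business-defined threshold."""
    captured = [t for t in txns if trigger_home_improvement(t)]
    total = sum(t.amount for t in captured)
    if total > 500:  # hypothetical business condition
        return f"Customer spent ${total:.0f} on home improvement - surface a home-project insight."
    return None

# Example daily batch run
daily_txns = [
    Transaction("A1", "Lowes", 320.00, "card"),
    Transaction("A1", "Home Depot", 275.50, "card"),
    Transaction("A1", "College Board", 52.00, "card"),
]
print(rule_home_project(daily_txns))
```

The same rule could just as easily run weekly or monthly by changing which batch of transactions is passed in, which is where the selection criteria mentioned above come in.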
AI and ML are used very heavily when generating insights, and their effectiveness becomes more apparent when the underlying data infrastructure is solid.
The purpose of this blog post is to highlight the importance of the solid data foundations needed to generate valuable insights for the business and customers.



Wednesday, October 21, 2020

AI in Mortgage

AI has been permeating different aspects of life, business and technology, and more sophisticated implementations of AI are seeing the light of day. AI has delivered gains in terms of its value-added proposition across different types of business. One area where there has been a lot of discussion and debate about the use of AI is mortgage. There have been plenty of automated tools and chatbots, Quicken Loans' Rocket Mortgage among them, and companies have been trying to implement their own versions of a digital experience in the mortgage space. One of the challenges in mortgage is that the processes are still complex, traditional methods are still in use, and there are many dependencies given the wide range of information needed for a mortgage. Three components need to come together to implement AI in mortgage: people, process and technology. When you apply for a loan or refinance one, a lot of documents are usually needed. The processes for handling these range from sluggish to pretty decent, and they take quite a bit of time. Apps like Rocket Mortgage and other bank offerings do seem to alleviate some of the pain points in this process. The other approach being used to improve efficiency is moving to cloud platforms, hopefully streamlining the data available from different data sources.

There are a couple of ways to bring AI into the mortgage space: one is to develop in-house AI and ML techniques to automate the mortgage process; the other is to use an API available in an API marketplace to enhance the process. Among recent developments, Google has come up with an API called Lending DocAI, meant to help mortgage companies speed up the evaluation of a borrower's income and asset documents by using specialized machine learning models to automate routine document reviews; it is covered here: https://techcrunch.com/2020/10/19/google-cloud-launches-lending-docai-its-first-dedicated-mortgage-industry-tool/. More details on the API are here: https://cloud.google.com/solutions/lending-doc-ai. It is good to see companies like Google coming up with industry-specific API offerings that can help improve efficiencies, and I expect to see more along the same lines from other tech companies solving business problems.
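As an illustration of the API-marketplace route, below is a rough sketch of how a borrower document might be sent to a Document AI processor using Google's Python client library (google-cloud-documentai). The project, location and processor IDs are placeholders, and the exact entities returned depend on the lending processor enabled in your Google Cloud project, so treat this as an outline rather than a verified integration.

```python
# Rough sketch only: assumes the google-cloud-documentai client library and a
# Lending DocAI processor already created in the Google Cloud console.
from google.cloud import documentai_v1 as documentai

PROJECT_ID = "my-project"   # placeholder
LOCATION = "us"             # placeholder
PROCESSOR_ID = "abc123"     # placeholder lending processor ID

client = documentai.DocumentProcessorServiceClient()
processor_name = client.processor_path(PROJECT_ID, LOCATION, PROCESSOR_ID)

# Read a borrower document (e.g. a pay stub) and send it for processing.
with open("paystub.pdf", "rb") as f:
    raw_document = documentai.RawDocument(content=f.read(), mime_type="application/pdf")

request = documentai.ProcessRequest(name=processor_name, raw_document=raw_document)
result = client.process_document(request=request)

# Print the entities (income fields, employer, etc.) the model extracted.
for entity in result.document.entities:
    print(entity.type_, "->", entity.mention_text)
```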

Friday, October 16, 2020

Workflow, Data Masking - Data Ops

DataOps is becoming more prevalent in today's data-driven projects because of the speed at which these projects need to be executed while still being meaningful. Tools in the DataOps space provide a lot of different features; companies like Atlan and Zaloni are very popular in this space, and in fact Atlan was named in Gartner's 2020 list of DataOps vendors. Coming to the features needed in these tools, two concepts are becoming very important: data masking and workflows. It is well known that in data-driven projects, testing with valid subsets of data is critical. One of the biggest challenges in data projects today is the availability of test data at the right time to test functionality; it usually takes a lengthy process to get test beds ready.

One of the features promised by DataOps tools is data masking/obfuscation, meaning production data can be obfuscated and made available quickly for testing. The masking process involves identifying data elements categorized as NPI or confidential and obfuscating those elements. DataOps tools provide mechanisms where masking can be done very quickly, which really helps testing in lower environments. The impact becomes more visible on major projects where testing has to go through multiple cycles, and especially in an agile environment. The data analytics leader Sol Rashidi talks about the three S's - speed, scale and shareability - which are what is expected from data projects apart from business value. To satisfy those requirements, having data masking available in DataOps tools is very welcome indeed; a minimal sketch of the idea appears below.
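As a simple illustration of the masking idea, here is a small Python sketch that obfuscates NPI columns before data is handed to a test environment. The column names and masking choices are hypothetical; real DataOps tools apply far richer, policy-driven masking.

```python
# Hypothetical sketch of NPI masking before copying data to a test environment.
import hashlib
import pandas as pd

NPI_COLUMNS = {"ssn", "email"}  # columns classified as NPI/confidential (illustrative)

def mask_value(value: str) -> str:
    """Replace an NPI value with a deterministic, irreversible token."""
    return hashlib.sha256(str(value).encode("utf-8")).hexdigest()[:12]

def mask_dataframe(df: pd.DataFrame) -> pd.DataFrame:
    masked = df.copy()
    for col in NPI_COLUMNS & set(masked.columns):
        masked[col] = masked[col].map(mask_value)
    return masked

prod = pd.DataFrame({
    "account_id": ["A1", "A2"],
    "ssn": ["123-45-6789", "987-65-4321"],
    "email": ["jane@example.com", "joe@example.com"],
    "balance": [1500.25, 320.10],
})
print(mask_dataframe(prod))  # balances stay usable for testing, identifiers do not
```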

The other concept I wanted to discuss here is workflows in DataOps. Looking at data flow in general, there are source systems, data is collected into a hub/data warehouse, and data is then provisioned out to different applications and consumers. Achieving this typically means a lot of time spent developing ETL flows, moving data between databases and curating the data to be provisioned, which involves significant time, cost and infrastructure. To alleviate these challenges, DataOps tools introduce the concept of workflows. The main idea is to automate the flow of data from source to target and, along the way, execute data quality rules, profile the data and prepare it for consumption by various systems. Workflows emphasize data quality checks that go well beyond simple data validations; they can be customized to verify the type of data that needs to be present in each attribute. When performing data quality checks in a workflow, the tools also provide the ability to set up custom DQ rules and send alerts to the teams that supply the data; a small sketch of this pattern follows below. A couple of vendors offer workflow functionality: Zaloni in its Arena product and Atlan in its trial offering, which I hope to see in production soon. Working with quality is fundamental for any data project, and building a good framework with DataOps tools provides the necessary governance and guardrails. Such concepts will go a long way in setting up the quality data platforms that are essential for AI and machine learning initiatives.
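Below is a minimal sketch of the workflow idea: run a batch through custom DQ rules and raise an alert when a rule fails, publishing downstream only when everything passes. The rules and the alert mechanism (a simple log message) are illustrative stand-ins for what the vendor tools provide.

```python
# Minimal workflow sketch: source -> custom DQ rules -> alert or publish.
# Rules and the "alert" are illustrative stand-ins for vendor functionality.
import logging

logging.basicConfig(level=logging.INFO)

def dq_not_null(rows, field):
    return [r for r in rows if r.get(field) in (None, "")]

def dq_positive_amount(rows, field):
    return [r for r in rows if not isinstance(r.get(field), (int, float)) or r[field] <= 0]

CUSTOM_RULES = [
    ("account_id must be populated", lambda rows: dq_not_null(rows, "account_id")),
    ("amount must be a positive number", lambda rows: dq_positive_amount(rows, "amount")),
]

def run_workflow(batch):
    failures = {}
    for name, rule in CUSTOM_RULES:
        bad = rule(batch)
        if bad:
            failures[name] = bad
    if failures:
        # Alert the team that supplies the data instead of silently loading bad rows.
        for name, bad in failures.items():
            logging.warning("DQ rule failed: %s (%d records)", name, len(bad))
        return False
    logging.info("Batch of %d records passed all DQ rules; provisioning downstream.", len(batch))
    return True

run_workflow([
    {"account_id": "A1", "amount": 100.0},
    {"account_id": "", "amount": -5},
])
```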

Vendor Links:

www.atlan.com

www.zaloni.com


Tuesday, October 13, 2020

Data Driven Culture/Product Management

There are two topics I see discussed heavily today across my network, summits and round tables. The first is implementing a data-driven culture: how to generate valuable insights from data and apply AI and machine learning. The other is product management; there are a lot of sessions and talks on this topic, and a lot of people wanting to become product managers. In a sense, data analyst, data scientist and product manager seem like very glamorous titles to have. They are positions of real responsibility, and care needs to be taken to develop the skills these jobs require. I would like to dwell a little further on these roles.

A data-driven culture is easier said than done; it requires a combination of top-down and bottom-up approaches. The ideology has to be fully embraced by leadership, the business and technology. Everyone needs to understand what is to be done with the data, what the end state of data projects looks like and, most importantly, have the willingness to collaborate. Such a culture enables better architecture, good data governance and management, and the ability to choose the right infrastructure and platform. The focus needs to be on value add rather than simple cost cutting; there will be times when certain transitions cost money up front for an eventual payoff later. This also brings up the point of being able to use AI in a responsible manner.

The emphasis on data also feeds into product management. Data can be used very effectively to build products and gather feedback on them; it is a strong asset for improving customer experience and adding value behind the features. The type of data represented in a product, or used to build it, shows how important data is. Data provides quantifiable measures that help gauge how well a product is doing. There are different ways of getting feedback, such as user surveys and hackathons combined with interviews, all of which are very useful for product management, and being aware of such techniques helps in grooming oneself for the role. It is a very important role that sits at the intersection of business, customers and stakeholders.

Product management and DataOps/data-driven culture will increasingly coexist in the future, so focus on deriving valuable insights from data and on building a data culture that facilitates such initiatives.

Monday, October 5, 2020

DataOps - What is DataOps...

We live in a world of metaphors; new terms and metaphors are heard every day, and with that comes a lot of confusion, pressure and some amount of chaos. It is important to filter out the noise and focus on the needs of the business, customers and stakeholders. There are continuous attempts to streamline data projects because there are a lot of unwanted costs, project delays and failed implementations. The whole purpose of data projects should be value add for the business, improved customer experiences and better integration of systems. In the agile world we have DevOps as a way to provide continuous integration and continuous deployment; similarly, DataOps has emerged. What is DataOps?

As defined by the DataOps Manifesto (https://www.dataopsmanifesto.org/):
Through firsthand experience working with data across organizations, tools, and industries we have uncovered a better way to develop and deliver analytics that we call DataOps. 
Very similar to the Agile Manifesto, there are principles behind DataOps. To facilitate DataOps, tools available in the market today try to tackle its different aspects. Some of the major areas in DataOps include:

Data Quality - Very important: the ability to perform simple to complex data quality checks at the time of data ingestion. Data quality needs to be implemented as part of workflows, where the data engineer can track the records that were imported successfully and remediate the records that failed (see the sketch after this list).

Workflows - The ability to track data from sourcing to provisioning, including profiling and applying DQ checks. Workflows need to be persisted.

Data Lineage - Ability to track how data points are connected from source systems all the way to provisioning systems.

Metadata Management - Categorizing all the business and logical entities within a value chain while also maintaining a horizontal view across the enterprise.

Data Insights - Based on the aspects mentioned above, the ability to generate valuable insights and provide business value for customers and stakeholders.

Self Service - DataOps also relies on building platforms where different types of personas/users are able to handle their requests in an efficient manner.

Handle the 3 D's: They are Technical Debt, Data Debt and Brain Debt. I would like to thank Data Engineer/Cloud Consultant Bobby Allen for sharing this concept with me. Extremely important to handle this while taking up data projects.

Ability to Build and Dispose of Environments - Data projects rely heavily on data; the ability to build environments for them and quickly dismantle them for newer projects is key.
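Expanding on the data quality point above, here is a small, hypothetical sketch of tracking imported versus failed records at ingestion time so the failed ones can be remediated later. The record shape and validation rules are purely illustrative.

```python
# Hypothetical sketch: split an ingested batch into loaded vs. rejected records
# so the data engineer can remediate rejects instead of losing them.
from datetime import date

def validate(record):
    """Return None if the record is good, otherwise a reason for rejection."""
    if not record.get("customer_id"):
        return "missing customer_id"
    try:
        date.fromisoformat(record.get("as_of_date", ""))
    except ValueError:
        return "bad as_of_date"
    return None

def ingest(batch):
    loaded, rejected = [], []
    for rec in batch:
        reason = validate(rec)
        (rejected if reason else loaded).append({**rec, "reject_reason": reason})
    return loaded, rejected

loaded, rejected = ingest([
    {"customer_id": "C1", "as_of_date": "2020-10-01"},
    {"customer_id": "", "as_of_date": "2020-13-45"},
])
print(f"loaded={len(loaded)} rejected={len(rejected)}")
for r in rejected:
    print("remediate:", r["reject_reason"])
```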

It is very important to approach DataOps in terms of the value add for the business and how data will improve the customer experience.

There are tools that implement DataOps; some already in the market are Atlan and Amazon Athena.


Sunday, September 27, 2020

Data Discovery Tools

In today's world, data is the new asset or, as some say, the new oil. Whether it is an asset or the new oil depends on how much valid information and insight is derived from the data. For a data project to be viable, or for data to be useful to the business, it is extremely important to understand the data. This is where data discovery comes in, and in the past few years there have been significant developments in this domain. Earlier, data discovery involved a lot of grunt work with very manual processes, and updating metadata was very time consuming. One data discovery product I have been looking at and closely following is Atlan, which I briefly mentioned in an earlier blog post; the link is http://www.atlan.com. I signed up for an onboarding trial with Atlan and the whole onboarding process was very smooth, with folks from Atlan guiding me through it. I was very excited to see what the product has to offer, given the pain points we have in our current process.

Once I logged in, I was presented with a Google-like search interface, with options for Discover, Glossary, Classification and Access on the left side of the home page. In the search bar you type the data asset you want to find; one critical prerequisite is that Atlan is connected to a data source on a public cloud provider like Amazon or Azure - in my case it was connected to a Snowflake database/warehouse. When you click the search button, all the data assets related to the search term are pulled up. The first thing I noticed is that it provides a snapshot of the row count and number of columns.

When you click on a table, you are presented with a preview window of the data, column information on the right and, below that, classification along with owner and SME information. Seeing all of this information in one window is very efficient and helps one start building context around the data. In the column list there is also a description for each column that can be edited and updated; as an analyst or business user I find this feature extremely useful. Above the data preview window you have Query/Lineage/Profile/Settings options, each with deeper functionality when you click on it. The interface flows very logically and is set up so that all operations related to data discovery and analysis can be done in the tool. I will write a follow-up blog post as I explore the lineage aspect of the tool further.

One key aspect of giving a data project a solid foundation is a very good metadata/glossary for the data points, containing business and logical entities, their relationships and lineage. In Atlan, this is accomplished using the Glossary option available on the left pane of the dashboard. As part of the Glossary one can add Categories and Terms. Categories can be used for setting up business value chains, business/logical entities, sourcing, API and provisioning, which in turn provide context around the data. Terms are useful for identifying individual data elements and can be linked back to the actual tables/columns; the link feature is also available for Categories. Atlan also provides a way to bulk load Glossary items based on a downloadable template for Categories and Terms.

More coming as I dig deeper into some of the use cases... Keep learning, keep growing.

Monday, September 21, 2020

Snowflake - Data Loading Strategies

Snowflake is a key player in the cloud database space, along with Redshift, which is an Amazon offering. Interestingly, Snowflake uses Amazon S3 for storage as part of its AWS deployment, while Amazon continues to promote Redshift. It will be interesting to see how the relationship between Amazon and Snowflake pans out, and there is another competitor in the mix, the vendor Cloudera. More on these dynamics later; for now let us move on to data loading strategies in Snowflake.

At a very high level, Snowflake supports the following file locations:
1. Local environment (files in a local folder) - The files are first moved to a Snowflake stage area and then loaded into a table in the Snowflake database.
2. Amazon S3 - Files are loaded from a user-supplied S3 bucket.
3. Microsoft Azure - Files are loaded from a user-defined Azure container.
4. Google Cloud Storage - Files are loaded from a user-supplied Cloud Storage container.

In addition to the above, the supported file formats are CSV, JSON, Avro, ORC and Parquet, with XML as a preview feature at this point. There are different ways of loading data into Snowflake; the method I would like to highlight in this post is bulk loading using the COPY command. The steps for bulk loading with COPY differ slightly for each of the file locations mentioned above.

When data has to be copied from a local file system, it is first copied to a Snowflake stage using the PUT command and then moved into a Snowflake table. There are different types of stages available in Snowflake: 1. user stages, 2. table stages, 3. internal named stages. A user stage is useful when the files are copied to multiple tables but accessed by a single user. A table stage is used when all the files are copied into a single table but used by multiple users. An internal named stage provides the maximum flexibility for data loading: based on privileges, the data can be loaded into any table, so it is recommended for regular data loads involving multiple users and tables.

Once you have decided on the type of stage needed, you create the stage, copy the files into it using the PUT command, and then use the COPY command to move the data into the Snowflake table; a minimal sketch follows below. The steps vary slightly based on the location of the files: for Amazon S3 you would use AWS tools to move the files to the stage area and then COPY into the Snowflake database, and for Google Cloud and Microsoft Azure you would use the similar tools available on each platform to stage the files. For detailed information and support, please refer to the Snowflake documentation.
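As a minimal sketch of the local-file flow, the snippet below uses the snowflake-connector-python package to create a named internal stage, PUT a local CSV into it, and COPY it into a table. Connection details, stage, table and file names are placeholders.

```python
# Minimal sketch of the local-file bulk load flow using snowflake-connector-python.
# Account, warehouse, database, stage, table and file names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    user="MY_USER", password="***", account="my_account",
    warehouse="MY_WH", database="MY_DB", schema="PUBLIC",
)
cur = conn.cursor()

# 1. Create an internal named stage (the most flexible option for recurring loads).
cur.execute("CREATE STAGE IF NOT EXISTS my_stage")

# 2. PUT the local file into the stage (the connector uploads and gzips it by default).
cur.execute("PUT file:///tmp/transactions.csv @my_stage")

# 3. COPY the staged file into the target table.
cur.execute("""
    COPY INTO transactions
    FROM @my_stage/transactions.csv.gz
    FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
""")

conn.close()
```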


Loading data into the Snowflake database is the first step in exploring the features and power of this cloud database offering, where one can test out its columnar storage features.

Saturday, September 19, 2020

Online Transaction History - Database Design Strategies

In today's world of technology, one of the common concepts in financial services is omni-channel. The basic premise is that customers can access information related to their accounts (checking/savings/credit/debit/mortgage) through various channels such as:

1. Financial Centers
2. Online Banking
3. Mobile/Phone Applications
4. Statements related to accounts (Mailed)
5. SMS/Email (where applicable)

When account information is presented via the different channels above, it is critical that the customer experience stays consistent. Looking at the technologies used to create such experiences, APIs have made tremendous penetration. The API layer has succeeded in making customer requests from client applications and phone apps very seamless. These APIs have to have very good response times; for example, if I am looking up my account balance through a phone banking app, the results need to come back quickly, and slow response times lead to a bad customer experience. It is essential that the data services behind these APIs are efficient, which in turn translates to very good database design (the databases can be on premises or in the cloud). A lot of the time, when we use these applications or visit financial centers, we take these response times for granted. Recently I had the opportunity to design a solution for an online/mobile banking channel to display transaction and statement information. The data was going to be accessed via API/web service calls from the client applications, and it resided on an Oracle Exadata platform.

The transaction information came from a vendor and was ingested into the Exadata database. In order to provide the information to the client, a process had to run on the production database to aggregate the transactions. The challenge was that while these processes were running, if a client tried to access their transaction information, we had to make sure there was no distortion or breaking of the service call; information still needed to be provided to the customer without a time lag. To achieve this we had two options:

1. Perform a synonym swap as part of the Oracle procedure that aggregates the information. An example of this approach is available here: https://dba.stackexchange.com/questions/177959/how-do-i-swap-tables-atomically-in-oracle. We initially went with this option, reloading the data every day, but we started to see service call failures at exactly the time the synonym swap happened.
2. Perform delta processing of records every day and merge the changes into the main table, using batch sizes during the final merge so that records are ingested in small chunks, minimizing resource contention. In this option we processed only changed/new records and did not perform a synonym swap. Although the job took a little longer to complete, there was no distortion of the service and the SLA was well within what the customer expected. To identify the accounts that changed, we used a control table that tracked the tables involved in the processing and captured the accounts that had changed in them. A minimal sketch of this batched merge idea follows below.
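Here is a minimal sketch of the batched delta-merge idea from option 2, using the python-oracledb driver. The table names, columns and changed-accounts control table are hypothetical; the point is the small, committed batches that keep the main table available to the online service.

```python
# Hypothetical sketch of option 2: batched delta merge into the main table.
# Table/column names and the changed-accounts control table are illustrative.
import oracledb

BATCH_SIZE = 500  # accounts merged per commit, to limit lock contention

conn = oracledb.connect(user="app_user", password="***", dsn="exadata-host/ONLINE")
cur = conn.cursor()

# Accounts flagged as changed since the last run.
cur.execute("SELECT account_id FROM changed_accounts WHERE processed = 'N'")
changed = [row[0] for row in cur.fetchall()]

for i in range(0, len(changed), BATCH_SIZE):
    batch = changed[i:i + BATCH_SIZE]
    binds = ",".join(f":{n + 1}" for n in range(len(batch)))
    cur.execute(f"""
        MERGE INTO txn_history t
        USING (SELECT * FROM txn_staging WHERE account_id IN ({binds})) s
           ON (t.txn_id = s.txn_id)
        WHEN MATCHED THEN UPDATE SET t.amount = s.amount, t.post_date = s.post_date
        WHEN NOT MATCHED THEN INSERT (txn_id, account_id, amount, post_date)
             VALUES (s.txn_id, s.account_id, s.amount, s.post_date)
    """, batch)
    conn.commit()  # small commits keep the main table responsive for the online service

conn.close()
```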

These were a couple of the options we experimented with, and we went with option 2. It is critical to design your database according to the expectations of the online/mobile applications; we experimented with multiple approaches and narrowed them down to the two mentioned above.
If you happen to read this post and have other suggestions, please leave a comment and I will definitely look into them.


Monday, September 14, 2020

Snowflake - Cloud Database/Datawarehouse

With the advent of public clouds like AWS, Google Cloud and Azure, and their adoption by various businesses, companies and organizations, the main talking points are how data can be stored in the cloud, security concerns and architecture. In certain organizations the move to cloud has been very quick, while in some sectors adoption has been slow, primarily due to security concerns, although these challenges are steadily being overcome. In terms of data services, one cloud platform that has been very popular over the last few years, and is getting ready for its IPO, is Snowflake; the link for the company is www.snowflake.com. Snowflake is a global platform for data services, data lakes and data science applications. Snowflake is not a traditional relational database but supports basic SQL operations, DDL, DML, UDFs and stored procedures. Snowflake uses Amazon S3, and now Azure, as the public cloud platform for providing its data services. Architecturally, the database uses columnar storage to enable faster query processing. Data is loaded as files into staging areas on Amazon S3 and then moved into Snowflake schemas/databases to enable queries. Please refer to the Snowflake company website for additional information on the architecture, blogs and starter kits for checking out the features. Snowflake takes advantage of S3's storage power and uses its own columnar and other data warehouse features for compute. One can also refer to YouTube for additional details on the Snowflake architecture; here is a link that can be used: https://www.youtube.com/watch?v=dxrEHqMFUWI&t=14s.

Thursday, September 10, 2020

 AI, Machine Learning, Data Governance

Artificial intelligence and machine learning have continued to penetrate all walks of life, and technology has undergone a tremendous amount of change. It is said that data is the new oil, which has propelled AI and ML to greater heights. To use AI and ML effectively in business today, it is imperative that all stakeholders, consumers and technologists understand the importance of data, and there should be very good collaboration between all parties involved to make good use of data and carry it forward into AI and ML. For data to be used effectively in an organization, we need proper guardrails to source the data, clean it, remove unwanted data, and store and provision it to various users. This is where data governance comes in; there has to be enterprise-wide appreciation for having such processes and standards. It should not come off as process-heavy or bureaucratic, but as something efficient that is still able to manage data effectively. As organizations grow, there will be both vertical and horizontal implementations of data governance, and the two need to be in sync. This is essential for AI and ML efforts because it makes the outcomes more meaningful to the organization; in addition, better contexts get defined, which makes AI and ML projects more viable, reduces inefficiencies and provides cost benefits.

An important step in achieving the above is to have good data cataloguing measures in place: persisting all the logical and business entities and the lineage of all the data being sourced. The data also needs to be classified as NPI or non-NPI depending on the business context. Today, the majority of this work is manual, and a lot of time is spent trying to get SME inputs and approvals, which causes delays and increases project costs. This can be alleviated by using the data discovery tools available today. There are quite a few tools out there, but the one whose capabilities I have started looking into is Atlan: https://atlan.com/. Atlan provides an excellent platform for data discovery, lineage, profiling, governance and exploration. From what I have seen of the tool and the demo provided to me, the whole data life cycle is captured very nicely. The user interface is intuitive and helps the user navigate the different screens without needing technical input. The search is very Google-like for looking up the available data assets. I will be working through more use cases and diving deeper into the tool over the next couple of weeks and will provide more updates.