Monday, April 17, 2017

Blueprint - Continuous Requirements

Currently, agile methodologies for software development are seeing increased adoption across a variety of businesses and organizations. One of the challenges bigger companies face is how quickly they can turn around products for customers so that they can gain a competitive advantage. One of the selling points of agile is that you need minimal documentation. As organizations mature through the agile process, they are also finding that there is a lack of lineage on what was asked for or needed as part of features. So there is a compelling need to track requirements and manage them effectively through the agile process. This is where a tool like Blueprint comes in; it is being widely used by some of the Fortune 500 companies.
Blueprint provides a very good mechanism to manage and track requirements in a continuous fashion. The tool provides an easy-to-use interface where one can maintain folder structures for a product. The product folder is broken down into the following categories:
1. There is a folder to maintain the current state of the product. This contains the various components of the product, which can be broken down into different sub-folders. Within each sub-folder you can maintain what Blueprint calls artifacts. Artifacts could be a Word document, a Visio diagram, user interface mock-ups, use cases, or test scripts.
2. There is a folder called Product Management, which is used for the current requirements being worked on for the product.
3. There is a folder called Enterprise, where artifacts related to standards, compliance, and regulations can be maintained.
4. There is a folder called Archive, where artifacts can be archived and stored.
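To make the layout concrete, here is a minimal sketch, in Python, of how the folders and artifacts described above could be modeled. This is purely illustrative; the names are made up and this is not Blueprint's actual API or data model.

from dataclasses import dataclass, field

@dataclass
class Artifact:
    name: str
    kind: str   # e.g. "Word doc", "Visio diagram", "UI mock-up", "Use case", "Test script"

@dataclass
class Folder:
    name: str
    artifacts: list = field(default_factory=list)
    subfolders: list = field(default_factory=list)

# Hypothetical product folder following the categories described above
product = Folder("Sample Product", subfolders=[
    Folder("Current State", subfolders=[
        Folder("Component A", artifacts=[
            Artifact("Login flow", "Use case"),
            Artifact("Login screen", "UI mock-up"),
        ]),
    ]),
    Folder("Product Management", artifacts=[Artifact("Q2 requirements", "Word doc")]),
    Folder("Enterprise", artifacts=[Artifact("PCI compliance checklist", "Word doc")]),
    Folder("Archive"),
])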
Blueprint requirements software link:
http://www.blueprintsys.com/

Blueprint also provides integration with other applications. The most important with respect to agile is the integration with a tool called Rally. The artifact in Blueprint that is used for integration with Rally is called Process StoryTeller. When a Process StoryTeller artifact is added in Blueprint, it provides an interface into a tool called Blueprint Storyteller. In Blueprint Storyteller, one can create process flows with decision points. Once a process flow is completed, it can be published into Blueprint and later used for integration with Rally.
Blueprint Storyteller:
http://www.blueprintsys.com/storyteller/ 

Please see the image below for how the Storyteller interface looks.



Rally Software Link:
https://rally1.rallydev.com/slm/login.op

Friday, February 24, 2017

Alteryx - Self-Service Data Analytics

There has been a lot of debate about ETL and how it stacks up against recent advances in big data technologies. One of the questions being discussed is how data can be ingested and how quickly it can be made relevant to the business. For many analysts in the lines of business, the process of blending and analyzing data is slow and painful: it requires different tools and people to gather and cleanse data from different sources, and yet more tools to build and publish analytic models. One of the products that addresses this challenge is Alteryx. The product offers several capabilities: Prepare & Blend, Predictive Analytics, Spatial Analytics, and Publish Insights.
Here is the link for one of their products:
http://www.alteryx.com/products/alteryx-designer

Quoting Alteryx: "Alteryx Designer solves this by delivering a repeatable workflow for self-service data analytics that leads to deeper insights in hours, not the weeks typical of traditional approaches."
Please refer to the link above to gain more insights into the product's capabilities.
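For readers who think in code, here is a rough sketch of the kind of prepare-and-blend step that Alteryx Designer lets an analyst build visually as a repeatable workflow. The sketch uses Python/pandas purely as an analogy; the file and column names are made up, and this is not how Alteryx itself is scripted.

import pandas as pd

# Hypothetical inputs from two different sources
orders = pd.read_csv("orders.csv")
customers = pd.read_excel("customers.xlsx")

# Prepare: fix types and drop obviously bad rows
orders["order_date"] = pd.to_datetime(orders["order_date"], errors="coerce")
orders = orders.dropna(subset=["order_date", "customer_id"])

# Blend: join the two sources on a common key
blended = orders.merge(customers, on="customer_id", how="left")

# A first analytic cut: revenue by customer segment
summary = blended.groupby("segment")["order_amount"].sum().sort_values(ascending=False)
print(summary)

Roughly speaking, each of these steps would correspond to a tool on Alteryx's visual canvas, and the whole workflow can be re-run whenever new data arrives.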

Sunday, December 25, 2016

2017 - Analytics Trends...

As we wind down 2016 and move into 2017, there is a lot of anticipation about what analytics will look like in 2017. 2016 saw a lot of development in the area of analytics, with an increasing range of applications across different industries and businesses. Big data made significant inroads in 2016, and I hope to see it solidify in 2017 with more businesses adopting such technologies. There was also a lot of constructive debate around big data analytics in 2016. More companies are introducing machine learning technologies, one of the most prominent being Amazon's Echo with Alexa. Other companies are also introducing machine learning capabilities into their applications.
Below is a link that takes a look at the expected trends in 2017. One of the expected trends, especially for developers in the BI/database arena:
"Traditional programmers will be required to gain data science skills in order to stay relevant, employable, and effective in their careers."
http://www.ibmbigdatahub.com/blog/big-data-and-analytics-trends-2017-james-kobielus-s-predictions

Wishing everyone a very happy and prosperous New Year 2017. I hope to write more articles in 2017.

Thank you

Monday, October 24, 2016

Data Wrangling...Trifacta

Today companies deal with large volumes of data and are constantly trying to extract value from their business data. Many combinations of technologies are used to mine data and feed predictive/prescriptive analytics. One of the key steps involved in producing a valid, useful analytical model is data preparation and cleansing. Data often comes from multiple sources and needs to be properly merged so that meaningful analysis can be made. One of the products available in the market today for data wrangling is Trifacta; here is the link for the product: https://www.trifacta.com/products/. As per Trifacta: "Successful analysis relies upon accurate, well-structured data that has been formatted for the specific needs of the task at hand. Data wrangling is the process you must undergo to transition raw data source inputs into prepared data set outputs to be utilized in your analysis". For a detailed description of the Trifacta Wrangler product, please see https://www.trifacta.com/products/wrangler/.
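As a concrete, if simplified, illustration of the merge-and-clean work described above, here is a small Python/pandas sketch. Trifacta Wrangler does this kind of thing interactively and visually; the file and column names below are invented.

import pandas as pd

# Two hypothetical raw sources describing the same customers
web_leads = pd.read_csv("web_leads.csv")
crm_accounts = pd.read_csv("crm_accounts.csv")

# Standardize the join key before merging
web_leads["email"] = web_leads["email"].str.strip().str.lower()
crm_accounts["email"] = crm_accounts["email"].str.strip().str.lower()

# Merge the sources, remove duplicate records, and fill obvious gaps
merged = web_leads.merge(crm_accounts, on="email", how="outer")
merged = merged.drop_duplicates(subset="email")
merged["region"] = merged["region"].fillna("Unknown")

# Prepared data set output, ready for analysis
merged.to_csv("prepared_leads.csv", index=False)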

Monday, September 19, 2016

SQL Saturday - Analytics/Machine Learning/R...

I had the opportunity to attend the SQL Saturday event in Charlotte, NC on September 17, 2016. The event was very well organized and hosted by a hard-working, talented team of volunteers. There were a variety of topics spread across different aspects of SQL Server, Business Intelligence, and Data Analytics. There were three sessions that I found very informative and interesting, covering Data Analytics, Machine Learning, and Master Data Management (Microsoft).
The first session I attended was from SQL Server/Data Analytics expert Jen Underwood (please visit her excellent blog http://www.jenunderwood.com/ for more information and trends in analytics); the topic was trends in the analytics industry. It covered the skill sets and domains currently hot and growing in the Data Analytics/Big Data space. There were interesting demos on how certain jobs can be automated, and on how robots are beginning to permeate different aspects of human life and are helping out in areas such as customer service. Here is a link to a robot video which shows human-like expressions and interactions:
Robotics. The interesting aspect of this demo is that rather than being just machine-like, the robot interacts in a very human-like fashion. These types of robots could replace jobs that can be easily automated. Other aspects covered included the cloud, machine learning, and predictive analytics. One of the other interesting areas mentioned was immersive data visualization, where 3-D visualizations can be used to analyze and understand data. One of the visualizations shown was the stock market's rise and fall over the past several years; it also showed the 2008 stock market crash as a roller-coaster-ride simulation. Here is the link for the demo: http://graphics.wsj.com/3d-nasdaq/. This is a virtual reality guided tour of 21 years of the Nasdaq, a very interesting concept. One of the thoughts that went through my mind was how well such visualizations would work in certain types of businesses and organizations. On the whole the session was very informative with respect to what is coming in the data analytics space and how one needs to be prepared.
The second session was on Master Data Management by Rushabh Mehta (SQL Server expert, past SQL PASS President, independent consultant/CEO). This was a presentation on the very important but often ignored topic of data management. In this presentation Rushabh went through why master data management is important and discussed one of the projects he did for a client. In this project he explained the process of data cleansing, how records can be de-duplicated, and the use of standardized services from Melissa Data Services. Melissa Data Services provides services around address and name standardization, which are very useful when one tries to create a master record for a customer. Here is the link for Melissa and the services they offer: http://www.melissadata.com/. The session also provided insights into how a master record could be created for companies; here, services offered by Dun & Bradstreet were used. Overall the session was very informative and conveyed the importance of Master Data Management.
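To give a flavor of the de-duplication step, here is a toy sketch in Python using only the standard library's difflib. The records are invented, and real master data management relies on standardized name/address services (such as Melissa's) and survivorship rules rather than a simple similarity score.

from difflib import SequenceMatcher

records = [
    {"id": 1, "name": "John A. Smith", "zip": "28202"},
    {"id": 2, "name": "Jon Smith",     "zip": "28202"},
    {"id": 3, "name": "Mary Jones",    "zip": "30301"},
]

def similar(a, b, threshold=0.8):
    # Fuzzy string match on lower-cased names
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

duplicates = []
for i in range(len(records)):
    for j in range(i + 1, len(records)):
        r1, r2 = records[i], records[j]
        if r1["zip"] == r2["zip"] and similar(r1["name"], r2["name"]):
            duplicates.append((r1["id"], r2["id"]))

# Candidate pairs to review and merge into a single master record
print(duplicates)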
The third session which I found very useful was the session on Machine Learning with R by Paco Gonzalez, Corporate Center of Excellence Program Manager at SolidQ. The session was very informative and very nicely presented. Paco touched upon how tweets from Twitter can be imported and analyzed to determine the sentiment around a particular topic or product being discussed. He took the example of a product being sold on an online clothing retailer's website and showed how tweets regarding that product can be scanned to understand whether people are saying good or bad things about it. He mentioned that one would get the feeds from Twitter and also request Twitter for data relating to particular hashtags. Paco also presented case studies on how machine learning can be used to determine whether a particular customer will stay with a bank or leave it. He demonstrated how past patterns can be used to train a model and how test data is used to determine the accuracy of the model. The R integration with SQL Server 2016 seems very interesting and exciting; now one has the power of getting predictive results by executing stored procedures.
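The churn example in the session was built in R; as a language-neutral sketch of the same train-and-test idea, here is a minimal Python example using scikit-learn on randomly generated data. The features, numbers, and labeling rule are all invented, and this is not the model shown in the session.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 1000
# Hypothetical features: tenure in years, monthly balance, number of support calls
X = np.column_stack([
    rng.uniform(0, 20, n),
    rng.normal(5000, 2000, n),
    rng.poisson(2, n),
])
# Hypothetical rule generating the label: short-tenure, high-complaint customers leave
y = ((X[:, 0] < 3) & (X[:, 2] > 2)).astype(int)

# Train on past patterns, then measure accuracy on held-out test data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))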
There was also a demo of Microsoft Cognitive Services, which can be used for analysis of text, faces, and emotions.
Here is the link: https://www.microsoft.com/cognitive-services.
Overall, a very exciting SQL Saturday and a very good information-gathering session.

Friday, September 16, 2016

Data Virtualization - Data Conditioning/Masking

These days businesses are expected to be agile and have to deliver solutions quickly and efficiently. This means that while a product is being developed, it has to be moved through the different environments efficiently and quickly. There is also a lot of dependency on data in the test and lower-level environments. The quality of data in the test environments needs to be good so that the applications using the data can be tested effectively. Organizations often run into challenges while populating data in lower-level environments, either due to space issues or because conditioning the data takes a very long time, and this in turn affects product delivery. This is where products that specialize in data virtualization come in. Delphix is one such product, which enables organizations to effectively get production-like data into test environments. Here is the website for the product: https://www.delphix.com/. According to the website: "Speed wins. It’s a simple fact. The faster you can deliver new applications, features and upgrades to market, the better your business performs. For that you need faster data. And for faster data, you need Delphix." Please refer to the link below, which explains the need for having such a product: https://www.delphix.com/why-delphix. As the nature of application development keeps changing, the quality of data needed for testing and other pre-production activities becomes very important and essential.
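As a very small illustration of the conditioning/masking side of this problem, here is a sketch in Python that masks sensitive columns before data is handed to a lower-level environment. The column and file names are invented; a product like Delphix handles this, along with virtualizing entire data sets, at a completely different scale.

import csv
import hashlib

def mask_email(email):
    # Deterministic masking keeps referential integrity across tables
    digest = hashlib.sha256(email.lower().encode()).hexdigest()[:10]
    return "user_" + digest + "@example.com"

def mask_ssn(_):
    # Fully redact values that test environments never need in real form
    return "XXX-XX-XXXX"

with open("customers_prod.csv", newline="") as src, \
     open("customers_test.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        row["email"] = mask_email(row["email"])
        row["ssn"] = mask_ssn(row["ssn"])
        writer.writerow(row)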

Tuesday, August 2, 2016

Data Science - Education

With the rapid growth of big data technologies, there has been an exponential growth in data science and its related technologies. This has also led to strong demand for data scientists, and jobs related to data science are very lucrative. Microsoft has been steadily expanding its cloud-based offerings and also getting into big data related technologies and efforts. Since there is a tremendous need for data science skills, Microsoft has come forward to offer a curriculum totally devoted to data science. This curriculum is offered via edx.org. There are a total of 9 courses, and the price per course ranges from $49 to $99. There is also a final project which requires around 6-10 hours. One can check the link below for all the details:
https://www.edx.org/microsoft-data-science-curriculum
The courses range from using Microsoft Excel to explore data to implementing a machine learning solution for a given data problem. Each course can be taken as an audit course, or one can upgrade to get a verified certificate on passing the course. Each course has labs, quizzes, and discussion forums; the discussion forums can be used to get questions answered about the concepts being discussed. I hope the courses provide the much-needed insights into data science.