The SQL Saturday BI edition held in Charlotte on October 4, 2014 had some interesting sessions that I attended. They were really insightful and provided me with some good information that I would like to share.

The first session was given by Kevin Goode on the topic of Introduction to Hive, and it was really interesting, with lots of questions and answers. Hive is basically a layer on top of Hadoop that allows users to write SQL-style queries; Hive in turn takes those queries and converts them into MapReduce jobs, which are executed on the Hadoop layer, and the results are then returned to the user. Hive uses a language called HQL, a SQL variant with many similarities to MySQL and Oracle syntax, and its query constructs are very similar to Transact-SQL. One of the main things to keep in mind is that Hive/HQL does not have a great optimizer like SQL Server's, so one has to be very careful about how the tables are placed in a join. Hive/HQL is best suited for analytical queries, and it has no updates or deletes. The rule of thumb for now seems to be that the smaller tables come first in the join, followed by the larger tables, as in the sketch below.
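To make the join-order rule concrete, here is a minimal sketch of running such a query from Python; the pyhive package, the host name, and the table and column names are all assumptions for illustration, not anything shown in the session.

```python
# Minimal sketch: running an HQL join from Python via the third-party
# pyhive package. Host, database, tables, and columns are hypothetical.
from pyhive import hive

conn = hive.Connection(host="hadoop-master.example.com", port=10000,
                       database="sales")
cursor = conn.cursor()

# Rule of thumb from the session: put the smaller table first in the
# join so Hive can stream the larger table through the MapReduce job.
cursor.execute("""
    SELECT d.region, SUM(f.sale_amount) AS total_sales
    FROM   dim_region d                  -- small dimension table first
    JOIN   fact_sales f                  -- large fact table second
           ON f.region_id = d.region_id
    GROUP BY d.region
""")
for region, total in cursor.fetchall():
    print(region, total)
```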
Hive/HQL also provides a type of execution plan, but it is not as sophisticated as the ones you see in SQL Server. The error messages HQL produces when there are issues are more Java based, so they can take a little getting used to. While working with Hive/HQL, the developer should be aware of the type of data coming in and how it is partitioned; the organization of the data really helps in optimizing HQL queries (a short sketch of this follows below). The results produced by HQL can be used to feed a traditional data warehouse system.
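As a hedged illustration of how partitioning shapes HQL, the sketch below declares a partitioned table and asks Hive for its execution plan via EXPLAIN; pyhive, the host, and the table are again assumptions for illustration.

```python
# Sketch: partition pruning in Hive. Filtering on the partition column
# lets Hive read only the matching directories instead of the full table.
from pyhive import hive

cursor = hive.Connection(host="hadoop-master.example.com",
                         port=10000).cursor()

# Declare the partitioning scheme up front when the table is created.
cursor.execute("""
    CREATE TABLE IF NOT EXISTS web_logs (
        user_id STRING,
        url     STRING
    )
    PARTITIONED BY (log_date STRING)
""")

# EXPLAIN returns Hive's plan as rows of text -- far terser than a SQL
# Server plan, but enough to confirm only one partition is scanned.
cursor.execute("""
    EXPLAIN
    SELECT COUNT(*) FROM web_logs WHERE log_date = '2014-10-04'
""")
print("\n".join(row[0] for row in cursor.fetchall()))
```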
One of the areas where Hadoop is weak is security: security is not very structured in Hadoop and has to be achieved using other bolt-on tools. Hortonworks is one of the big players in Hadoop distributions, and they provide good information regarding HQL; the frontend they provide for connecting to Hive is completely web based, and users can use it to write Hive queries in HQL. Please visit the Hortonworks website here.
The second session I attended was on tabular models: how to build a tabular model using Visual Studio 2012 against an SSAS 2012 tabular instance. It was an informative session, as it brought out the differences between traditional MDX-based SSAS and tabular solutions. One of the key differentiators is that traditional SSAS is disk based while tabular is in memory, which provides very good speed. The limitation of the tabular model is the amount of memory available to hold the models, and with tabular you also don't have writeback capabilities. The demo was pretty impressive; some of the steps were very similar to how one would go about building SSAS cubes. The topic was presented by Bill Anton, and his blog is located at:
The third session I attended was the most crowded of all: Best Practices for Delivering BI Solutions. The audience were all BI professionals working in IT at various companies. SQL Server BI expert James Serra provided a great session on best practices. He started off with why BI projects fail: a Gartner study has revealed that around 70% of BI projects fail, with failure factors ranging from lack of expertise and experience to poor communication, requirements gathering, and project management. He highlighted the clashes between business and IT over how a solution should be delivered. One of the interesting aspects mentioned was for IT to provide a clean data warehouse, build some abstraction layers on top of the warehouse, and then allow users to utilize self-service BI solutions. He also stressed understanding the differences between Kimball and Inmon principles, the capabilities of tabular vs. multidimensional models, and star schema vs. relational designs. Please visit his website for some great information on this topic and other aspects of data warehousing.
The final session I attended was an introduction to Microsoft Azure Machine Learning, and this topic was really exciting and informative. It is basically about predictive analytics and the tools Microsoft provides as part of the Azure framework. Microsoft has a group called Microsoft Research, which initially worked on developing the algorithms for the data mining product within SSAS. In the past few years there has not been much of a push on data mining within SSAS; recently, with Microsoft making a big push for cloud service offerings as part of the Azure framework, that Microsoft Research work is being made part of Azure Machine Learning. Here is a link that provides a head start on the machine learning topic: http://azure.microsoft.com/en-us/documentation/services/machine-learning/.
During the session, Bill Carroll provided an insightful demo of the machine learning framework within Microsoft Azure. Azure provides a very nice framework for a developer to set up an experiment, and the workspace for doing so looks very similar to the SSIS designer within Visual Studio. As part of setting up an experiment, one feeds data to an algorithm based on the type of outcome that is needed; the model is then trained, scored, and evaluated, and finally it can be published. Once the model is published, it can be called from an application through a web service, and code snippets are available in C# and Python to call the web service and execute the model.
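For a rough idea of what such a call looks like, here is a minimal Python sketch using the requests package; the endpoint URL, API key, and input columns are placeholders, since the real values come from the published model's API page.

```python
# Sketch: calling a published Azure ML web service. The URL, key, and
# input schema below are placeholders, not a real deployed model.
import requests

url = ("https://ussouthcentral.services.azureml.net/"
       "workspaces/<workspace-id>/services/<service-id>/execute")
api_key = "<your-api-key>"

payload = {
    "Inputs": {
        "input1": {
            "ColumnNames": ["age", "income"],
            "Values": [["34", "52000"]],
        }
    },
    "GlobalParameters": {},
}

response = requests.post(
    url,
    json=payload,
    headers={"Authorization": "Bearer " + api_key},
)
response.raise_for_status()
print(response.json())  # scored labels/probabilities from the model
```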
Real-life examples of machine learning include the list of recommended products generated when a user purchases a product on Amazon, and the movies Netflix recommends based on a user's viewing preferences.