For years now we’ve been inundated with the message that to get ahead, your business needs BI – and it’s true! The days of being able to use a binder full of static, tabular financial reports based on data from “last month's sales” to run your business are long past. In order to beat your competitors to the sale, in order to keep your costs in check, in order to <insert business decision here> you need immediate access to all of the data your company produces. Even more importantly, you need access to the knowledge produced when all of the “dots” between the points of data are connected! Oh, and you should present this knowledge in pretty, easily digestible graphs and charts so decision makers can quickly consume it between meetings and drill down wherever more answers are required.
So what is BI? BI is many things to many people, and if I were writing this post from a slightly different angle, even my answer would be different. For the purposes of this post we will use “BI” to represent the process of gathering key business data, preparing it for analysis, and helping our users to make better business decisions by presenting it in a meaningful way.
A BI infrastructure generally has four key components:
First, you need some source data. At its purest, source data is any data stored in tabular format. Normally this data comes from a database, but an Excel spreadsheet or text file could just as easily store it. When the source data is stored in a database, the database is generally referred to as the Online Transaction Processing (OLTP) database.
The Extract, Transform, and Load (ETL) process is the next key piece of the BI infrastructure. The ETL process takes the source data (Extract), prepares it for use in your BI analysis by applying any business rules and calculations (Transform), and copies the data to a data warehouse that has been structured specifically for analysis (Load). This process can also be used to “scrub” the data, removing any known inconsistencies that would hinder proper aggregation later. When starting from scratch, the ETL process can be one of the hardest elements of the BI infrastructure to design and build. It is also the start of what makes BI so efficient: the data you are going to need later is already processed and structured for fast retrieval.
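The extract, transform, and load steps can be sketched in a few lines of Python. This is a toy illustration only; the field names, the sample rows, and the "scrub" rules are all hypothetical, and a real ETL process would query the OLTP database rather than inline its rows.

```python
# Extract: in practice this would be a query against the OLTP database;
# here the source rows are inlined for illustration.
source_rows = [
    {"customer": " Acme Inc ", "amount": "125.50", "currency": "USD"},
    {"customer": "Acme Inc",   "amount": "80.00",  "currency": "USD"},
    {"customer": "Widget Co",  "amount": "-15.25", "currency": "usd"},
]

def transform(row):
    """Apply business rules: scrub inconsistent text and convert
    amounts to numbers so they can be aggregated later."""
    return {
        "customer": row["customer"].strip(),   # scrub stray whitespace
        "amount": float(row["amount"]),        # store as a number, not text
        "currency": row["currency"].upper(),   # normalize "usd" -> "USD"
    }

# Load: append the cleansed rows into the warehouse table
# (represented here by a plain list).
warehouse = [transform(r) for r in source_rows]

print(warehouse[0])
# {'customer': 'Acme Inc', 'amount': 125.5, 'currency': 'USD'}
```

Notice that the inconsistencies are fixed once, on the way into the warehouse, so every downstream report sees clean data.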
The next element of your BI infrastructure, and some would argue the most important, is the Online Analytical Processing (OLAP) cubes. Cleansed, prepared data is divided into Measures and Dimensions. Measures are the aggregated numbers upon which your KPIs, graphs, and reports will be built. Dimensions are all the points of data “about” the measures. Dimensions might include things like Time/Date, GL Account, Customer, Vendor, or Item along with all of the attributes of each. The attributes might include customer/vendor address, or item colour, or account segments. Essentially, any data related to a transaction in any way could become a dimension of the transaction, while the transaction amounts and quantities would become your measures.
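The measure/dimension split can be shown with a single hypothetical transaction (the field names below are invented for illustration): the amounts and quantities become measures, and everything describing the transaction becomes dimension attributes.

```python
# One hypothetical sales transaction.
transaction = {
    "date": "2014-03-01",
    "customer": "Acme Inc",
    "item": "Widget",
    "item_colour": "Blue",
    "qty": 10,
    "amount": 125.50,
}

# The numbers we will aggregate are measures...
MEASURE_FIELDS = {"qty", "amount"}
measures = {k: v for k, v in transaction.items() if k in MEASURE_FIELDS}

# ...and everything "about" the transaction is a dimension attribute.
dimensions = {k: v for k, v in transaction.items() if k not in MEASURE_FIELDS}

print(measures)     # {'qty': 10, 'amount': 125.5}
print(dimensions)   # date, customer, item, item_colour
```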
An OLAP cube identifies the relationships between all of the dimensions, and aggregates all of the measures based on those relationships. Depending on the size of your data, the number of dimensions, the complexity of the relationships, and the number of measures, this aggregation could take a very long time, and in a traditional reporting environment it might not even be possible. In order to increase speed and make analysis possible, measures are often aggregated and/or calculated on a pre-defined schedule. Typically this is done overnight, but the frequency will be determined by many different factors. Once the cube is “processed”, measures can quickly be “sliced and diced” based on dimension attributes.
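The idea behind pre-aggregation can be sketched with a toy example. This is not how SSAS stores a cube internally; it is a minimal illustration, with hypothetical fact rows, of why processing the cube up front makes later slicing fast: each measure is summed once per combination of dimension values, so a slice becomes a cheap lookup instead of a re-read of every source transaction.

```python
from collections import defaultdict

# Hypothetical fact rows: two dimensions (year, item), two measures (qty, amount).
facts = [
    {"year": 2013, "item": "Widget", "qty": 10, "amount": 100.0},
    {"year": 2013, "item": "Gadget", "qty": 5,  "amount": 250.0},
    {"year": 2014, "item": "Widget", "qty": 8,  "amount": 80.0},
]

# "Process" the cube: aggregate each measure by (year, item) ahead of time.
cube = defaultdict(lambda: {"qty": 0, "amount": 0.0})
for f in facts:
    cell = cube[(f["year"], f["item"])]
    cell["qty"] += f["qty"]
    cell["amount"] += f["amount"]

# Slicing on a dimension attribute is now a quick scan of the small set of
# pre-aggregated cells, not a pass over every source transaction.
widget_sales = sum(c["amount"] for (year, item), c in cube.items()
                   if item == "Widget")
print(widget_sales)  # 180.0
```

The trade-off is exactly the one described above: the heavy aggregation work happens on a schedule (often overnight), and users get fast answers during the day.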
The first three elements in your BI infrastructure are primarily building blocks, and are only ever accessed by developers and DBAs. In order for your users to get the benefit of all that work, you will need some way to represent the data and actually extract some of that knowledge for them. In most cases this is done with some type of reporting or desktop analysis tool. The sophistication of your reporting tool will determine just how pretty and/or functional you can make the representation of your data.
One of the things I am often accused of is putting too much information in my posts – making them too long. I can only hope you are still reading at this point because now I get to tell you the cool part: if you own Dynamics GP you own a fully functional BI Infrastructure that you can start using today.
Using my same bubbles from above, here is what you get when you install Dynamics GP’s Analysis Cubes Server:
The source data for Analysis Cubes Server is Dynamics GP. During the Analysis Cubes Server installation process, you will identify which companies and modules you would like to include in your BI environment. If you are in a multicurrency environment, you will also be asked to set up translation between the companies, just to make sure your resulting reports and KPIs will make sense.
During the Analysis Cubes Server installation, a series of SQL Server Integration Services (SSIS) packages are created to perform the ETL. SSIS is included with your SQL Server license, and should have been installed when your DBA first set up your Dynamics GP environment. The SSIS packages have been designed to copy the data from the Dynamics GP companies and modules that you selected, apply standard Dynamics GP logic such as translating various fields, perform some basic calculations such as net change, and load the data into the Data Warehouse. The SSIS packages alone can cut days, or even weeks, off of a “from scratch” BI implementation project. The SSIS packages should be scheduled by your DBA.
Analysis Cubes Server will also create a series of Analysis Cubes on your SQL Server Analysis Services (SSAS) server based on the modules selected. Like SSIS, SSAS is part of your SQL Server license and should have been installed when your Dynamics GP environment was first installed. Notice that I said that the cubes are module dependent – and I didn’t mention company this time. The data from ALL of the selected companies is aggregated into the same data warehouse, which is what made your choice on translation so important at installation. Each cube comes complete with a full set of dimensions and measures appropriate to the module that the cube is based on. These dimensions and measures are based on an out-of-the-box implementation of GP, and users can be slicing and dicing within minutes after installation completes. One of the hardest things to do in a BI implementation is identify what users want to measure and what dimensions they want to measure on. Giving them a default set can eliminate many frustrating meetings from the implementation process and ensure that there is meaningful data on “day 1”.
Like many other parts of Dynamics GP, Analysis Cubes Server uses Excel to give users access to the OLAP cubes. Many of the BI enhancements in more recent versions of Excel have been designed specifically for OLAP type analysis, and there are lots of posts like this one from Belinda Allen showing you how to take advantage of them.
For the less technically inclined, Dynamics GP even gives us a great starting point in the Create Pivot Reports window, which creates a basic Excel pivot report based on the selections a user makes. The window can be seen below.
While the Analysis Cubes Server should not be considered your ending point as far as BI goes, it is an amazing starting point that very few other accounting systems can boast out of the box. For those who are starting from nothing, it also cuts weeks of development time and guesswork out of the implementation process.
Do you want to get started with BI?
Please contact me to discuss how we can decrease the time it takes your company to get up and running with BI: rod@briwaresolutions.com, 844-BRIWARE (844-274-9273).
By Briware Solutions, http://www.briwaresolutions.com