Build a Performance Layer to Make the BI Apps Fly

The BI Apps from Oracle give customers a nice head start in getting their BI environment up and running.  Financial Analytics, HR Analytics, Supply Chain Analytics, etc. all come pre-built with much of the code and reports you'd otherwise need to build to support your business.  But for many customers they are simply too slow for their user community when running dashboards, reports and ad-hoc queries.  In an era where an internet search engine can give you what you want in one second, reports running for a minute or more are just not acceptable.

In this post I'll discuss some of the inherent performance limitations in the BI Apps and what you can do about them.  Note that the vast majority of customers really don't have a performance problem with their system, but you can always deliver reporting content faster.  If you are running at 15 seconds per page, wouldn't 5 seconds be that much better?  The real performance problems tend to show up at larger customers with larger data volumes.  It is there that the BI Apps design can be enhanced with more performance in mind.

I’ve written about OBI performance a few times in the past, and I’m sure there will be more to come.  As a refresher, here are a few other posts to take a look at:

What are the BI Apps?

I've been asked many times whether the BI Apps are a Data Warehouse or a BI system.  To some the answer may be clear, but to me it is not.  They use a Star Schema model, which is better suited for Data Marts and BI Applications.  But when you look at how the model was designed, it is actually less suitable for use by OBI dashboards than one might think.

One of the key differences between a DW and a BI system lies in how it was designed and what purpose(s) it serves.  A Data Warehouse is not application specific; it is designed to be more generic in its capabilities and data, and is thus designed with a bottom-up approach.  The bottom-up approach models data as it actually is and does not make assumptions or optimizations about how it will be used by an application.  (Note: 3NF is much better for this purpose than a Star.)  A BI system, however, builds its underlying data model to support precisely how it will be used by its application layer.  Just as you design an OLTP data model to support how the front end will use it, so too for a BI system.  For more on these differences, please check out this article.

So to summarize, a DW is generic while a BI system is specific.

So back to the BI Apps – is its model generic or specific?  The answer is somewhere in between – perhaps more on the generic side of the fence.

Note that the further you sit toward the specific end of that spectrum, the better optimized your data model is for the reports and queries you run on it.  It is better for your metrics and the fields that are of interest to you.  Any movement toward the generic end by definition means non-optimized.  The question then becomes:

Why So Generic and Non-Optimized?

There are several reasons for this, all of which are pretty straightforward to understand:

  1. Broad Industry Support:  Oracle has to account for such a wide array of customers, industries, roles, business models and company sizes that they have to make the model more generic to serve everyone reasonably well.  A model that is optimized for Company A will by definition not be optimized for Company B.  Every company does things slightly differently, even in something as repeatable as financial reporting.  The 80/20 rule is at play here.
  2. Development Time:  This goes for both Oracle's development effort and customer customizations.  A simpler design tends to have fewer objects, which is great for building ETL code but not great for being highly optimized.  For the product to be a success, customers must be able to implement it in a reasonable amount of time.  Imagine having to carry a field across in one place as opposed to 3 or 4 and you can see where the benefit comes into play.  Another example might be a degenerate field in a fact table – this way only one table is needed and not two.
  3. Variety of Source Systems:  Keep in mind that the BI Apps support several source systems with pre-built mappings.  Each system does things a little differently; thus compromises had to be made in order to make the BI Apps work across a variety of source systems.
  4. Weight:  Similar to the development time item, Oracle could have elected to split many things up for performance reasons, but the counter to that is the product becomes heavier and more difficult to configure.  10,000 tables are heavier than 5,000 tables, even though the 10,000-table model might be the faster solution.
  5. Legacy:  Some of the older modules were simply designed that way a long time ago, and changing the data model may have been deemed to be too much effort.
  6. Product Maturity:  The BI Apps are still evolving.  Some of the models are only a few years old, and of the 1,000 items on the development queue, they just haven't gotten around to all of them yet.
  7. Data Model Customizations:  Every customer will make customizations to the data model to support their specific needs.  Sometimes these customizations are forced into the existing model in a correct but sub-optimally performing manner.  Oracle has no responsibility here; this is all on the implementers and the amount of time & effort they have to do it correctly.
  8. Custom Reports and Metrics:  This is perhaps the most important item here.  Oracle simply cannot predict what your reports and metrics will be, so tailoring a data model to them ahead of time is impossible.  Note, however, that even if one were using 100% OOB reports (which is unrealistic to begin with), the underlying data model is still not a perfect fit, because of all the compromises Oracle had to make above.

To summarize, in order to support a wide array of customers, each with unknown requirements, Oracle had to make compromises and get you most of the way there.  The BI Apps were never intended to be installed and work without modifications; they have always been a starter kit.  If they are missing something, then add it; if you don’t like their definition, then change it; if their aggregates and tables aren’t the best ones for you, then build your own.

That last line is one to key in on: if their aggregates and tables aren’t the best ones for you, then build your own.  We already know they are not the best ones for your project for the reasons listed above.  So, go build your own.

Architecture: Build a Performance Layer

Let’s build our own tables with a little bit of forethought first.  I want to introduce the concept of a Performance Layer to the BI Apps.  Shown below is my version of Oracle’s DW Reference Architecture, dated Feb 2010.

As this is a Complete Data Warehouse Architecture, it allows both for efficient storage of data in the Data Warehouse layer and for efficient usage of the data in one or more Marts in the Performance Layer.  These Marts can be called Data Marts or BI Applications, and they should be tailored to a form that is best suited to the tools being used.  Thus, a Hyperion Essbase cube can be one of the data marts, as can a flat structure for a Data Mining tool, as can a highly optimized Star Schema for an OBI system.

Back to the discussion on whether the BI Apps data model is a DW or a DM (BI system): I think it clearly is not a full-blown Data Warehouse.  But within the context of the subject areas it contains, it most certainly is – a subject-area-specific Data Warehouse that happens to use a star model instead of 3NF.  A Sales Analytics system is the Sales Data Warehouse.

With that in mind, plus the generic nature of its data model, there is room to build a Performance Layer to work with OBI.  This Performance Layer should be designed purely from a top-down perspective to optimally support the reports and metrics that are in your system.

Here is a brief analogy to help.  Consider your local supermarket.  Your supermarket is organized in a way that makes it generally easier to get at items than the warehouse they came from.  But there are so many things in the aisles that each person simply doesn't need, and thus they have to go through all of the aisles to get their items.  Suppose you were interested in only Kosher items – you'd have to go through the whole store.  But many supermarkets have a Kosher section to make it easier and faster for those customers.  The Performance Layer is the Kosher section of your supermarket.

Performance Layer Techniques

What does a performance layer look like?  It is nothing more than the normal kinds of tools that we use for OBI performance tuning, such as aggregates and mini-dimensions.  However, as we're starting with a pre-built model, we have the ability to use a few more tools here to move away from the generic data model and strive towards a fast, tight, query-specific model.

What should our goals be for building the Performance Layer?

  1. The first and overarching goal is to reduce and eliminate I/O with extreme prejudice.  I/O is nearly always the bottleneck in a BI system.  Even one I/O-heavy query can bring your database to a crawl and impact all other users on the system; all of these I/Os must be improved.
  2. All Dashboard reports use only the fact tables in the performance layer.  As I will show below, in many cases the OOB tables are I/O hogs due to their sheer width.
  3. Go for global benefits first, then spot tune as needed.  A rising tide lifts all boats is the thought process here.  Develop a base layer for the Performance Layer which maps cleanly to the existing grains of the OOB objects but consists of smaller and faster objects.  If successful, the vast majority of all queries will use these smaller and faster tables, not just the dashboard reports mentioned in #2.
  4. After the base layer, optimize for common use cases and reports and for high priority or difficult areas.  This is where your aggregation strategy comes into the picture.

Techniques and Tools

Below I will lay out a variety of techniques and features to build into your Performance Layer.  Many of these I’ve written about before (along with many others), but this list is specific to the BI Apps.

  • Table and Index Partitioning:  This is just about mandatory on fact tables these days, and in many cases on larger dimensions as well.  For example, if you frequently run reports for a single customer, consider hash partitioning the W_CUSTOMER_D table on that field to help the lookups.  Even the base OOB fact tables should be partitioned.
  • Star Transformations:  This also is mandatory; ensure you have set up your indexes properly and star transforms are enabled.  Star Transformations are designed to dramatically speed up queries which go after a small subset of data from a Fact table.  They do nothing for queries which are lightly filtered and have to examine a large amount of data.
  • Column Pruning/Skinny Tables:  Consider the columns in your table and the data that you actually use in your dashboards.  Most likely you are using perhaps only 20% of what the OOB data model has in it, and many of its columns are populated with nothing but NULLs.  Build tables that have only the items you need.  Build them as nightly rebuilds (either Create Table As Select or Insert /*+ APPEND */ statements).  Throw out all of the BI Apps bookkeeping information such as insert dates and INTEGRATION_ID.  Include only those columns which will be well used – start with just a few and expand as you need to.  Have the mentality of keeping these tables as skinny as possible – a column needs a strong case to be included.

This quite frankly is one of the more important tools to consider.  I have seen cases where building a fact table with only the FKs and Metrics needed can cut Average Row Length (AVG_ROW_LEN) storage down to 33% of the original table – that is a 3X improvement alone on pure I/O.
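To make the column pruning idea concrete, below is a minimal sketch of a nightly skinny fact build.  All of the table and column names (WC_SALES_PERF_F, W_SALES_ORDER_LINE_F, ORDER_DT_WID, etc.) are purely illustrative, it assumes a YYYYMMDD-style date key for the partitioning, and it folds in the compression option covered in the next bullet.

    -- Minimal sketch: nightly rebuild of a skinny copy of a wide OOB fact table.
    -- Keep only the FKs and metrics the dashboards actually use; all names are illustrative.
    CREATE TABLE WC_SALES_PERF_F
      COMPRESS                                      -- block-level compression (see the next bullet)
      PARTITION BY RANGE (ORDER_DT_WID) (           -- align partitions with common date filters
        PARTITION P2011 VALUES LESS THAN (20120101),
        PARTITION P2012 VALUES LESS THAN (20130101),
        PARTITION PMAX  VALUES LESS THAN (MAXVALUE)
      )
      PARALLEL
    AS
    SELECT /*+ PARALLEL */
           ORDER_DT_WID,                            -- FKs to the dimensions actually used
           CUSTOMER_WID,
           PRODUCT_WID,
           ORDER_STATUS_WID,
           NET_AMT,                                 -- only the metrics in use
           QTY
      FROM W_SALES_ORDER_LINE_F;                    -- no INTEGRATION_ID, ETL audit dates or unused attributes

On the first cut, err on the side of leaving columns out; it is much cheaper to add one later than to carry dead weight in every query.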

  • Table Compression:  To compact any table in the performance layer even further, build it with the COMPRESS option.  Do so on B-tree indexes as well.  Anywhere from a few extra percent to 20% of I/O improvement can be gained with this very simple option.  Note that you can use this option on tables in the OOB model, but be warned that table compression performs terribly when a table has a lot of updates to it.  Since the tables in the Performance Layer are truncate-and-reloads, this should not be much of a concern.  Also ensure you have a large database block size of 32K (and not 8K or less) when using this option – compression happens within a block, so the larger the block, the more effective the compression.
  • Record Set Splitting:  Align the data in your tables with how the tables are actually used by OBI.  I have two good examples to draw on:
    • In HR Analytics, the primary Workforce table W_WRKFC_EVT_MONTH_F contains both transactional event records (e.g., a Promotion) and headcount snapshot records.  However, the OBI RPD maps this one table to two different Logical Table Sources – one for events and one for headcount snapshots.  Thus whenever a metric counting events is used, it has to wade through all of the headcount snapshot data, which is the majority of the table.  So split this table into a snapshot fact table and an events fact table, and map each appropriately to the different Logical Table Sources.  Any query using an event metric will run significantly faster.  At a recent client, the Performance Layer table for Events was 56X smaller than the OOB table using the above 3 techniques, and that is with zero aggregation!
    • The W_EMPLOYEE_D table holds a variety of different roles for people.  In fact the table is completely misnamed; it should be called W_PERSON_D.  Again, as OBI uses many different roles in the RPD, such as Contact, Applicant, Non-Employee, etc., split this table up to contain the different groups of people.  In a recent system, employees constituted only about 20% of all records; the rest were job applicants.  As a less optimal shortcut, you can use partitioning on this table, but make sure to add the appropriate filter key in the Business Model content tabs – for example APPLICANT_IND = 1.
  • Better Stars:  In some cases trade-offs were made in the data model that hurt performance.  In places where you find the data model is not a clean Star schema, by either being snowflaked or normalized too much, fix it to make a simple, clean star.  The primary reason for this is that joins are expensive; try to eliminate them.  This is made very clear in Financial Analytics, which employs a snowflake model on GL segment hierarchies.  Converting that model to a star alone will yield fantastic benefits.
  • Dimensional Aggregates & Mini-Dimensions:  When large dimensions exist, say with millions of records, sometimes the join-back from the fact table to the dimension to retrieve data values can be the performance bottleneck.  For example, suppose only 3 fields are commonly used in a very large Customer dimension.  A Dimensional Aggregate might simply build a new table using a SELECT DISTINCT on those three fields, bringing the row count down from millions to only a few thousand.  Any queries which use these fields in calculations or report breakouts will benefit from looking up against a much smaller table.  Note that I am not building these primarily for filtering; bitmap indexes already do a great job there.  They help with filtering too, but the join-back is the real reason for them.

The concept can be expanded a bit further across more than one dimension, in what is called a mini-dimension.  Imagine that you have lots of commonly used fields across many dimensions, each with only a few values.  Think of statuses, regions, categories, classes, etc.  A mini-dimension is simply a select distinct across those values with a new key assigned to it.  For example, you might build a mini-dimension keyed on just Product Category and Order Status.

Although there may be a lot of orders and a lot of products, there are only 18 possible combinations of Product Category and Order Status.  This mini-dimension can handle the heavy lifting for these fields while avoiding the big, slow base tables.  There is an added benefit too – only 1 FK is needed in any aggregates, reducing fact table row width some.

Although the configuration is a bit odd, and Mr. Kimball might not like it, I recommend adding these FKs to the base fact tables in as many places as you can, even the OOB tables.  The odd thing is that there are then technically two FKs carrying the same information at times: one to, say, the Order Details dimension and one to the mini-dimension.  This allows OBI to write one query that hits the mini-dimension and another that still works with the base dimension.  Thus, you can get both the performance benefits and the full complement of detail data when needed.
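As a minimal sketch of the mini-dimension build, again with purely illustrative table and column names (W_PRODUCT_D.PRODUCT_CATEGORY, W_STATUS_D.ORDER_STATUS and the skinny fact from earlier), it is nothing more than a keyed select distinct:

    -- Minimal sketch: a mini-dimension is just a keyed SELECT DISTINCT over the
    -- low-cardinality fields that are commonly used together.  All names are illustrative.
    CREATE TABLE WC_ORDER_MINI_D AS
    SELECT ROW_NUMBER() OVER (ORDER BY PRODUCT_CATEGORY, ORDER_STATUS) AS ROW_WID,
           PRODUCT_CATEGORY,
           ORDER_STATUS
      FROM (SELECT DISTINCT d.PRODUCT_CATEGORY, st.ORDER_STATUS
              FROM W_SALES_ORDER_LINE_F f
              JOIN W_PRODUCT_D d  ON d.ROW_WID  = f.PRODUCT_WID
              JOIN W_STATUS_D  st ON st.ROW_WID = f.ORDER_STATUS_WID);
    -- During the nightly skinny-fact rebuild, resolve the new FK (e.g., ORDER_MINI_WID)
    -- by joining back on PRODUCT_CATEGORY and ORDER_STATUS, so the Performance Layer
    -- facts (and, if you choose, the OOB facts) carry the mini-dimension key.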

  • Aggregates:  Interesting that this is listed this far down on the list, isn’t it?  If you do all of the things above, there is a good chance that many of your reports will meet or exceed your performance thresholds.  But if some are not there yet, now is the time to start finding and building one or more aggregates.  Some pointers on how to do this:
    • Examine the default versions of your dashboard reports and their grains.  When a page is first loaded, typically not many selective filters are used, so a larger amount of data will be scanned.
    • Find out what other options on the page are more likely to be used – any commonly used filters/prompts, column selectors or even drill downs.
    • Try to build an aggregate which is just slightly below the highest level of the default pages.  It is a trade-off between ultimate performance and a slightly lower-performing aggregate that can be used in more scenarios.
    • The more UI options you have, the more difficult optimizing for all of them will become due to the sheer # of combinations.  Note that Mini-dimensions will help out heavily here; many dimensional fields can be captured with only one FK.
    • Plan to build a 3-level aggregation structure in your Performance Layer: the base-grain skinny tables described above, aggregates for common use cases, and targeted aggregates for the highest priority or most difficult reports.

    • Do not build an aggregate that works for only one report unless absolutely critical.  You do not want to get into that kind of setup – it will be unmaintainable.
  • Extreme Aggregates:  If you have even more extreme needs after this, it is time to start thinking a little more extremely.  Perhaps your security model prevents much aggregation at first glance.  I recently faced this issue, and the solution was to create an unusual aggregate that was computed for each possible level in a multi-level hierarchy.  This meant that the table was not strictly additive, as a given base value appears in multiple records (once per level).  However, since the security model filter was mandatory, it was safe to do so.
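As a rough sketch of that per-level idea, assuming a hypothetical events fact (WC_WRKFC_EVT_F, the split-out events table from the earlier bullet) and a hypothetical flattened hierarchy table (WC_SUPV_HIER_D) with one row per organization, ancestor and level:

    -- Rough sketch: pre-aggregate once per hierarchy level so the mandatory security
    -- filter (on SUPV_LEVEL / SUPERVISOR_WID) always lands on exactly one slice.
    -- Base rows contribute to every ancestor, so this table is NOT safe without that filter.
    CREATE TABLE WC_WRKFC_EVT_LVL_A AS
    SELECT h.HIER_LEVEL               AS SUPV_LEVEL,
           h.ANCESTOR_SUPERVISOR_WID  AS SUPERVISOR_WID,
           f.MONTH_WID,
           COUNT(*)                   AS EVENT_CNT
      FROM WC_WRKFC_EVT_F f
      JOIN WC_SUPV_HIER_D h ON h.ORG_WID = f.ORG_WID   -- one row per (org, ancestor, level)
     GROUP BY h.HIER_LEVEL, h.ANCESTOR_SUPERVISOR_WID, f.MONTH_WID;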

Another thing to consider, to cut down on the number of queries that OBI typically generates, is to combine data sets back into one at a higher level, both in the data model and in OBI.  Going back to the Workforce example of events and snapshots, consider an aggregate that contains a summary of both events and snapshots in the same table, with OBI altered to simply map both kinds of metrics to it.  Instead of having different rows for events and headcount, the table would have a sum of events and the ending headcount by month, for example – thus one query could bring back both record sets (a sketch follows the note below).

Note: be very careful if you have prior period or Year Ago metrics; you'd have to account for missing combinations as time moves along (e.g., there was no Sales office in Boston last year but there is this year, or vice-versa).  This is why this practice is typically not done, but analyze it very closely if you intend to use it.
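Here is a minimal sketch of that combined aggregate, with purely illustrative column names (EVENT_IND, SNAPSHOT_IND, HEADCOUNT); it assumes the event rows and the month-end snapshot rows can be told apart in the base Workforce table:

    -- Minimal sketch: one monthly row carries both the event count and the
    -- month-end headcount, so a single query can answer both kinds of metrics.
    CREATE TABLE WC_WRKFC_MONTH_A AS
    SELECT MONTH_WID,
           ORG_WID,
           SUM(CASE WHEN EVENT_IND = 1    THEN 1         ELSE 0 END) AS EVENT_CNT,
           SUM(CASE WHEN SNAPSHOT_IND = 1 THEN HEADCOUNT ELSE 0 END) AS MONTH_END_HEADCOUNT
      FROM W_WRKFC_EVT_MONTH_F
     GROUP BY MONTH_WID, ORG_WID;
    -- In the RPD, both the event metrics and the headcount metrics would then map to
    -- this one table for the monthly grain.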

Recently, using these very techniques, I was able to take report performance from the 5-10 minute range down to everything under 30 seconds, with 80% under 15 seconds.  Some individual queries which used to take 3 minutes against the base OOB fact table for one month ended up responding in 0.5 seconds.  These techniques do work!

Building the Performance Layer

Short Answer:  Let your ETL guys figure it out.  Just kidding.

Long Answer:  It depends on a lot of things, really.  How much do you want to use the DAC and the techniques used there?  How much time do you have to develop the code?  How much time do you have to run the jobs?  Do you feel comfortable using Materialized Views?

I'll just mention a few things here – you can disagree if you like, that's fine; this is just my preference to some degree.

  • I prefer not to expend the additional effort required to perfectly integrate all of these tables and indexes into the DAC.  It’s just too much work.  Besides, they will build much more quickly when the data stays in the database.
  • Thus I prefer to start with CREATE TABLE AS SELECT /*+ PARALLEL */ to get the aggregates going, but when it comes time to productionize them there should be permanent DDL, and the statement should be changed to an INSERT /*+ APPEND */ INTO ... SELECT.  The benefit here is that Oracle will retain optimizer stats when the table is not dropped, allowing it to identify where to build histograms (a sketch of this pattern follows the list).  Materialized Views are certainly an option as well.  Do not be too worried about the build time of the tables; it's the indexes and the stats gathering that take the most time.  Building some tables with 75 million records and 500 partitions may only take about 3 minutes or so, while the indexes and stats may take 25 or more.
  • In either technique, make sure that errors are captured by Informatica and handled appropriately.
  • As you have more time to get more sophisticated with aggregates, smarter and better techniques should be used to make sure your build times do not grow out of control.  Oracle’s Performance Tech Note V7 has a lot of information on this.  Consider saving and reusing stats nightly followed by a weekly stats gathering to help eliminate a costly portion of the build.
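As a minimal sketch of that production pattern, using the same illustrative skinny fact from earlier:

    -- Once the skinny table exists as permanent DDL, the nightly build becomes a
    -- truncate and direct-path reload rather than a drop-and-recreate.
    TRUNCATE TABLE WC_SALES_PERF_F;

    INSERT /*+ APPEND PARALLEL(WC_SALES_PERF_F) */ INTO WC_SALES_PERF_F
    SELECT /*+ PARALLEL */
           ORDER_DT_WID, CUSTOMER_WID, PRODUCT_WID, ORDER_STATUS_WID,
           NET_AMT, QTY
      FROM W_SALES_ORDER_LINE_F;
    COMMIT;

    -- The indexes and stats are where most of the build time goes; gather them last
    -- (or swap in saved stats nightly and do a full gather weekly, per the tech note).
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'WC_SALES_PERF_F', cascade => TRUE);
    END;
    /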

Posted on September 7, 2012, in Architecture, Data Modeling, Data Warehousing, Oracle's BI Apps, Performance.

  1. Excellent post. Really helpful information… only trouble I see is when customers say “What do you mean I spent all this money on OBI Apps licenses and now I need to tear it apart and put it back together?”

    • Thanks for bringing this aspect of it up Doug. I treaded lightly on that topic here, but yes you are absolutely right. Perhaps I should update the post to be a little more succinct on that topic.
