Tuning Time Series Metrics

In this post I am going to explore some performance issues related to OBI’s time series functions. Released back in OBI 10g, the ToDate() and Ago() functions made it significantly easier to create a variety of time series metrics. In older versions of Siebel Analytics, creating time series metrics was a very manual effort involving a lot of aliases and special joins that could at times become a little confusing to the developer. There was a Time Series Wizard to assist, but if you are like me you never use wizards. The time series functions solved all of that; using them is a piece of cake, requiring only a minor enhancement to the Date dimension.
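For reference, the function-based approach needs little more than a logical column expression built on the new functions. The sketches below show the general shape only; the measure and level names are made up for illustration, so map them to your own business model:

Year-ago revenue:      Ago("Sales"."Fact Revenue"."Revenue", "Sales"."Date"."Year", 1)
Year-to-date revenue:  ToDate("Sales"."Fact Revenue"."Revenue", "Sales"."Date"."Year")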

All is rosy with the world then, correct? Well, not so fast. The reality is that these functions do some very strange things behind the scenes in order to work properly. So strange, in fact, that the database engine typically has some difficulty figuring out what to do with them. One thing I’ve learned over the years when it comes to database engine performance: keep it simple if you want it to run fast.

As it turns out, the strange things OBI does for the time series functions cause a noticeable performance hit compared with the old technique. This short post will discuss that in more depth.

The New Way

The SQL that is generated by time series functions does some rather unusual things.  Shown below is a query template taken from a real YTD report.

SELECT <a bunch of metrics and fields>
FROM <a bunch of tables>,
(select distinct min(D1.c1) over (partition by D1.c4)  as c1,
    min(D1.c2) over (partition by D1.c4, D1.c5)  as c2,
    D1.c3 as c3
from
    (select Case when case D1.c6 when 1 then D1.c4 else NULL end  is not null then
        Rank() OVER ( ORDER BY case D1.c6 when 1 then D1.c4
        else NULL end  ASC NULLS LAST ) end as c1,
        Case when case D1.c7 when 1 then D1.c4 else NULL end  is not null then
            Rank() OVER ( PARTITION BY D1.c5 ORDER BY case D1.c7 when 1
            then D1.c4 else NULL end  ASC NULLS LAST ) end as c2,
            D1.c3 as c3,
            D1.c5 as c4,
            D1.c4 as c5
        from
            (select T31328.ROW_WID as c3,
               T31328.DAY_DT as c4,
               T31328.FSCL_YEAR as c5,
               ROW_NUMBER() OVER (PARTITION BY T31328.FSCL_YEAR ORDER BY T31328.FSCL_YEAR DESC) as c6,
               ROW_NUMBER() OVER (PARTITION BY T31328.FSCL_YEAR, T31328.DAY_DT ORDER BY T31328.FSCL_YEAR DESC, T31328.DAY_DT DESC) as c7
              from
               W_DAY_D T31328 /* Dim_W_DAY_D_Common */   ) D1    ) D1
     ) D24,
 (select distinct min(D1.c1) over (partition by D1.c4)  as c1,
    min(D1.c2) over (partition by D1.c4, D1.c3)  as c2,
    D1.c3 as c3
from
   (select Case when case D1.c5 when 1 then D1.c3 else NULL end  is not null then Rank() OVER ( ORDER BY case D1.c5 when 1 then D1.c3 else NULL end  ASC NULLS LAST ) end as c1,
     Case when case D1.c6 when 1 then D1.c3 else NULL end  is not null then Rank() OVER ( PARTITION BY D1.c4 ORDER BY case D1.c6 when 1 then D1.c3 else NULL end  ASC NULLS LAST ) end as c2,
     D1.c3 as c3,
     D1.c4 as c4
    from
     (select T31328.DAY_DT as c3,
       T31328.FSCL_YEAR as c4,
       ROW_NUMBER() OVER (PARTITION BY T31328.FSCL_YEAR ORDER BY T31328.FSCL_YEAR DESC) as c5,
       ROW_NUMBER() OVER (PARTITION BY T31328.FSCL_YEAR, T31328.DAY_DT ORDER BY T31328.FSCL_YEAR DESC, T31328.DAY_DT DESC) as c6
      from
           W_DAY_D T31328 /* Dim_W_DAY_D_Common */  ) D1  ) D1
     ) D23
WHERE <some joins>
and FACTS.DATE_FK_WID = D24.c3
and D23.c3 = <some date>
and D23.c2 >= D24.c2;


I’m not going to bother explaining what those inline views actually do and how they work; honestly, I don’t know, nor do I care to spend time looking at them. I’m going to take it on faith that they work.

The point here is that for optimal performance on dimensional models, database engines (Oracle and SQL Server at least, to my knowledge) work best with star queries. Once you start to deviate from that approach, the database will not come up with an optimal plan; the query just gets too complex, or there are constructs that flat out prevent it.

The query above does a “>=” comparison between two of the inline views, D23 and D24, and then joins the result to the fact table. Unfortunately, this does not result in a clean star transformation, and thus performance suffers.

The Old Way

The old way of mapping time series metrics involves a lot more work in the RPD and even some changes to the data model. A brief comment about the word “old”: this technique is still heavily used by the BI Apps, even in the latest 7.9.6.3 release, so it is still completely valid. If you have a copy of the BI Apps, take a look at its time series metrics and how they have been constructed.

The old way uses the exact same concepts as canonical time, which is a super-conformed version of time. Canonical time effectively means that different perspectives of a fact are linked separately to a single date table, each according to its own purpose. Thus one fact alias may link to time on Create_Date_Wid while another links using Closed_Date_Wid. This allows for different metric calculations based on different dates, while keeping a single time dimension to report on instead of several.
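As a rough sketch of that idea (the fact alias and measure names below are made up, not taken from a real model), the two perspectives produce queries like this:

-- Orders by create date: alias #1 joins the create-date key of the fact to the date dimension
SELECT D.DAY_DT, SUM(F.ORDER_AMT) AS amount_created
FROM W_ORDER_F_CreateDt F, W_DAY_D D
WHERE F.CREATE_DATE_WID = D.ROW_WID
GROUP BY D.DAY_DT;

-- Orders by close date: alias #2 of the same physical fact joins on the close-date key
SELECT D.DAY_DT, SUM(F.ORDER_AMT) AS amount_closed
FROM W_ORDER_F_CloseDt F, W_DAY_D D
WHERE F.CLOSED_DATE_WID = D.ROW_WID
GROUP BY D.DAY_DT;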

The old way of time series does the same thing except to an even further degree.  Instead of linking on Create Date vs. Close Date, we link using Now vs. Last Year vs. Last Quarter vs. all days this year.  Each time perspective (whether it’s a prior time window or a to-date window) joins differently to a single and common time table.

The join for a Last Quarter fact table source, for example, links to a new field in the Date dimension called QUARTER_AGO_WID.

Doing so effectively takes records from last quarter and shifts them to this quarter’s date record which is ultimately used on your report.
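A minimal sketch of what that join produces at query time (the fact alias and measure names are made up; QUARTER_AGO_WID is the new Date dimension column):

-- The Last Quarter fact alias joins its date key to the shifted key on W_DAY_D
SELECT D.DAY_DT, SUM(F.REVENUE) AS revenue_last_quarter
FROM W_SALES_F_LastQtr F, W_DAY_D D
WHERE F.DATE_WID = D.QUARTER_AGO_WID
AND D.DAY_DT = <today>
GROUP BY D.DAY_DT;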

The downside to all of this is that it can be a lot more work to build. As an example, the BI Apps 7.9.6.3 HR Workforce Events model contains a long list of these time series Logical Table Sources, which gives a sense of how much manual work it can take.

For each of these Logical Table Sources (LTSs), you will need to:

  • Make aliases
  • Make physical joins
  • Change the join to time for each one
  • Map each into the Business Model
  • Set the content tab properly (identical to the non-time series LTS)
  • Map physical metrics

What ultimately happens when you do this, however, is a much simpler and faster query. The SQL template below is for the same report as the first query, but using the old technique. The example again is a YTD metric, which requires a range join in the Physical Layer. The range join results in a simple BETWEEN the start of the year and the current date:

SELECT <a bunch of metrics and fields>
FROM <a bunch of tables>
WHERE <a bunch of joins>
And FACTS.DATE_WID BETWEEN W_DAY_D.START_OF_YEAR_WID and W_DAY_D.ROW_WID
And W_DAY_D.DAY_DATE = <today>
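
The only data model change this approach needs is the helper column on the Date dimension. A minimal sketch of adding and populating it, assuming the usual YYYYMMDD-style surrogate keys (adjust to your own key format):

-- Add and populate the start-of-year helper key on W_DAY_D
ALTER TABLE W_DAY_D ADD (START_OF_YEAR_WID NUMBER(10));

UPDATE W_DAY_D
SET START_OF_YEAR_WID = TO_NUMBER(TO_CHAR(TRUNC(DAY_DT, 'YYYY'), 'YYYYMMDD'));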


Results

On a real-world customer system where I was called in to help with performance issues, the results for these two queries were striking: the report improved from 35 seconds to only 5 seconds, a 7X improvement, simply by applying a bit more elbow grease in the RPD and adding or using a few more columns in the Date table.

So Why Is It Faster?

Note: the information below pertains to Oracle databases; I am unsure how others handle this.

It all comes down to how star transformations work.  A star transformation basically does the following:

  1. Do your lookups and filters on your dimensions.  Get the list of PK values from each
  2. Pick the smallest set of records in the fact table, based on all of those dimensions at the same time

It is in step #2 where our friend the bitmap index does its thing. Bitmap indexes have two key properties that we love for BI systems. The first is the most commonly known use: filtering on low cardinality fields such as Employee_Type or Order_Status. Thus we build them on the dimensional attributes that we filter on frequently.

The second and lesser-known feature is perfect for star transformations: more than one bitmap index can be used at a time to access a table. If you were to build 15 B-Tree indexes on a fact table, generally only one of them can be used at a time, rendering the other 14 useless to your query. For finding the smallest possible subset of records from a fact table, a single index is nowhere near good enough. That is why the recommended technique is to build single-column bitmap indexes on every FK in a fact table, regardless of its cardinality. It is the multi-index capability that helps queries here.
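As a sketch, the recommendation looks like this (the fact table and FK column names are made up):

-- One single-column bitmap index per foreign key on the fact table
CREATE BITMAP INDEX W_SALES_F_DATE_BMX ON W_SALES_F (DATE_WID);
CREATE BITMAP INDEX W_SALES_F_PROD_BMX ON W_SALES_F (PRODUCT_WID);
CREATE BITMAP INDEX W_SALES_F_CUST_BMX ON W_SALES_F (CUSTOMER_WID);

-- Star transformation also needs to be enabled for the session or instance
ALTER SESSION SET star_transformation_enabled = TRUE;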

Oracle is thus able to do AND operations on multiple indexes at the same time before it actually goes to read the fact records:

WHERE Index(FK1) in (list of values)
  AND Index(FK2) in (list of values)
  AND Index(FK3) in (list of values)
  …

With this in mind, we can see how the old time series technique fits the star transformation pattern perfectly while the new technique does not. It’s as simple as that. A year-ago query will look exactly like a regular query except for a different dimensional column being used, and a ToDate query simply adds a range of values to the in() list for the date FK.
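For example, a year-ago report under the old technique keeps the same star shape and just swaps in a different date-dimension column (the fact and measure names are again made up; YEAR_AGO_WID is an assumed helper column built the same way as QUARTER_AGO_WID):

-- Year Ago with the old technique: same star, different date-dimension column
SELECT SUM(F.REVENUE) AS revenue_year_ago
FROM W_SALES_F F, W_DAY_D D
WHERE F.DATE_WID = D.YEAR_AGO_WID
AND D.DAY_DT = <today>;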

Tradeoffs

The obvious tradeoff to consider is effort vs. performance.  If performance is more important to you and you have a large data set to deal with, then the old technique is more appealing.  If not, then go ahead and use the new method.

One additional item to consider is QA. I strongly recommend beefing up QA on the more manual approach to ensure the developers did all of their manual work properly. With the new time series functions there is very little for the RPD developer to get wrong; so much is piggybacked on existing objects that there is simply less that can break.


Posted on November 9, 2012, in Performance, Repository.

  1. Hi Jeff,

    great post and nice to see it spelled out for once.

    As far as “you can see how much manual work it can take” is concerned: using UDML it’s actually quite simple and very fast to multiply and remap LTSs.
Falls more into the “don’t do this at home, kids” category, but still an awesome tool for achieving old-school time series functionality.

    Cheers,
    C.

  2. Hi Jeff,
    I have stumbled across this blog after looking at a similar situation in a model I have been working on, although this was using a logical fact table aggregated to ISO Week. I have date tables of Date, Week, Month and Year and originally had these joined to an ISO Year & Week column on my aggregated FACT table, causing the ROW_NUMBER function to be generated in the SQL.

    To solve the problem I added a start_of_week_date to my fact table and added a join from the DATE dimension to my FACT table. The result gave me a very simple query and no analytic functions, just aggregate functions as hoped.

    I would be interested to know if you tried adding full date to your FACT table in the first example and joined this to a DATE, as it would appear you have used some sort of week id, unless I have mis-read the nasty SQL.

    Cheers.

Typically you do not use a date field in the fact table for joining; use a surrogate numeric integer key. In my example I cheated a bit, but normally a day column is captured in the fact as YYYYMMDD and a week as YYYYWW, etc. Julian numbers are also possible, but they make management more difficult, so stick with the numeric keys.

      Jeff M.

  1. Pingback: Keep It Simple Stupid « Achieving Great BI With Oracle BI
