Why I created a blog

It's been many years since I first created this blog. It has remained true to Essbase and related information, and hopefully it has answered questions and given you insight along the way. I will continue to provide my observations and comments on the ever-changing world of EPM. Don't be surprised if the scope of the blog changes and brings in other Hyperion topics.

Thursday, December 16, 2010

Fragmentation in ASO Cubes

It seems everything old is new again. For years we have been defragmenting BSO cubes by exporting the data, clearing the cube, and then re-importing the data. When ASO came along, I thought, "Great, since there are no blocks I won't get fragmentation and don't have to worry about it anymore." Turns out I was wrong.
I was at a client and needed to make a copy of their 50 gig production cube onto their test system to do some work with it. I exported the data, copied the outline over to test, and re-imported the data. I looked at the file system during the load to see how far along it was and was surprised that, when the cube finished loading, the .dat file was only 9 gig. I ran the materialization of aggregations using a saved aggregation from query tracking and the cube grew to 25 gig, about half the size of the production cube.
There are a couple of things to know.
  1. During the year, the client deletes some data out of the cube by deleting the members (old years)
  2. The cube contains no slices
  3. The cube gets an incremental update multiple times a month by clearing out the current Month/Year and reloading the data
  4. The cube has been in production for about a year and the outline gets new members added on a regular basis.
Since this worked so well on test, we took the original export, cleared the production cube, and reloaded it; its size is now comparable to the test cube (we actually added additional aggregations, which make it a bit larger). OK, lesson learned: the client will now add a process to periodically rebuild their ASO cube. I did not compact the outline, but this is something else they can add to their process. Who would have thought!
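The export/clear/reload rebuild described above can be driven from a MaxL script. This is only a sketch: the server, login, app/db, and file names are all placeholders, and the last step assumes you re-materialize aggregations afterward as we did.

```
/* Placeholders throughout: myserver, admin, password, myapp, mydb */
login admin password on myserver;

/* Export the level-0 data */
export database myapp.mydb level0 data to data_file 'mydb_lev0.txt';

/* Clear all data from the cube (the outline is untouched) */
alter database myapp.mydb reset data;

/* Reload the exported data */
import database myapp.mydb data from data_file 'mydb_lev0.txt' on error abort;

/* Re-materialize aggregate views */
execute aggregate process on database myapp.mydb;

logout;
```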


srx said...

Glenn, what about the ESSCMDQ utility? Is its use restricted to outline (structure) fragmentation, or may it be used for data fragmentation as well?


Login "svr" "admin" "password";
Select "app" "db";
Openotl "2" 1 "app" "db" "db" "y" "y" 0;
Writeotl 1 "2" 1 "app" "db" "db";
Restructotl 1;
CloseOtl 1;
Unlockobj 1 "app" "db" "db";

GlennS said...

I've not checked it for data restructuring. I have used the utility for outline compaction; I wrote about it a few months ago.
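For what it's worth, later Essbase releases also expose outline compaction directly in MaxL, in addition to the ESSCMDQ route, so it can be folded into the same maintenance script; the app/db names here are placeholders:

```
/* Compact the ASO outline file to reclaim space left by outline changes */
alter database myapp.mydb compact outline;
```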

Anonymous said...

Hi Glenn:

Thanks for the post!

I am curious: how does your client incrementally update the cube multiple times a month, clearing out the current Month/Year and reloading the data, without using slices in an ASO cube? I thought slices were the way to do it?


GlennS said...

I have them using an MDX statement inside of MaxL to do a physical clear of the data. It is slower than doing a logical clear, but it avoids some of the issues of a logical clear.
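For reference, this kind of physical clear uses MaxL's clear data in region statement, which takes an MDX region specification plus the physical keyword. A sketch, assuming hypothetical app/db and Year/Period member names:

```
/* Physically remove the cells for the current month/year before reloading.
   myapp, mydb, FY2010, and Dec are placeholder names. */
alter database myapp.mydb clear data in region
  'CrossJoin({[FY2010]}, {[Dec]})' physical;
```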

Gyanesh Tiwari said...

Hi Glenn,

Can you throw some light on how to determine whether an ASO cube is fragmented and needs a defrag?


GlennS said...

In this client's case, they use a MaxL statement to clear data from their cube (not slices) and reload data multiple times a month. We think this is causing the fragmentation (but I can't prove it); it is also possible that merging slices could cause the issue.

As for outline fragmentation, changes to the outline itself cause that, and ASO outlines continue to grow with every update.

Anonymous said...

How much did the fragmentation impact retrieval performance?

er77 said...

Is this issue still present now?
No new suggestions?