Why I created a blog

It's been many years since I first created this blog, and it has remained true to Essbase and related information over that time. I hope it has answered your questions and given you insight along the way. I will continue to provide my observations and comments on the ever-changing world of EPM, so don't be surprised if the scope of the blog changes and brings in other Hyperion topics.


Thursday, July 10, 2008

The Power of Report Scripts (Part 3)

I returned from the 2008 ODTUG Kaleidoscope conference tired but refreshed. It was fantastic to spend 5 days in the same room as 150 other Essbase geeks and gods and nerds. If you were there, decide which category you fit into. Needless to say, I learned a lot and got to meet many people whose names I've seen on the discussion groups, and I was able to put faces to them. It was definitely worth attending; it was the best technical conference I have ever attended. Plan on attending next year; you will not be disappointed (if you like technical stuff like I do). I've heard the Hyperion tracks are being expanded to include other products besides Essbase. We were the minority this year; I can see us taking over the conference next year. Spending a full week away from work put me very behind, and my blog took a back seat because of it.

In the first two parts of my blog, I concentrated on how to use report scripts to help automate and retrieve data from Excel. In this part, I will go over some of the ways to use report scripts standalone and offer a few optimization tips. Here is one tip that applies to both the Excel and standalone versions to start with. There is an undocumented command called <Preview; using it is like using "Navigate Without Data" in Excel. It brings back the format of what you want with no data (well, it brings back zeros for the data). This is wonderful if you need to get lists of members without getting bogged down retrieving the data. It's much quicker!
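As a sketch (against Sample.Basic; since <Preview is undocumented, test it on your own version first), a quick member-list pull might look like this:

```
// Bring back the report layout with zeros instead of real data
<Preview
{ TABDELIMIT }
<Row ("Product")
// All level-0 products
<Link (<LEV("Product", 0) AND <DESC("Product"))
!
```

Since the data values come back as zeros, you can throw the data column away and keep just the member names.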

There are a few reasons people use report scripts, but I'll admit they are not used as much as they used to be, and almost never in the way they were originally intended, as end-user reports. People typically create report scripts to extract a subset of data to load into other systems or other cubes. There are more options available now to get data out of Essbase than ever before (depending on the version you are running). You can export the database, use the DATAEXPORT command in calc scripts (System 9), use the API, use replicated partitions to move data to another cube, use Java custom functions, and there are even third-party tools available to help you export data. With all that, if you still want or need to use report scripts to get data out, I'll discuss some of the tips I've run across to make your reports run faster.

The first tip seems so simple I almost hate to mention it. In Essbase class you were taught that the most effective way to load data was to order the data by dimension, starting with the sparse dimensions and ending with the dense ones. You do remember that, don't you? If data goes in better this way, it is logical that it comes out better this way (who would have thought!). Have dense dimensions as column members and sparse dimensions as row and page members, in the order they appear in the outline. This allows Essbase to process each block once, without having to revisit it. If you have to have a dense dimension in the rows, make it the last row member.
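Against Sample.Basic, where Year and Measures are dense and Market and Product are sparse, that ordering might look like this (member selections here are just illustrative):

```
// Dense dimensions across the columns
<Column ("Year", "Measures")
// Sparse dimensions down the rows, in outline order,
// so each data block is visited only once
<Row ("Market", "Product")
<IDESCENDANTS "Market"
<IDESCENDANTS "Product"
!
```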

Second, let's talk about the data you extract. Is there really a reason you need rows upon rows of missing data or zeros? Turn on {SUPEMPTYROWS} to shrink your output, most of the time by many factors. While we're talking about missing data, don't you just love the #Missing you get for missing data? I know I do, but that is 8 characters plus the delimiter and spaces returned for every missing entry that I don't need. Change it to something really short, like a space or a zero, or how about nothing? That's the same amount I get paid to tell you all my secrets. Use the command {MISSINGTEXT ""} to do this.
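Together, those two commands are a one-line addition to the top of the script:

```
{ SUPEMPTYROWS      // drop rows that are entirely missing
  MISSINGTEXT ""    // return nothing instead of the 8-character #Missing
}
```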

Third, there is a command to stay away from: it's <DIMBOTTOM. Personally I like a well-lit bottom, but that's another story. Like @CURRMBR in calc scripts, <DIMBOTTOM is one of those commands that is very slow. If you use the Query Designer to create a report, it takes a better approach: it uses the <LINK command. In Sample.Basic, to get all level-0 products it uses
<Link ((<LEV("Product", 0)) AND (<DESC("Product")))

Here are a few other things you can do to make sure you get the results you want in a report script. Turn on <ACCON. This command accumulates member names even when selections from different dimensions are intermixed. If I have
<Row(Product,Market)
“100-10”
“200-10”
“East”
“300-10”
“400-10”
and don't include <ACCON, it would only give you "East", "100-10", and "400-10". With it on, you will get all the products.
Turn on <SYM; symmetric reports are quicker than asymmetric ones.
Avoid calculated columns and rows. Pulling the data directly is faster than having Essbase perform calculations.
Reduce the number of decimals returned. Use the {DECIMAL n} command to do this. It will reduce the amount of data returned.
Here is a sample header I use when I’m setting up data to be transferred to another system:
{ DECIMAL 2 // set decimals to 2 positions
SUPALL // suppress pages, headers, commas, brackets
MISSINGTEXT "" // set the missing text to nothing
NOINDENTGEN // turn off any indentation of member names
ROWREPEAT // repeat the row information
TABDELIMIT // Delimit the columns with tabs
}
Notice that all of the commands are within one set of braces. If you prefer, you can enclose each one in its own set, but I like grouping them together.
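Putting the pieces together, a complete extract script for Sample.Basic might look like the sketch below (the member selections are only illustrative; swap in your own):

```
// Formatting header, as above
{ DECIMAL 2
  SUPALL
  MISSINGTEXT ""
  NOINDENTGEN
  ROWREPEAT
  TABDELIMIT
}
// Dense dimension across the columns
<Column ("Measures")
<ICHILDREN "Measures"
// Sparse dimensions down the rows, in outline order
<Row ("Market", "Product")
<Link (<LEV("Market", 0) AND <DESC("Market"))
<Link (<LEV("Product", 0) AND <DESC("Product"))
!
```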

There have been a couple of questions about report scripts on the different forums I frequent.

The first is "Can I get column headings for the row members?" The short answer is no. You might be able to simulate it using custom headers, but since most people want these headers for creating column names when importing data into relational databases, my preferred method is to keep a static file with the headers. After creating the report extract file, I concatenate my header file with my data file.
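The concatenation step is one line of shell. The file names below are hypothetical; the two printf lines just stand in for a real header file and a real report script extract:

```shell
# Hypothetical files: headers.txt is the static column-heading file,
# extract.txt stands in for the report script output
printf 'Market\tProduct\tSales\n' > headers.txt
printf 'East\t100-10\t678\n' > extract.txt

# Header first, then data: one file ready to load
cat headers.txt extract.txt > load_ready.txt
```

On Windows you can do the same thing with `copy headers.txt+extract.txt load_ready.txt` or `type` in a batch file.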

The second is "How can I export the entire database (or a subset)? Doing it is so slow." Depending on the size of the database, a report script can be somewhat slow, but using the hints I gave above can speed it up. Give my suggestions a try and see how they work for you.