Why I created a blog

It's been four years since I first created this blog. It has remained true to Essbase and related information over those years. Hopefully it has answered questions and given you insight along the way. I will continue to provide my observations and comments on the ever-changing world of EPM. Don't be surprised if the scope of the blog changes and brings in other Hyperion topics.

Tuesday, July 7, 2015

Exalytics X5-4 Fast and Furious

I love going to KScope because I learn about new features and products. This event was no different. In the Sunday symposiums with Oracle there was a discussion of the new Exalytics X5-4. It was only last September at Open World that the X4-4 was announced; Edward Roske talks about it in his blog. It was a big deal then. With the introduction of the X5-4 only nine months later, it becomes even bigger, better and "badder". With the X5-4 we go to a maximum of 72 cores, up from 60, and more memory. In addition to more cores, the X5-4 supports new NVMe high-bandwidth flash technology that improves throughput by 2.5 times. I won't bore you with the details; if you want to read about them, here are the specs.

To me the most remarkable thing about this is that you get more and the price has not increased. All the way back to the X3-4, the price has remained the same. With a list price of $175K, it is what I consider cheap.

As John Booth mentions in his blog, you can get this in an X5-2 configuration as well, offering additional flexibility. Note: I had a correction from John. The X5-2 was more a wish from him than a reality. While you could create an X5-2 using sub-capacity licensing, you are still paying for the physical cores (thanks to Steve Libermensch for that clarification).

For us in EPM it keeps getting better and better.

Monday, July 6, 2015

Essbase Studio patch

Well, I survived KScope. It was a very good event, with participants getting over 175 sessions related to EPM/BI. I sat in a number of sessions and was impressed with the quality of the speakers and presentations. I also had the opportunity to speak in four sessions, and I think they went pretty well, at least judging from the questions people asked.

The Essbase Studio patch came out the other day and I read through the readme file. There were only two changes and one documentation change.

The first bug fix relates to a problem with stored dimensions (I assume ASO) where it would not let you use external consolidation operators. 

The documentation change fixes the statement that you can drill through on any level of a hierarchy, including the top level. That is incorrect; you can't drill through from the top member of the hierarchy (the dimension name).

The most interesting bug fix is the second one, and I'm surprised they are calling it a bug as it used to be described as a limitation. When doing a drill-through report on a recursive hierarchy, the drill-through would fail with an error message if there were more than 1000 level 0 members returned in the query. For recursive queries, Essbase Studio creates an IN clause with the list of level 0 members under the selected member. The 1000-member list was a limitation for Oracle, as that is the maximum number of items allowed in an IN clause. I've not been able to test this yet and wonder how development got around that limitation.

I guess the moral of the story is: even if something is listed as a product limitation, still submit bug and enhancement requests; it is very possible what you need will be changed.

Monday, June 8, 2015

Don’t believe everything you read (again)

I got an email from my boss Edward Roske about an entry in the Tech Reference. He is working on a cool super secret project (all will be revealed at KScope) and he asked me about something he saw in the Tech Reference on the AGGMISSG command.

For those of you who don't like to read the Tech Reference, I'll save you the time of going to it.


Specifies whether Essbase consolidates #MISSING values in the database.

The default behavior of SET AGGMISSG is determined by the global setting for the database, as described in the Oracle Essbase Database Administrator's Guide.




SET AGGMISSG commands apply to calculating sparse dimensions.



See Also

  • SET Commands


What struck him as funny, and me as well, was the statement:

SET AGGMISSG commands apply to calculating sparse dimensions. (my highlighting)

Neither he nor I could remember it acting that way. I reached out to MMIP Cameron Lackpour, and he opened his System 9.3.1 Tech Reference and it said the same thing.

Thinking this can't be right (think of Planning, with upper-level periods allowing input and Periods being dense), I decided to test it.

Using Cameron's FDITHWW sample Basic, I cleared all the data and set the upper levels of Year to be stored.

I used Smart View to populate the following intersection:

(Note: Profit shows up because Measures is dynamic.)

I then ran the following calculation script:



CALC DIM (Measures, Year);

AGG (Product, Market);

Here are my results:


As you can see, my dense dimension acted as Edward and I expected, ignoring the #MISSING children, keeping Q1, and AGGing it up to Year. This means the Tech Reference is slightly askew.
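If you don't want to rely on the database-level default the Tech Reference mentions, you can set the behavior explicitly at the top of the script. A minimal sketch using the same script as above (OFF keeps values stored at upper levels from being overwritten by #MISSING children; ON consolidates the #MISSING values):

SET AGGMISSG OFF;

CALC DIM (Measures, Year);
AGG (Product, Market);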

As an aside, there is something else in the Tech Ref example. If you look, there is a statement:

I'd never heard of it, and a search of the Tech Reference finds the only reference to it in the AGGMISSG example. Trying to run it gives an invalid syntax error, so this is inaccurate as well.

I will be submitting both of these opportunities to the documentation group, as they actually do fix these types of errors when they are found.

Moral of the story: even if you read it in the documentation, try it yourself and you might be surprised at the results.

Monday, May 11, 2015

ASO calculation bug


Note: it's funny how things work out. While I've not tested it out yet, a patch set update (PSU), 20859535, appeared today after I created this post. The defect fixed is described as:

MDX formulas are not calculating correctly for parent members of the accounts dimension, which are tagged with time balance properties and compression, in an ASO cube where the parent has more than one child.

This is the bug I reported last month, so it appears it might be fixed. I just have to test it now.

Glenn 5/11/15

I am a creature of habit. I have done the same calculation to put YTD net income into Retained Earnings in too many cubes to count. In my ASO cubes, I know that I have to set the solve order higher than normal for the ancestors of my calculated retained earnings member to get it to roll up properly, and it has always worked. That is, until now: I've recently run into an interesting issue.

My retained earnings calculation works if I am at individual periods, but does not work if I'm at total periods. In addition, it works if I am at the single stored member of my View dimension but not if I expand the View dimension. The stored member value actually changes. In tracing through the issue, it appears the formula for retained earnings is not firing when I'm at Total Year or when I have multiple members of my dynamic View dimension.

I was able to find a workaround. Instead of allowing the parent of my retained earnings calculation to be a natural rollup, I forced it to be a calculated, formulaic member that adds up its children. That apparently is enough to force the calculation to occur, and it properly rolls up to all of the ancestors.


I don't particularly like this solution, as it means that if the users add a new account, the formula has to be changed, as opposed to the hierarchy just rolling up correctly.

This is also part of a bigger issue. During my testing of formulas in a "View" dimension, I had issues where a formula would not work at a parent account level but would at the child level. Oracle has confirmed this bug, and I was able to get around it by giving the Accounts dimension a higher default solve order.

Again, while this works, it is different from every earlier version. My advice is: if you upgrade, check your calculations very carefully across all of your dynamically calculated dimensions. Don't assume things will work hunky-dory.

Thursday, March 19, 2015

A quick tip for Dataexport

I love the dataexport function in calc scripts. I tend to use it a lot, both for writing data to flat files and for writing to relational databases. I've written multiple blog posts on it.

Today, I got an email from a fellow consultant who was having problems with it and needed help. It took a few emails back and forth, but I was able to help them. I decided to post it so we don't all run into the same issue.

The original email was:

"There is a sparse dimension that is dynamically calculated in an app. I want to export a parent which is dynamically calculated, but even when setting the data export with DataExportDynamicCalc ON; it still exports the level 0 for that dimension. If I change that dimension to Dense, then it exports what I want, the parent. I even fix on that parent member but it still exports level 0 of that parent."

I first responded asking if the member name was explicitly in the FIX statement and if SET DataExportLevel ALL; was set. I was assured it was. I was sent the whole calc and it looked good.


SET DATAEXPORTOPTIONS
{
DataExportDynamicCalc ON;
DataExportLevel ALL;
DataExportColHeader "Period";
DataExportOverwriteFile ON;
};

FIX ("1st Pass","Final","Budget","Actual","FY15",@RELATIVE("YearTotal",0),
     "ALY","SAP CC 1000","760","U-ctID","NZU72200",@RELATIVE("Cost Category",0))

    DATAEXPORT "File" "," "TESTENABLE.TXT";

ENDFIX


I was about to write back that I was stumped, when I remembered something they said that I had skimmed over the first time: "If I change that dimension to Dense, then it exports what I want."

Hmmmm. I started to think about the difference between dense and sparse dimensions and how Essbase works. It worked on a dense dimension. OK, so it pulls in the block and can calculate the dynamic members. OK, that is reasonable.

A sparse dimension. Wait a second, there is no block for a dynamically calculated sparse member. In this case, the block is calculated upon retrieval, and by default the export bypasses blocks that don't exist. I looked at the SET statements again and it hit me: there was no statement for non-existing blocks. I remember it because I always include it and turn it off in my scripts so I know it is off for sure. I knew there was one and looked it up in the Tech Reference: SET DataExportNonExistingBlocks ON|OFF.

The Tech Reference describes this option as:

Specifies whether to export data from all possible data blocks. For large outlines with a large number of members in sparse dimensions, the number of potential data blocks can be very high. Exporting Dynamic Calc members from all possible blocks can significantly impact performance.

Again, hmmmm. All possible blocks. I had the consultant add this to their extract, set it to ON, and try it. I did warn them this could be slow, but they told me it was a small outline.
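With that option added, the SET block from the script above would look something like this (a sketch; everything except the new line is from the consultant's original script):

SET DATAEXPORTOPTIONS
{
DataExportDynamicCalc ON;
DataExportNonExistingBlocks ON;
DataExportLevel ALL;
DataExportColHeader "Period";
DataExportOverwriteFile ON;
};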

Lo and behold, it worked like a champ. It is a valuable lesson: with us typically wanting to improve performance, we turn things on and off without even thinking about it. Sometimes we have to go back and reevaluate our options.

Monday, March 2, 2015

We all need to thank Applied OLAP

I typically don't single out a person or company in my blog, but I am doing so today. Tim Tow, Oracle ACE Director, owner of Applied OLAP, Essbase friend and evangelist, announced on his blog the release of the newest version of the Next Generation Outline Extractor.

Why the big deal? Why am I praising him? First, Tim maintains the code for his love of Essbase; he makes no money from it. Second, it costs him money. Time taken from billable work to make changes is a cost, plus he has his help desk support people assist anyone with a problem, again at no cost.

That is all well and good, but the final thing is his responsiveness in improving the product. I emailed Tim on a Wednesday asking about missing features of the relational extract. Tim and I exchanged a few emails about what I would like to see and how I thought it should work. By Sunday, I had a beta version of the extractor with all I asked for and more. I know from Tim's questions that I was not just getting work he had already planned, but that he had modified the product for me. After my testing of the changes (I found no bugs), he released it to the Essbase world.

We all need to thank Tim and Applied OLAP for their continued support of the Hyperion community. I don't work for Tim, but I do think his products are awesome. It is nice that he puts as much care into the free products he supports as he does into his fantastic Dodeca product.

Friday, February 13, 2015

FixParallel–How fast is fast?

Addendum to this blog post:

I lied, I lied, but not intentionally. When I wrote this post, I thought it was true and the timings were as I saw them, but upon further investigation, it turns out I had encountered a bug with FixParallel where including SET DataExportRelationalFile ON causes problems with data exports using FixParallel. The export only returned data for one node of the hierarchy, not the entire hierarchy as it should have. Unfortunately, I don't have access to rerun the tests. The issue has since been fixed, and I think the patch for it had not been installed on the system I was using. I'm guessing the performance will still be faster than without FixParallel, but I can't give you correct numbers.

Sorry if I misled you.


I have finally been able to use the newly introduced FixParallel command on an Exalytics server. I've used it for calculations and dataexports, so how fast is it, and does it really make a difference?

For my allocation calculations, I really can't tell you how much of a difference it made, but I know it was a lot faster to do my allocations with FixParallel than without it. I just didn't capture the times.

For my DataExport, I was able to measure the difference. I was exporting 1,083,702 level 0 blocks in column format with a block size of 9,984 bytes. I created a DataExport calc script and set CALCPARALLEL to 16 in the script. Running it took 336.95 seconds. I thought that was reasonable, but I wanted better.

I changed the script to use FixParallel with 16 threads across my location dimension, which has about 800 members. The calculation took 9.94 seconds. If I multiply that number by 16 I come up with 159.04 seconds, so it is telling me the FixParallel calculation is improving performance beyond just the parallelization of the script.
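For reference, the change was essentially swapping the SET CALCPARALLEL approach for a FIXPARALLEL block around the export. A rough sketch of the shape of such a script (the dimension name, export options, and file path here are illustrative placeholders, not my actual script):

SET DATAEXPORTOPTIONS
{
DataExportLevel LEVEL0;
DataExportColFormat ON;
};

/* 16 threads, split across the level 0 members of the Location dimension */
FIXPARALLEL (16, @RELATIVE("Location",0))
    DATAEXPORT "File" "," "/tmp/lev0_export.txt";
ENDFIXPARALLEL;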

What I did not expect is that, just like ParallelExport, the FixParallel dataexport created a file for each thread, so instead of one file I ended up with 15. They were named with a suffix of _T? where ? was a number between 1 and 15 (not sure why I didn't have 16 files). I also don't know what would happen if the file size spanned 2 gig; would it append a _1 to the file name? I tried reducing the number of threads to 3 and reran the script. Alas, I ended up with only three files, so I can't give you an answer. But interestingly, the script took 690.63 seconds, much longer than the script without FixParallel, so apparently there is tuning we can do to the script. I could try including another dimension in my FixParallel statement, but I am happy with my less-than-10-second export. Perhaps a test for another day.

So is FixParallel worth it? My testing says YES! FixParallel for me is an awesome new feature and one I will use often.