Why I created a blog

It's been four years since I first created this blog. It has remained true to Essbase and related topics over those years, and hopefully it has answered questions and given you insight. I will continue to provide my observations and comments on the ever-changing world of EPM. Don't be surprised if the scope of the blog changes and brings in other Hyperion topics.

Wednesday, October 9, 2013

Exalytics T5-8 is here

While I fully expected my boss (Edward Roske) to blog about the new Exalytics box on his blog Looksmarter.blogspot.com, he has been silent about it. Rather than leave you in the dark, I decided I can’t wait for him to spew the details, so I’ll do my best to give you the info.

Prior to Oracle Open World (Sep 12th to be exact), a new price list became available, and in the Exalytics section was an entry we had not heard of before: Exalytics T5-8. There was no press about it, but at Oracle Open World a few weeks later, they talked about the box.

Here is what I found out. Prior versions were labeled X2-4 and X3-4. Apparently the X stands for Intel, the 2 or 3 for the chip generation, and the last 4 for the number of sockets. As Edward mentioned when the X3-4 came out earlier this year, there is an upgrade kit available to turn an X2-4 into a true X3-4.

So what is the new machine? It is listed as a T5-8, so T instead of X. Yep, these are not Intel chips but SPARC T5 processors. This machine runs the Solaris operating system instead of Linux and includes 4 TB of DRAM, 3.2 TB of flash storage, and 7.2 TB of hard disk. The box comes with up to 128 CPU cores, much more than the 40 you can get with the X3-4.


I’ve not had a chance to play with this box, but I have been told the main reason for it is scalability. It is meant for a large number of concurrent users. What I’ve not heard (officially) is how it performs vs. the X3-4. Historically, Intel chips have been faster for Essbase than SPARC chips, and the paperwork says nothing about a performance comparison. I’m guessing it is a little slower, but with the ability to consolidate three or four X3-4 machines into one, the user scalability should be really good.

So how much will this box set you back? According to the price list, the box itself is $330K, pretty cheap. You do have a cost per CPU and per user, which makes it much more, but that is not all that different from the older models. It sounds worth it to me. If/when I have a chance to test it out, I’ll let you know more.

If you like my brief summary here, I’ll be talking about Exalytics in more depth at the OAUG Connection Point conference in our beautiful capital, Washington D.C., on Oct 23rd. If the spending limit isn’t fixed by then, traffic in the city should be light! (This is not a political statement, just an observation.)

Edward or I will also be talking about it at the Hyperion Solutions Road Show in So Cal on Thursday, Oct 17th. It is in downtown LA, so there will be traffic. If you want more info on that event, email Danielle White at dwhite@interrel.com or register at the So Cal Road Show registration page. This event is limited to current and potential Oracle clients and is not open to partners, sorry. I hope to see you at one of the events.

Wednesday, August 28, 2013

Smart View hangs on Studio Drill through issue

It is amazing, at least to me, that I have two posts in less than a month.

Recently, both a client and another consultant in our firm hit the same (or similar) issue. They had implemented Essbase Studio and were using drill-through reporting. In one case, when a number of users tried to retrieve from Smart View, Smart View would hang for all users. It would last a number of minutes before giving an error message. If the users tried to retrieve again, they would get another message about “the prior request is still running”. The other message that would appear was “Decompression failed”. In a third instance, Smart View would hang for exactly 4 minutes, then free up. If the drill-through reports were turned off, Smart View performance went back to normal.

A number of things were attempted to try to remedy the situation including:

1. Turning off compression in the APS essbase.properties file

2. Updating the registry on the APS server to change the port timeout from 4 minutes to 30 seconds
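For the first item, the setting we touched looks something like the following. I am quoting the property name from memory, and the file location and threshold value vary by release, so treat this as a sketch rather than gospel:

```properties
# APS essbase.properties (location varies by release, often under the
# EPM instance's aps/bin directory). Responses larger than this
# threshold (in bytes) get gzip-compressed; setting it very high
# effectively turns compression off. Property name is from memory.
smartview.webservice.gzip.compression.threshold=2147483647
```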

Neither of these worked, and I was befuddled (this is similar to Elmer fuddled, but you don’t try to shoot rabbits). Since I was not working on this directly, and the client had turned to Oracle Support with no real help, my colleague continued to carve away at it. His email speaks better of it than I could, so here is his synopsis of what he tried.

“So we got something to work… However, the answer makes me think of the chicken dance, where chickens are given food at random intervals and start to develop a pattern of behavior from whatever they had been doing when the reward arrived. And in solving this problem, that is exactly what I was doing: dancing like a chicken.

As previously stated, we thought it had something to do with the ports. I knew ports were refreshed every 4 minutes, so I thought, if it takes 4 minutes after Smart View freezes to come back… the answer must have to do with the ports being used and not available. We timed it… and it took 4 minutes exactly, every time. Thus, like the chicken, I did a little dance.

So we increased the ports and the frequency of the port refresh… however, it did not work. We then checked the ports being used and found only 144 were in use when it froze. I then added another little move to my chicken dance. I was starting to move.

We tried the following:

• WebLogic – EPM managed servers tuning

• APS – essbase.properties file settings

• APS – logging.xml

• Essbase – compact outline

• Essbase config file

• Java heap size

• IE – timeout settings / registries

Then I realized that if I stopped the application and restarted it, it would immediately become available. No waiting 4 minutes. I was pretty sure that if I changed something in the Essbase config file I could get it to work. Now I was really dancing.

I started to look for settings that had a 4-minute timeout… could not find any. I found a setting called SERVERTHREADS… I decided to try it. So the next morning I asked the administrator to restart the server so we could test it. He made one additional change to increase the logging detail. We went ahead and tried it.
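For reference, SERVERTHREADS is an Essbase.cfg setting that sizes an application's worker-thread pool; an entry looks roughly like this (the application name and thread count below are made-up examples, and valid ranges vary by release):

```
; Essbase.cfg - raise the worker-thread pool for application Sample
; (illustrative values only; omit the app name to apply server-wide)
SERVERTHREADS Sample 100
```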

It worked!!! Now all we had to do was verify that this was really the fix.

We removed the SERVERTHREADS setting and restarted services, and it still worked. Wow, that was strange. What had caused it to work, since it was still working after removing the change and restarting services? We would need to retrace all of our steps.

So then we removed the detail logging. To our surprise, it now failed. Wow… I was really dancing now. We tested this again and found that it only worked if we set the Essbase log file to show detail.

My guess is that there is some setting that we can adjust so that we do not always have to have detail logging. However, I have been dancing so hard that I think it is time to pass this dance on to Oracle support.

Like random droppings of a positive stimulus I had danced long into the night finding the right patterns to get my next little dropping of reinforcement.”

As for the solution …

What worked was:

In the Provider Services logging.xml:


Original entries:

  <logger name='' level='WARNING:1'>

  <logger name='oracle.EPMOHPS' level='WARNING:1' useParentHandlers='false'>

Modified entries:

  <logger name='' level='TRACE:1'>

  <logger name='oracle.EPMOHPS' level='TRACE:1' useParentHandlers='false'>

“Why changing the logging level should have any impact… that I do not know!! I wish I were smart enough to answer that.

We only stumbled upon this by dumb luck when Oracle asked us to change the logging.xml so that we could send them a more informative log. Like I said earlier… we were attempting things that made sense, only to find that the thing that made no sense worked.”

So while I did not find the solution, one was figured out. If you run into this, hopefully you can benefit from our research.
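If you ever need to flip that logging level across several environments, the edit can be scripted. Here is a minimal sketch using Python's standard library; the sample XML below is a trimmed stand-in for the real logging.xml, which has more elements (handlers, child nodes) you would want to leave alone:

```python
import xml.etree.ElementTree as ET

def set_logger_levels(xml_text, new_level="TRACE:1"):
    """Return logging.xml text with every <logger> level replaced."""
    root = ET.fromstring(xml_text)
    for logger in root.iter("logger"):   # find loggers at any depth
        logger.set("level", new_level)
    return ET.tostring(root, encoding="unicode")

# Trimmed stand-in for the APS logging.xml entries shown above.
sample = """<loggers>
  <logger name='' level='WARNING:1'/>
  <logger name='oracle.EPMOHPS' level='WARNING:1' useParentHandlers='false'/>
</loggers>"""

updated = set_logger_levels(sample)
print(updated)
```

A real logging.xml may use namespaces or different nesting, so check the output before deploying it.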


On another unrelated note, the second part of the podcast Edward and I did with Kevin and Stewart is available. Take a look; it was fun to do and, I think, informative.


Monday, August 19, 2013

Pimping my Ride

The other day, Edward Roske and I participated in a podcast hosted by Kevin McGinley and Stewart Bryson called Real Time BI. We spoke with them about integrating Essbase and OBIEE. It was a really great time. If you want to see what I really look like, or how monotone I really am, or the witless banter between Edward and me, take a look at part one on YouTube (http://youtu.be/wwTIml_b4mE) or iTunes (http://bit.ly/QhwuSq)!

Part 2 will be out soon. In addition to being informational, it is also somewhat entertaining.

Tuesday, August 6, 2013

How not to reverse engineer an Essbase cube to allow drill through

It has been a while since I’ve posted. No apologies, but I was busy getting ready for Kscope13 (which was a great conference; sorry if you missed it). Then I was a bit burned out and needed time to recover. In this post, I’m going to take a little from my Kscope presentation, Advanced Studio Tips and Tricks, to hopefully help you. This will deal with reverse engineering a cube to get drill-through functionality.

First why would you want to reverse engineer a cube?

  • The cube already exists
  • You want to add drill-through capabilities
  • You want to start migrating to Studio
  • You want the hierarchies available for building other cubes

So you can learn from my mistakes, I’ll discuss the wrong way to try to reverse engineer a cube. I had a client that wanted to do this. I had recommended extracting the hierarchies from their existing cube and loading them into dimension tables. They wanted to try a different route. Their fact table contained all of the level-zero dimension members, and they wanted to see if we could build just the level-zero members (since they would only drill through at level zero) from the fact table. The actual data load would be done outside of Studio, so there would be no change there.

The first thing I tried was to create user-defined tables that acted as fake dimension tables. I used dummy parent names so it looked like a parent/child build. I created the hierarchies and the Essbase model. In the model properties, I told it to ignore shared members so it would not build the new relationships.
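To illustrate the idea (the table, column, and member names here are hypothetical, not the client's), a user-defined table faking a parent/child dimension from the fact table boils down to a SQL statement like the one sketched below, shown here against an in-memory SQLite stand-in:

```python
import sqlite3

# In-memory stand-in for the client's fact table, which held the
# level-zero members of every dimension. All names are made up.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact (entity TEXT, amount REAL)")
conn.executemany("INSERT INTO fact VALUES (?, ?)",
                 [("E100", 10.0), ("E200", 20.0), ("E100", 5.0)])

# The Studio "user-defined table" was essentially this: a dummy
# parent plus the distinct level-zero members, shaped like a
# parent/child dimension build.
fake_dim_sql = """
SELECT 'Total_Entity' AS parent, entity AS child
FROM fact
GROUP BY entity
"""
rows = conn.execute(fake_dim_sql).fetchall()
print(rows)  # each distinct level-zero member under one dummy parent
```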

I encountered my first problem: I could not create the custom SQL in the drill-through report. I fixed this by making the fake dimension hierarchies recursive.

Then I ran into my second problem. I had a dimension (Scenario) that I had created as a manually defined hierarchy, and the deploy would not refresh the Essbase cube. So, thinking swiftly, I created it as a user-defined table and joined to that “table”. That solved the issue (or at least I thought it did).

This change did allow me to deploy the cube, but I could not get the drill-through intersections to work in my existing test cube. If I built a new cube with the dummy intersections, the drill-through report would work. I figured out it was because of the “ignore shared members” option: it was not actually creating the intersections in the cube; it just ignored what I was trying to build.

Bummer. What this meant was that I could not build the cube from just level-zero members. I would need at least level-0 and level-1 members to build the dimension tables.

I reminded the client of my original suggestion on how to reverse engineer, and they decided to take it. Basically, we extract all the dimensions using the Outline Extractor from Applied OLAP (you could also do it with ODI or other ways), then load the dimensions into the same database the fact table lives in.

Once they are there, we can do our joins and the normal Essbase Studio steps to update the cube and the drill-through reports.

This is going to be a busy rest of the year for me. I’ve already spoken at a Hyperion Solutions roadshow with Oracle (losing my voice before the final of my 4 presentations). I’m scheduled to speak at the ODTUG-sponsored Sunday Symposium at Oracle Open World, attend Oracle ACE Director meetings in Redwood City, and speak at 4-5 other events in the remainder of the year. This is in addition to trying to do real work. While I love to share information with you all, my first love is being a technical resource and solving problems in the Essbase/Hyperion world. I do as much of that as I can.

I’ll try to be more frequent in my blog posts. I think I might share more from my Studio tips and tricks next, or perhaps some things from my Thinking Outside the Box optimization session from Kscope. That session almost allowed me to beat out Edward Roske for best conference speaker. I bear no hard feelings toward Edward; he deserved to win the award, but I gave him a run for the money (ok, a wooden kaleidoscope). Funny, both our presentations were on optimization.

Till next time

Tuesday, May 7, 2013

Humor for the day

While most of my posts are technical or Hyperion related, I think this one is informational as well, but in a different way. I got this in an email this morning from Alaska Airlines, and I think they are trying to tell us something. What do you think?


Wednesday, May 1, 2013

I have issues

Were I Cameron Lackpour, I would call these stupid pet tricks, but since I’m not, I’ll say they are issues I’ve encountered. Luckily, I’ve resolved them, so perhaps I can save you the pain I went through.

The first issue came when I tried to use a custom-defined function (CDF) that runs a SQL statement or stored procedure from within a calc script. This function was written by Toufic Walkim (thank you) and was given to me a while ago. I’ve used it a few times at different clients, but on older versions of Essbase. In trying to get it to run on the newer version, I encountered a number of issues.

First, it could not find the correct ODBCJDBC driver. That was resolved by downloading the driver from Microsoft and changing the properties file to point to it (or so I thought). It turns out there are two drivers in the download: ODBCJDBC.DLL and ODBCJDBC4.DLL. After experimentation, I had put ODBCJDBC.DLL in the UDF directory and got an error that basically said I needed to use ODBCJDBC4.DLL. Adding it to the directory did not solve the issue, even after I removed ODBCJDBC.DLL. So, thinking swiftly (ok, I was pretty slow), I renamed ODBCJDBC4.DLL to ODBCJDBC.DLL. Voilà, now it recognized the driver and knew it was the correct one.

My next issue was that once connected, even when running a simple SQL delete statement, the calc script would hang and I would have to kill the process. Thanks to help from Robb Salzmann in narrowing the issue down, I was able to Google a few things and found a bug published by Sun that basically says the version of the JDK installed with the product will hang on connections. I found a later version of the JDK (jdk160_43 to be exact), installed it in the Oracle\middleware directory, and pointed the JvmModuleLocation parameter in Essbase.cfg to it. Now my life is good and the CDF works fine. I did need to remember to bounce Essbase, and it took me a while to remember what I needed to do to get Essbase to run in the foreground so I could see the messages in the application window (but that is another story).
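For anyone chasing the same fix, the Essbase.cfg entry looks roughly like this. The path below is a made-up Windows example; point it at the jvm.dll (or libjvm.so on Unix) of the JDK you actually installed:

```
; Essbase.cfg - point Essbase at a newer JVM (illustrative path)
JvmModuleLocation C:\Oracle\Middleware\jdk160_43\jre\bin\server\jvm.dll
```

Remember that Essbase must be restarted for the change to take effect.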

My next opportunity was with Essbase Studio. I was trying to build dimensions and got the error “Cannot get async process state”. I started investigating and found the errors were all with my Entity dimension. If I built without that dimension, everything worked fine.

I should mention I’m not the only one working on the model. My client has SQL developers working to create views and add content. So I looked further and did a refresh of the Entity view. Imagine my surprise when I found that columns had been removed from the view I was using in one of my alias tables. The Studio table refresh would not let me update the view, since it knew something it was trying to remove was still in use. I tried having the column added back to the view, but still could not get the refresh working. So I went through my Essbase model properties and removed the alias table the column was in, then went into the alias table and removed the column from there as well. I was now able to refresh the view with the column changes. Moral of the story: if you get this message, see if your data source changed.

I’ve been reading the blogs and readmes for the new release and like the features added. While Essbase Studio really only got bug fixes and Essbase itself only got a few changes, I like what I see and can’t wait to try ASO Planning.

Wednesday, February 13, 2013

InterRel News and news from down under

Typically I don’t blog about the company I work for, devoting my time to more technical articles, but I decided to deviate a little today to talk about a couple of things.

First, after a long time, interRel has a new website. It looks pretty nice. Check it out at www.interrel.com. I think it is nicer looking and more informative than our old site.

Second, interRel is hiring. If you have experience in the Hyperion line of products or OBIEE and are looking for a cool consulting firm to work for, we could be interested in you. interRel is a great company to work for, and we believe consultant growth and training are as important as customer service. I’m not going to give you the marketing pitch on why you should join interRel; you probably already know. If you have an interest in talking to us, email info@interrel.com.

Finally, some news not associated with interRel in any way. If you plan to be in New Zealand or Australia next month, why not attend ODTUG’s Seriously Practical conference in Melbourne on March 21st and 22nd, or the NZOUG conference on March 18th and 19th at Te Papa, Wellington (sounds like a kind of steak to me). Sounds like a great way to get a paid vacation: go to the conference and see the countryside. This is truly not associated with interRel, as we will not be speaking at either conference, but my friend Cameron Lackpour put together the EPM agenda for both conferences and will be speaking there. If you are there, tell him Glenn said to say hi. That is the secret phrase, and he might have a present for you (not really, it would just be fun).

Friday, February 8, 2013

Smart View Compatibility

I have been speaking at a lot of conferences and client events over the last year touting the great new features of the new version of Smart View. As a matter of fact, I’m repeating the session on Feb 16th and 18th in interRel webinars (contact dwhite@interrel.com for more info). The questions I often get are “What do I need to upgrade to get functionality?” and “If I just upgrade Smart View, what functionality do I still get?”

Before I answer those questions, I’ll first answer “What version of Smart View should you upgrade to?” Since Smart View is pretty backward compatible, I would upgrade to the latest version. This is a patch that was released last Monday and includes connectivity to OBIEE, or as many Oracle people now call it, BIFS (Business Intelligence Foundation Suite). Note, you must be on a supported OBIEE release for the connectivity to work. There are a number of other enhancements and bug fixes in this version.

Next, what do you have to upgrade to get full connectivity? Well, for Smart View itself, you should be on the latest release (or higher), along with matching releases of APS and Essbase. If you are going the patch route, then I would recommend you get the Smart View patch and the APS and Essbase patches. Of course, you are even better off if you upgrade fully.

Finally, If you just upgrade Smart View without upgrading APS or Essbase, the functionality you can expect to get (Thank you Smart View development team for this list) is:

  • Ribbon specific to Provider
  • Re-designed Options Dialog
  • Smart View Panel
  • Sheet level Options
  • Retain Excel Formatting
  • Improved Function Builder
  • Fix Links (removes path references in cells with Smart View functions). Before: “=D:\Oracle\Smartview\bin\hstbar.xla’HSGETVALUE(….. After: =HSGETVALUE(...
  • Performance Improvements
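To show what the Fix Links cleanup amounts to, here is an illustrative transformation. This is my own regex sketch of the behavior, not Oracle's implementation, and the formula below is a made-up example of the standard Excel add-in link syntax:

```python
import re

def fix_links(formula):
    """Strip add-in path prefixes like ='C:\\...\\hstbar.xla'! from a
    cell formula, leaving the bare Smart View function call.
    Illustrative only; the real Fix Links button may handle more cases."""
    return re.sub(r"='[^']*\.xla'!", "=", formula)

before = "='D:\\Oracle\\Smartview\\bin\\hstbar.xla'!HSGETVALUE(\"Conn\")"
after = fix_links(before)
print(after)
```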

So while you won’t get cool things like multiple grids on a sheet or member name and alias, you get a few perks. The sheet-level options are one thing that was missing in prior versions of Smart View that I’m glad got put in.

If you upgrade to at least the latest release, you also get an awesome new Smart Query tool that enables you to create extremely complex queries returning sets of members/numbers, which can be combined and saved. It is a fantastic feature that is not getting the press it deserves. In a future post, I‘ll go through a detailed example so you can see its power.