Why I created a blog

It's been many years since I first created this blog. It has remained true to Essbase and related information over that time, and hopefully it has answered questions and given you insight along the way. I will continue to provide my observations and comments on the ever-changing world of EPM. Don't be surprised if the scope of the blog changes and brings in other Hyperion topics.


Tuesday, December 23, 2014

FixParallel–How fast is fast?

I have finally been able to use FixParallel, introduced in 11.1.2.3.500, on an Exalytics server. I've used it for calculations and data exports, so how fast is it, and does it really make a difference?

For my allocation calculations, I really can't tell you how much of a difference it made, but I know it was a lot faster to do my allocations with FixParallel than without it. I just didn't capture the times.

For my DataExport, I was able to measure the difference. I was exporting 1,083,702 level 0 blocks in column format with a block size of 9,984 bytes. I created a DataExport calc script and set CALCPARALLEL to 16 in the script. Running it took 336.95 seconds. I thought that was reasonable, but I wanted better.

I changed the script to use FixParallel with 16 threads across my location dimension, which has about 800 members. The calculation took 9.94 seconds. If I multiply that number by 16 I come up with 159.04 seconds, so FixParallel is improving performance beyond just the parallelization of the script.
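For context, the FixParallel version of the export was conceptually like the following sketch. The dimension name, file path, and export options here are illustrative, not my actual script:

SET DATAEXPORTOPTIONS
{
DataExportLevel "LEVEL0";
DataExportColFormat ON;
};
FIXPARALLEL (16, @LEVMBRS("Location", 0))
DATAEXPORT "File" "," "/export/lev0data.txt";
ENDFIXPARALLEL

Each thread works on its own slice of the member list given in the FIXPARALLEL statement.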

What I did not expect is that, just like a parallel export, the FixParallel DataExport created a file for each thread, so instead of one file I ended up with 15. They were named with a suffix of _T?, where ? was a number between 1 and 15 (not sure why I didn't have 16 files). I also don't know what would happen if a file spanned 2 GB. Would it append a _1 to the file name? I tried reducing the number of threads to 3 and reran the script. Alas, I ended up with only three files, so I can't give you an answer. But interestingly, the script took 690.63 seconds, much longer than the script without FixParallel, so apparently there is tuning we can do to the script. I could try including another dimension in my FixParallel, but I am happy with my less-than-10-second export. Perhaps a test for another day.

So is FixParallel worth it? My testing says YES! FixParallel for me was an awesome new feature and one I will use often.

Thursday, December 18, 2014

Essbase Config file changes

Over time, I have come up with a list of Essbase configuration file settings that I typically use in my implementations. As new versions come out, new settings are added, and it appears the 11.1.2.3.5X versions have a bunch of new settings, or at least I just noticed them.

Some of the new settings I'll be adding to my list are:

DIMBUILDERRORLIMIT – sets the number of error rows allowed during a dimension build. This is similar to DATALOADERRORLIMIT; both have a maximum of 65K rows.

ENABLERTSVLOGGING – logs runtime substitution variable usage in the log files.

ESTIMATEDHASHSIZE – specifies, in millions, the number of member names and aliases loaded in memory for outline maintenance. While I don't know for sure, I think this is meant to allow us to open really big ASO outlines.

ENABLESWITCHTOBACKUPFILE – enables automated switching to the backup security file if the Essbase.sec file gets corrupted. Looks good for automated recovery.

SSINVALIDTEXTDETECTION – controls whether an error is shown when a user enters text that could cause Essbase to misinterpret the data. Especially useful for asymmetrical grids.
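Pulled together, the additions to essbase.cfg might look something like this. The values shown are placeholders for illustration, not recommendations; check the Tech Reference for the exact syntax, valid ranges, and defaults of each setting before using them:

DIMBUILDERRORLIMIT 65000
ENABLERTSVLOGGING TRUE
ESTIMATEDHASHSIZE 20
ENABLESWITCHTOBACKUPFILE TRUE
SSINVALIDTEXTDETECTION TRUE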

 

There are a lot more settings that have been added over time. Take time to go back through the Tech Reference and read each setting. Some have changed (like SSPROCROWLIMIT), others have been removed, and many more have been added. Stay current and adjust your systems accordingly. Remember, many settings require an Essbase server restart to take effect.

Sunday, October 12, 2014

I’m sure you are better than I am!

Have you sat through a conference session and thought "I can do a better job than the presenter" (most likely me), or thought it would be really cool to talk about (insert cool thing you did here)? Here is your chance: submissions for the Kscope conference are due by October 15th. Click here to enter your submission.

If you are a little concerned about speaking, then partner with your favorite consulting firm on the presentation. Only the primary presenter gets the free pass, but the consulting firm would most likely be willing to speak with you anyway. Just make sure you include them as the secondary speaker in the abstract or they won't be allowed to speak. This is a great way to give back for all the help you received along your learning path.

People are really interested in what you have to share. Give it a try, it won't hurt.

Friday, August 22, 2014

Is it really fame and fortune?

Fame and fortune and everything that goes with it, I thank you all!

I was reminded of this line from a Queen song as I returned from a speaking tour in Latin America through the Oracle Technology Network and the Oracle ACE program. I travelled to 6 countries in 15 days and may or may not blog about it later. Back to the song: I was asked a few times how much Oracle paid me to do the tour and whether I was an Oracle employee. While Oracle paid for the airfare and hotels, I paid for food and incidentals, and no, I don't work for Oracle. In reality, with lost income and expenses, it cost me a lot to do the tour.

So why did I do it? Is it for fame, as the song says? While I found it funny that participants in other countries wanted to take pictures with me and others on the tour, I didn't do it for notoriety. It is not fame or infamy that drives me to share my limited knowledge. As I explained to one person, through the years working in technology, others helped me along the way. They patiently answered my questions, suggested solutions, shared their knowledge, and gave me encouragement. Like most others in the Oracle ACE Director program, I am giving back to the community that helped me. The current overused phrase is paying it forward, but it fits too well in this case to use any other.

My good friend Cameron Lackpour spends too many hours doing the same: researching and giving his knowledge away for free for the betterment of the community, all at a great personal loss, as much of what he does is non-billable. Why do I bring this up? It is certainly not for a pat on the back or your pity (for me or MMIP, Cameron), but to urge you to get involved in the same manner.

It sounds like a cliché, but volunteer work is an incredibly rewarding activity. In part volunteering is giving back to the community that helped you get where you are professionally (and even personally) – call it paying forward, or paying your psychic debt, or just helping others as you have been helped. But there’s a deeper aspect to volunteer work as well. Humans are imperfect moral beings, but one of our better drives is to Do Good Things. It just feels good to be good. Try it, you’ll like it.

I know what you are going to say: "I don't know as much as Cameron does or Glenn pretends to, so I can't help." To this I say poppycock. First, you probably know more than you think you do and can help others with their questions. Second, even if it is true that your knowledge is limited, you can still help. Get involved with your local user group, online community, or favorite conference, or write a blog about your experiences, trials, and tribulations. If you read my blog, you are part of the Hyperion EPM community, are stalking me, or are weird. The Hyperion EPM community is a growing, living entity that only gets better with sharing. I know Cameron is looking for people to help with the ODTUG EPM community that just started up. Become involved, if not with that then with something. Pay it forward, enjoy the personal satisfaction of having helped, and I will thank you all!

 

Postscript: I communicated with Cameron about this post, since I mention him heavily in it, and got this response:

This is Cameron aka MMIP. If you are interested in getting involved with ODTUG’s EPM community, I encourage you to sign up at ODTUG’s volunteer page: http://www.odtug.com/volunteer

We have many exciting initiatives including:

· Local meetups

· Content sourcing for:

    · Webinars

    · ODTUG Technical Journal articles

    · EPM Newsletter articles

They are starting up and need volunteers to make them happen. This is your chance to define the future of the EPM community.

To Glenn’s point, don’t be shy about contributing. When I first met Glenn at Kaleidoscope 2008, he wasn’t:

· My friend

· An Oracle ACE Director (only an ACE)

· An Oracle EPM community rock star

· A trusted advisor and voice to many

But he is today, in large part because of his endless and valuable volunteer work. I’ve done my best to emulate him and the results have been very rewarding. The same can be true for you. I look forward to talking to you on an EPM community initiative conference call and look even more forward to the great work you’ll do.

Be seeing you.

Sunday, August 3, 2014

OBIEE and Essbase a few observations

I'm sitting in a hotel room in Panama writing this post. Why, you might ask? I'm in Panama as part of an Oracle Technology Network (OTN) Latin America tour. I will be going to 6 countries in 15 days. Perhaps at the end I'll blog in detail about it. As I write, Edward Roske is in Brazil doing the same thing. Why am I writing this instead of being out exploring? I will be later today, as the local host arranged a tour for the visiting Oracle ACE Directors.

This post on OBIEE and Essbase is based on the creation of my KScope 14 presentation and quirks I noticed along the way. Note I was using Essbase 11.1.2.3.5 and the latest patched version of OBIEE 11.1.1.7.1.X (I don't remember the X). I was using a VM instance of the sample application Oracle supplies and was using Essbase Studio with it to create the cube. I then brought the cube back into OBIEE and also into Publisher to create a mobile dashboard.

My initial problems came trying to install the BI Admin console on my laptop. I run Windows 8 (not by choice) and IE 11. BI Admin would not install no matter what I tried. I also had problems with Enterprise Manager and Analysis. I worked around these by using Remote Desktop to an environment that had lower versions, but only after spending hours trying to make these applications work.

After I got things working, I created a relational schema in an Oracle instance and tried to import the DDL and sample data from Essbase Studio. The tables created with no problems, and the keys and indexes were created as needed. When I tried to load the data, I got errors: the import did not account for transforming dates into the correct format. I added cast statements to each row needing it and it worked fine, but I should not have had to do it.

Next, I brought the relational tables into OBIEE for creating my Essbase cube through Studio. That was very easy until I started talking to Wayne Van Sluys, a great OBIEE resource. He wanted me to create all sorts of logical tables and aggregations in my business mapping layer to turn my snowflake schema into a star schema and allow aggregation summaries. If I had wanted to use the data source to do anything but feed Essbase Studio (report from the source, do federated Essbase/relational reports), I would have had to do this work, which seemed very daunting. Luckily, I only wanted to feed Essbase Studio, so it was easy. Just bring in the source, move it to the business model and presentation layers, and I was done.

Next, in Essbase Studio, when I brought in the OBIEE tables, Studio did most of the work for me. In the minischema, Studio would not let me do joins between the tables (these were defined in the relational source), but it did need me to set up the self-join for the parent/child accounts table. It was easy-peasy (as Cameron Lackpour would say) to create the hierarchies and build the Essbase cube from this. Of course, the sample tables were set up with Essbase in mind.

I brought the completed cube back into the OBIEE RPD just because I could. I actually did not use it there; I just wanted to show, for consistency of reporting, how to do it. There were a few little quirks. I had multiple alias tables, and although I asked that only the default table be used, when I got to the presentation layer all of the aliases showed up. I did remove them easily enough, but it was extra steps. Also, for accounts, I had set it up as a value dimension so I would not have to worry about additional generations being added. In the presentation layer, I changed the ugly wording of "Gen7,Accounts" to "All Accounts" as it is more meaningful to the end users.

Working with Mobile App Designer (MAD) proved to be more of a challenge than I thought it would be based on the demos I've seen. It could be a distinct possibility that I don't know enough about it, but the multiple steps were difficult. For those of you who don't know, MAD bypasses the RPD and uses queries from BI Publisher. My first issue was trying to create the MDX query. The query designer gave me problem after problem. I finally bypassed it, wrote my query in EAS, tested it, and pasted it into the query editor in Publisher. I found I had to use very simple queries, as complex queries with multiple row or column members would not work in subsequent steps.
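To give a flavor of how simple the queries had to stay, the kind of MDX that survived the later steps looked something like this (written against Sample.Basic for illustration; my actual demo used my own cube):

SELECT
{[Measures].[Sales]} ON COLUMNS,
{[Product].Children} ON ROWS
FROM [Sample].[Basic]
WHERE ([Scenario].[Actual], [Year].[Qtr1])

One axis of measures, one axis of members, and the rest pushed into the WHERE slicer.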

Once I had the query in place, I went into the actual designer. It is a simple drag-and-drop interface, and I got the basics pretty quickly. Because of the input query, you are limited in what you can display. I will say I spent hours trying to get a single graph with three dimensions represented: Products, Periods, and Measures. I could never figure out how to do it, so I ended up with a graph for each quarter. It would have been more impressive if I had figured it out. I ended up going back to the MDX and making it simpler after trying multiple different things. Just before KScope, a new version of MAD came out, but I've not had a chance to see if it would have solved my problems.

I know this is a long post without pictures, but I thought you might be interested in my ramblings about using OBIEE and Essbase for my demo. Using OBIEE against Essbase is much easier than using it against relational sources, as I found out, and I was able to complete the demo in a fairly short amount of time even if I did have issues. If you are going to go down this path, don't travel alone; bring along a friend who knows OBIEE to help you with the obstacles along the way. You will be happier for it.

Wednesday, June 18, 2014

KScope Deep Dive input needed

As the time for KScope draws near, I've been busy revising my presentations and getting ready to go. I'll be there starting Saturday afternoon. I doubt I'll get any sleep until next Thursday.

Speaking of Thursday: this year at KScope, something new is being tried. The different tracks will have "Deep Dive" sessions Thursday morning that are 2 hours long. These will be well worth sticking around for. For the Essbase track, a panel like no other will happen. Why do I say like no other? Well, some of the greatest minds in the Essbase world will be on the panel, including:

  • Carol Crider – Senior Technical Support Specialist, Oracle
  • Steve Liebermensch – Essbase Product Manager, Oracle
  • Mark Rittman – Ace Director, Rittman Mead
  • Tim Tow – Ace Director, Applied OLAP
  • Sarah Zumbrum – Ace Associate, Finit Solutions

Either Edward Roske or I will also be on the panel. Edward may have to sit in for me, because whenever I get with Carol Crider I get tongue-tied. She is the internals support guru that we all go to when we need real help (no, Cameron, she does not have a degree in psychology). Steve Liebermensch knows more about Essbase features and functionality than anyone else I know. Tim is an expert on the Java API, and Mark knows more about OBIEE than anyone else I know. Why I was included on the panel with this esteemed group, I have no idea.

That said, without questions, the group will be sitting there with nothing to say. We need you to submit questions for the group to answer. Please don't ask things like "In my cube I have 29 dimensions and it takes 12 minutes longer to calculate than it did two years ago. How do I optimize it?" That is way too specific. But if you asked about the reasons calculations might take longer than they used to, that could be answered.

Your questions can be submitted in a couple of ways.  First, by tweeting with the hashtag #EssbaseDeepDive, or at @CubeCoderDotCom (throwing in the #Kscope14 hashtag will help others to see your question, too).  Alternatively, by email to EssbaseDeepDive@gmail.com.

I look forward to your questions and to meeting many of you at KScope.

(Note, even if you are not going and have a thought provoking question, go ahead and submit it. )

Tuesday, May 13, 2014

Don’t always believe what you read

I was helping another consultant with a calc script, as they were getting incorrect results. They wanted to do an @SUMRANGE for a particular intersection of data. They had coded the statement like:

@sumrange(actual,"no product","Final",.... 

I asked why they didn't use a cross-dimensional operator. They referred me to the Tech Reference:

[Screenshot of the @SUMRANGE entry in the Tech Reference]

 

For those of you who can’t read it, the note says

“Member name cannot be a cross-dimensional member combination.”

Having heard that, I looked for another solution and offered a couple of ideas, but I kept thinking about cross-dimensional operators. I could swear I had done that before. I asked the consultant to humor me and change the @SUMRANGE to use a cross dim instead of how they had coded it. Amazingly, the code validated, ran, and actually gave the correct set of numbers.
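To illustrate, following the fragment above, the working version looked something like this (the target member and the range argument are made up for the example):

"Target" = @SUMRANGE("Actual"->"No Product"->"Final", @LEVMBRS("Period", 0));

The cross-dimensional combination goes in the first argument, exactly where the Tech Reference note said it could not.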

The moral of the story: the Tech Reference is not always right. Even when things are in print, question them and experiment. This typo cost the consultant hours trying to figure out why the calc didn't work. Think outside the box and experiment.

Monday, March 24, 2014

Another post on 11.1.2.3.500

It seems like everyone and their brother (and Cameron Lackpour, the younger brother I NEVER had and NEVER wanted) has jumped in on relating the cool things the latest Essbase patch has to offer. So as not to be left out in the cold, I thought I'd make a few comments as well.

I think this new patch is a real game changer with a lot of cool features. It will have many of us throwing out old optimization techniques and coming up with a whole new set. We will really have to think outside the box to figure out what is best.

That said, there are a few things you might want to consider. First, although this is listed as 11.1.2.3.500, it has significant changes in it. Why Oracle put such significant changes in something that is just a patch, I don't know, but it is certainly better than waiting for a full release of the product. You could install the patch just to get the bug fixes and features like enhanced aggregation for MDX functions, but I would test very carefully before using some of the other new functionality.

Next, I would like to go into a little detail on a couple of the new features. In general, the word of caution I would give is to test extensively if you are using any of them. While they can give you significant gains in performance, they can also cause you some issues.

Fix Parallel

The idea behind FixParallel is that there are many situations where CALCPARALLEL drops into serial mode and we as developers know better. We can use FixParallel to force the calculation into parallel mode. This implies that we actually know what we are doing and that there will be no conflicts if we go into this mode. During the beta testing, it was determined that FixParallel is not as fast as CALCPARALLEL in most cases, but it is faster than not calculating in parallel at all.
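The syntax mirrors a normal FIX, with a thread count and the member list to split across threads added up front. A minimal sketch of an allocation-style use (member names are made up; your choice of dimension and thread count needs testing against your own outline):

FIXPARALLEL (16, @RELATIVE("Entity", 0))
"Allocated Expense" = "Total Expense" * "Driver Percent";
ENDFIXPARALLEL

Because you are asserting the slices are independent, make sure no thread writes to blocks another thread reads.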

Hybrid Mode

We have all been drooling over this idea since it was revealed at Kscope last summer: the power of a BSO cube with the aggregation speed of an ASO cube. The way this is implemented, you take your sparse dimensions and make them dynamic. In addition, you add a parameter to the Essbase.cfg file to turn this feature on. During the beta, TimG tested hundreds of queries against hybrid mode, and most performed very well.
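For reference, the switch is an Essbase.cfg setting; if memory serves from the beta, it looks like the following, though you should verify the exact keyword and values in the Tech Reference for your patch level:

ASODYNAMICAGGINBSO Sample Basic PARTIAL

The setting can be scoped to a single application and database, as shown, or set server-wide by leaving the names off.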

This is a huge game changer, or at least it will be. I say that because this initial implementation is limited. There are a lot of things that will cause the cube to revert to BSO mode. Using Dynamic Time Series (DTS), cross-dimensional operators in formulas, some very common functions in formulas (a list too long to include here), or attribute dimensions forces the cube into BSO mode. Frankly, while I think this feature is fantastic, it currently has a limited use case, and until some of the limitations are removed, tread lightly. Of course, if your cube is doing simple aggregation, then go for it and gain the benefits.

Exalytics – writing blocks back to the same location

As Cameron mentions in his blog post, many thought this already occurred, and to a certain extent it does. While blocks are written to a new location, Essbase will look at the free space to see if a block can fit into a spot vacated by another block; my guess is it seldom happens. Having blocks rewritten to the same location can reduce fragmentation a lot. I'm guessing this is Exalytics-only right now because most data in Exalytics is actually in flash memory or on solid state disk. This is just a guess on my part, but from testing the effect of fragmentation on BSO cubes in the past, I can say a heavily calculated cube (like Planning) will see a vast improvement in speed without the constant maintenance of defragmenting the database.

 

As I said at the beginning of this post, this release is a real game changer. Oracle development should be commended and applauded for thinking outside the box and leapfrogging to this level of functionality. I can't wait to see what improvements are on the horizon. See, I'm never happy with what I get; I always want more. Gaby Ruben told me at Kscope that his job is to make us all rewrite our optimization presentations every year or two. I think he is keeping his word. We all need to re-examine how we optimize given these fantastic changes to the product.

Thursday, February 27, 2014

Undocumented Change to Smart View 11.1.2.5

As we all know (or at least the cool kids know), the changes to Smart View have been coming quickly. So fast that the wise at Oracle decided to decouple its releases from the rest of the EPM stack. That is how we are at 11.1.2.5.200 while the rest of the stack is at 11.1.2.3. I think this is a good thing, as it allows Smart View to be more proactive in introducing changes to help us do more and better reporting. I applaud Oracle for doing this.

However, because the changes are coming so quickly, not everything gets documented very well. From the 11.1.2.5 new features document we see this:

“Change to Display of Duplicate Variable Names

With this release, Smart View added functionality to display fully qualified variable names when variable names are duplicated. This helps to identify variables defined at the global, application, and database levels. “

But what does it mean? One of the cool features of Smart View is the ability to use substitution variables. You are thinking, what is so cool about that? The add-in could always do it, and Smart View has been able to since version 9.X. Well, yes: in both cases you could enter your substitution variable like &CurrMonth, and when you refreshed, you would get the value of the substitution variable returned. With this comes the limitation that if you save the spreadsheet, you save the actual value of the substitution variable and not the substitution variable itself. Huh? What I mean is, suppose the substitution variable is set to Mar. It saves Mar and not &CurrMonth.

Starting in Smart View 11.1.2.1.102, a new way to use variables was introduced: HsGetVariable. When used in a worksheet, it retrieves the value of the variable but keeps it as a formulaic member, so next month when you change the variable, your report updates with the new information. Pretty cool!

So let's get on to the change. Suppose I have the following substitution variables:

[Screenshot: the list of substitution variables]

Notice I have both a global and a database-specific variable with the same name.

In 11.1.2.3.X and below, I could enter

=HsGetVariable("HSActive","CurrMonth") and when refreshed, get the variable. Note, HSActive means the current active connection for the sheet; I could also put in a private connection name.

[Screenshot: the retrieved variable value]

Starting in 11.1.2.5, the command allows you to put in a qualified name

=HsGetVariable("HSActive","Sample.CurrMonth") to determine the scope of where to get the variable. If the name is not qualified, it seems to get the global variable. If qualified, it picks up the variable from wherever you specify.

[Screenshot: the qualified HsGetVariable formulas]

The screenshot above shows what the formulas look like, but in reality, when you enter them, what you get is

[Screenshot: the formulas as entered]

Once you refresh you get

[Screenshot: the refreshed results]

The difference between rows 1 and 2 is that row 1 has been physically changed to Mar, while row 2 is still a formula. The interesting thing is that unless you qualify the application, you will get the global variable.

Further, while I'm using HSACTIVE for the connection name, you can actually use a different connection instead. For example, if I created a private connection for Demo.Basic called DEMO, I could use it, and it would pull from the Demo.Basic version of the variable even if I'm connected to Sample.Basic.

So here is where it gets more interesting. In 11.1.2.5.200 I tried the same thing. When trying to use a global variable

=HsGetVariable("HSActive","currmth")

I get an error message

[Screenshot: the error message]

and what did not work before, supplying both the application and database, now does work:

=HsGetVariable("HSActive","sample.basic.currmth")

As a test, I deleted the application and database level variables and then tried:

=HsGetVariable("HSActive","currmth")

and now it returns the global variable.

By the way, &Currmth stopped working as well.

Between the two versions, Oracle development has apparently been refining how this functionality works. So what worked in 11.1.2.5 does not necessarily work the same way in 11.1.2.5.200, and of course both work differently than prior versions.

Anyway, this is a good enhancement to substitution variables, and I urge you to give it a try.

Tuesday, February 11, 2014

Formatted columns in Essbase

Am I crazy? (Yes!) Formatted columns in Essbase? What am I talking about? We know you format your data in the front end, so why would I do it in Essbase? That is a good question and brings up the topic of this post. This is one of those items I put into the category of "Little used features of Essbase," an ever-evolving presentation I give at various events. What I am talking about is format strings, which became available with Essbase 11.1. Lots of people jumped on the text and date measures bandwagon, and those are in wide use now, but few if any have implemented format strings, and they can be very useful.

How about taking a date stored in Essbase and returning it as a formatted date in the format you want, or taking a numerical value and returning it as text? Wait, you say, I can do that with a text list. Well, you can sort of, but format strings give you more flexibility. For example, I can tell the format string that if the value of a column is between 0 and 28.5 it should return the text "Bad", if the value is greater than 28.5 and less than 80.3 return "Good", and if it is greater than that return "Great". Text lists have distinct integer values and can't do that without some manipulation.

Using one of the examples in the Tech reference

 http://docs.oracle.com/cd/E12825_01/epm.111/esb_techref/frameset.htm?mdx_cellvalue.htm

I took Sample.Basic and enabled Typed Measures:

[Screenshot: the Typed Measures setting enabled in the outline properties]

I then went into the Variance % measure and added the following format string:

MdxFormat(
CASE
    WHEN CellValue() <= 5 THEN  "Low"
    WHEN CellValue() <= 10 THEN "Medium"
    WHEN CellValue() <= 15 THEN  "High"
    ELSE  "Very High"
END
)

Note: the example on the web page has the quotes as smart quotes, and you have to change them back to regular quotes or you will get an error something like "Error on line 3 unknown name ?".

So what does the output look like? To help in checking the values, I added a member named Variance % Unformatted. You can see I now have text in my report that will change as the data does, and it does not require the results to be integer values.

[Screenshot: the report showing formatted text values next to the unformatted numbers]

There are a lot of possible uses for this to create more customized reporting. I should note that this is only possible with Smart View, as the classic add-in does not support the text output.

Now that I have expanded your horizons, explore the possibilities.

Tuesday, January 21, 2014

Smart View Member Misnomer

Believe it or not, I will actually be updating my blog more frequently in the future. I've got 3 articles, each half-written, and will be finishing them soon (I hope)! Two of them are on formatted columns and an undocumented change to Smart View behavior. Look for them soon.

But in the meantime, just to prove I'm not dead, here is a little tidbit about which I have been asked too many times. In Smart View, connecting to Essbase, you have the options for member display of "Member Name Only" and "Member Name and Alias" (for now I'm going to ignore qualified member names).

This confuses people, as when they select Member Name Only, they see the alias. What this setting really means is "Member Name OR Alias, depending on what is selected in Alias". If you select None, you get the member name; if you select an alias table, you get an alias. Simple, isn't it?

During the beta long ago, I tried to get them to change the wording, but Oracle could not come up with anything meaningful that fit into the selection box, so we are stuck with what it is.