Why I created a blog

It's been four years since I first created this blog. It has remained true to Essbase and related information over those years, and hopefully it has answered questions and given you insight. I will continue to provide my observations and comments on the ever-changing world of EPM. Don't be surprised if the scope of the blog changes and brings in other Hyperion topics.

Thursday, June 2, 2016

All Dimensions in DataExport Rows: Is It Possible?


Since I’m getting ready for KScope, I thought I would throw a quick bit your way about the DataExport calc script command. This question is asked a lot in various forms. Today the question on OTN was:

“I am doing a data export in a calc script. I have 12 dimensions in the cube. The output file is coming out fine except for one issue. The file does not write the name of the period member for which the data is being extracted. It writes the members of 11 dimensions but does not write the period.

If I FIX on one month it will write a total of 12 columns (11 dimensions + 1 data column).
What I want is 13 columns (12 dimensions + 1 data column).


Am I missing something or is this how it is supposed to be?

Period is a dense dimension. The following are my export options:

SET DATAEXPORTOPTIONS
{DataExportDynamicCalc On;
DataExportDecimal 12;
DataExportRelationalFile On;
DataExportOverwriteFile ON;
};”

If you want to see the thread on OTN, here it is:

My answer is that the way DataExport works is to take all of the dimensions except one dense dimension and make them row members; that final dense dimension becomes the column dimension, just like in a grid retrieval. In the case of a DataExport, the columns do not have a header.

In the poster’s case, Period was being set as the columns, but that is not necessarily always going to be the case. As one response suggested, the poster could add:

DataExportColHeader Period;

That would ensure that Period is used as the column dimension, but it would not give the poster what he wanted. In a 12-dimension cube, we will only ever get 11 row dimensions. The only way I know to get what the poster wants is to add a dense dimension to the cube. This dimension could be a single-member dimension (just the dimension name). Since it is a single member, it would not take any additional space or increase calculation or retrieval times. We could then set that dimension to be the column dimension, and we would get all of our “real” dimensions as row members. If we can’t alter the existing cube, we could clone it, add the dimension to the clone, export (hopefully) only level 0 data, load that into the new cube, and then use DataExport there to get the format we need. Granted, that is extra work, but sometimes you do what you have to.

As a side note, you can make that new single-member dimension useful. Call it something like Last_Update, and in your batch process, set its alias to the date/time the cube was last updated.
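Putting the pieces together, here is a sketch of what the calc script might look like, assuming the new single-member dense dimension is named Last_Update (the dimension name, options, and file path here are illustrative, not a tested script):

```
SET DATAEXPORTOPTIONS
{
DataExportLevel "LEVEL0";
DataExportColHeader "Last_Update";
DataExportOverwriteFile ON;
};
/* With Last_Update forced to be the column dimension, every  */
/* other dimension, including Period, shows up as a row member */
DATAEXPORT "File" "," "c:\temp\alldims.txt" "#MI";
```

Since Last_Update has only the one member, each output row carries all 12 “real” dimension members plus a single data column.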

Another possible solution would be to export all 12 periods every time, so you always know what the columns are. If we really only need certain periods, we could do the DataExport to a relational table, export a sub variable, and join to get the period we want.
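As a sketch of that last idea (all table and column names here are hypothetical): land the DataExport output in a relational table, have the batch process write the substitution variable value to a one-row table, and join the two:

```sql
-- EXPORT_DATA holds the DataExport output; CUR_PERIOD holds the
-- substitution variable value written by the batch process.
SELECT e.*
FROM EXPORT_DATA e
JOIN CUR_PERIOD p
  ON e.PERIOD = p.PERIOD;
```

The join filters the full export down to just the period the sub variable names, without hard-coding it anywhere in SQL.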

I’ve seen similar questions about ordering the output and ignoring certain columns. Neither can be done. DataExport is fast because it grabs the blocks and outputs them. It does this in the order the dimensions exist in the outline (sparse first, then dense), just like a full export. So be careful: if you switch the order of dimensions, or change a dimension from dense to sparse or sparse to dense, the order in your DataExport output will change. And because it is pulling whole blocks, it has to write all dimensions.

Monday, April 4, 2016

Undocumented SQL Data Load Rule Change

Recently I had a client do an in-place upgrade of Essbase from 11.1.2.2 to 11.1.2.4. The only thing that was upgraded was the EPM stack; the server and the SQL Server instance were not changed. Everything seemed to go smoothly except for one thing. We had a process where data is extracted from Essbase into SQL Server, some data manipulation is done, and the data is loaded back to Essbase. This worked great in 11.1.2.2, but in 11.1.2.4 the numbers in my SQL Server table didn’t match what was being loaded into Essbase. Hmmm. What do I mean?

Well, I created this test sample for Oracle support. First I gave them a table definition:

CREATE TABLE [dbo].[test](
[Process] [nvarchar](50) NULL,
[Organization] [nvarchar](50) NULL,
[Produt] [varchar](50) NOT NULL,
[F_Code] [varchar](50) NOT NULL,
[Accounts] [varchar](50) NOT NULL,
[amt] [float] NOT NULL
) ON [PRIMARY]

Next, some SQL to populate the table with test values:

insert into Test values('F1','00N18','CP.T354','F.21N1','E795000',1467149.93)
insert into Test values('F1','00N18','CP.T354','F.21N2','E795000',1467149.93)
insert into Test values('F1','00N18','CP.T354','F.21N3','E795000',1467149.93)
insert into Test values('F1','00N18','CP.T354','F.21N4','E795000',1467149.93)
insert into Test values('F1','00N18','CP.T354','F.21N5','E795000',1467149.93)
insert into Test values('F1','00N18','CP.T354','F.21N6','E795000',1467149.93)
insert into Test values('F1','00N18','CP.T354','F.21N7','E795000',1467149.93)

When I retrieved from the table, I got the following results, which I showed them:

[screenshot: SQL Server query results, with the amt column showing 1467149.93]

Notice the last column, amt, matches what was loaded. Next I created a load rule and put in the following SQL statement:

* From test

Since this was just a test, I didn’t bother to explicitly list the column names (the SELECT keyword itself is supplied by the load rule), but I usually do list them and recommend doing so.

I retrieved the data and got:

[screenshot: Essbase retrieval, with the amt values showing 1467149.875]

I don’t know about you, but to me 1467149.93 does not match 1467149.875. If my math is correct, it is off by .055. That would typically be no big deal, but these numbers were used in allocations and put the results out of balance by anywhere from 1 to 30 cents per allocation, and the differences added up across the enterprise.

The response I got from Oracle surprised me. They told me it was not Essbase but the way SQL Server handled floating-point conversions. Huh? SQL Server had not changed in my environment, and this had worked before. I wasn’t buying their response and rejected it completely. So they went back to development. I kept getting stalled with similar answers about it not being Essbase but SQL Server, and they kept pointing me to web links talking about floating-point math:

https://msdn.microsoft.com/en-us/library/ms173773(v=sql.90).aspx
https://msdn.microsoft.com/en-us/library/ms173773(v=sql.100).aspx
https://msdn.microsoft.com/en-us/library/ms173773(v=sql.110).aspx

OK, I get it. I understand floating-point math and how it is an approximation, but when SQL Server shows me the correct answer and Essbase load rules don’t, how can it be a SQL Server issue? I used an analogy with support: I’ve looked at a painting for years and it was crystal clear. When I got a new pair of glasses, the painting got fuzzy. So that means the fault is in the painting, and not perhaps an incorrect new prescription?

After literally months of going back and forth, I finally got an answer. I didn’t like it, but at least it made sense. I was told:

“A change was made in PS4 that explains the difference between the versions. Previously, all values were binded to SQL_C_CHAR type and the drivers were responsible of doing the conversion while displaying the values. In PS4 the values get binded according to their type (i.e. float is getting binded to SQL_C_FLOAT). Therefore, the value is converted accordingly by Essbase and not by the driver. This behavior is considered correct. You can see that the value is indeed stored correctly using a workaround while creating the rule file: instead of selecting the amt column as is, use str(amt, 25, 2). The syntax is: STR ( float_expression [ , length [ , decimal ] ] ).”
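Oracle’s explanation does check out arithmetically. SQL_C_FLOAT is a single-precision (32-bit) type, and the nearest 32-bit float to 1467149.93 is exactly 1467149.875, the very value I was seeing. A quick check (Python here purely as illustration):

```python
import struct

# Round-trip the value through a 32-bit IEEE 754 float, which is
# effectively what binding a larger value to SQL_C_FLOAT does.
value = 1467149.93
as_float32 = struct.unpack('f', struct.pack('f', value))[0]
print(as_float32)           # 1467149.875
print(value - as_float32)   # ~0.055, the difference noted above
```

At this magnitude a 32-bit float can only represent multiples of 0.125, so 1467149.93 has to round to 1467149.875.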

So now I have to go back and re-code all the SQL to convert the data type. It would have been nice if Oracle had documented the change or, even better, provided an option for the old behavior. But at least I have an answer, and so do you, in case you found this via a Google search because you are having the same problem.
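For reference, applying the workaround to the test table above, the load rule SQL would become something like this (and this time the columns are listed explicitly, as I recommend):

```sql
SELECT Process, Organization, Produt, F_Code, Accounts,
       -- STR converts the float to a string before the binding
       -- happens, so the value reaches Essbase intact
       STR(amt, 25, 2) AS amt
FROM test
```

The column names follow the table definition shown earlier; adjust the STR length and decimal arguments to suit your data.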

Wednesday, March 16, 2016

Don't be a Glenn


Sometimes old errors are caused by different reasons than what originally caused them. Case in point: I was working on a batch and MaxL script to automate an Essbase Studio deploy command in 11.1.2.3.5X. I ran the script, got the error message “essmsh.exe has stopped working,” and the batch file crashed. Thinking I did nothing wrong, I googled the error message and got multiple references to a document stating this was a bug and to use the 32-bit version of MaxL instead of the 64-bit version.

This would have been fine except for the fact that I had other deploy commands working fine using the same essmsh command. In addition, the document was for 11.1.2.1, and the bug was fixed in 11.1.2.2. I went around and around, changing dimension order, as I remembered I had issues in the past with the Accounts dimension being first. Nothing I did to the script or the outline mattered.

I spent way too much time on this and was getting very frustrated. The process actually did produce a log, which I finally looked at and found:
Caused by: Cannot end incremental build. Essbase Error(1060053): Outline has errors
\\Outline verification errors:

\\Member Income Stmt Calc has no value source.  Aggregate storage outlines require non level-0 members in accounts dimension to be either non-stored members or have internal/external formula.

\\Member REPORTING_STATS has no value source.  Aggregate storage outlines require non level-0 members in accounts dimension to be either non-stored members or have internal/external formula.

\\Member MISC has no value source.  Aggregate storage outlines require non level-0 members in accounts dimension to be either non-stored members or have internal/external formula.

Hmm. So I went back into the Essbase model properties and, lo and behold, the setting for data storage was set to use existing. I do have a column in my data source for storage type, which I had forgotten to set as an external source. I reran and it worked like a charm. This made sense, as I had set the Accounts dimension to rebuild: the accounts would all have been deleted and not re-added properly.

The moral of the story is twofold. First, when you have an issue, don’t assume you did everything right the first time (I didn’t). Second, don’t assume that what you read on Google about older versions is going to solve your issue.

Tuesday, December 29, 2015

A Guest Post Reviewing Developing Essbase Applications Hybrid Techniques

OK, I’m just the middleman here. Tracy McMullen asked me to post her review of Developing Essbase Applications: Hybrid Techniques and Practices. These are all her words; I’ve not changed anything in this review except to put quotes around it.

“Truly Taking Your Essbase Knowledge to the Next Level
You know how sometimes when a sequel to a movie or good book comes out… it just stinks compared to the first release? For instance, Dumb and Dumber To or Speed 2… Other movies and books know that releasing a sequel to their classic first release just wouldn’t be the same, and they smartly do not follow up their product with a sequel (like Old School or E.T.). Cameron and team’s first edition of Developing Essbase Applications: Advanced Techniques for IT and Finance Professionals was a valuable addition to every Essbase administrator’s library. So how does their second edition (or “sequel”), Developing Essbase Applications: Hybrid Techniques and Practices, fare? This new book follows in the footsteps of sequels like The Empire Strikes Back, The Godfather Part II, or The Dark Knight, taking the great things about the first edition and improving upon them in the following release. They’ve hit another home run, providing an invaluable tool to the Essbase community.
This Essbase book has something for every type of Essbase consumer, from the super techy Essbase administrator to the IT / infrastructure team supporting Essbase to the brand new Essbase administrator and finally the end user. So buy at least one (or more) for your organization and share with your Essbase stakeholders. The Developing Essbase Applications Hybrid Techniques and Practices writing team (John, Martin, Tim, Cameron, Glenn, Mike, William) has given us this handy toolkit with code examples and reusable code and detailed explanations of simple to complex topics like design best practices, how some of the new features work under the covers, and detailed “how to” steps so that you can follow along. When it comes to testing new features, they’ve done the testing for us.
A few of my favorite parts of the book: John provides general guidance on Essbase server configuration in Exalytics environments, along with testing results showing just how powerful Exalytics can be. If you are looking to upgrade and purchase new hardware, read the Exalytics chapter! It might give you some of the ammunition to make your case for Exalytics in the purchase debate. I loved Martin’s Magical Number 7 and how it applies to Essbase dimension design. Even experienced admins can benefit from this design best practices chapter. All of the new buzz around hybrid cubes is really exciting! But what is it exactly? Tim and Cameron dissect the new hybrid option for Essbase and share actual performance results. You’ll find some interesting (and surprising) results. Glenn’s chapter on SQL for Essbase is a must-read for every Essbase administrator. It helps both the IT developer and the business user understand how SQL can be utilized for loading data into Essbase and extracting data from Essbase back to relational targets. If you’ve ever wanted to load test your Essbase environment, Tim’s chapter will show you the way to accomplish this tricky task. As a mom of two sibling girls, I completely appreciated the sibling analogy for OBIEE and Essbase in Mike’s chapter, Copernicus Was Right. Essbase isn’t the center of the universe? Why does symmetry matter? Mike rocks the boat a little in this part of the book but shows how to really address the challenges of Essbase and OBIEE integration. If you aren’t familiar with Dodeca, check out Cameron’s chapter on this alternative tool for end users to interact with both Essbase and relational sources. William’s Smart View chapter breaks down all of the different query options available within Smart View (did you even know these options existed?). He provides a super helpful comparison chart, then deep dives into the content with examples of the different ways to use Smart View to interact with data.
Developing Essbase Applications: Hybrid Techniques and Practices is not just a high-level book. This is a roll-up-your-sleeves, jump-in-the-weeds kind of book. There is a LOT of information, which can be overwhelming at times (but really that is a good thing). Reread if you need to, because all of the details are there to learn about Essbase. I concur with Steve Liebermensch, who wrote the Afterword: add this book to your shopping cart and pay for it already! I’m certain you will learn something new (likely a lot of somethings new) that will help you in your journey with Essbase.”

Monday, November 9, 2015

Essbase 12C is here or is it?

A few weeks ago when I was at Oracle OpenWorld, there was a big to-do, partially started by me. Someone had posted that Essbase 12c had been released. Thinking quickly, I immediately found the readme and saw that EAS and Essbase Studio were not supported. I couldn’t fathom that: going back to the command line to build cubes? And how would load rules get created?
I was hoping that perhaps it had the new simplified interface that Gabby Rubin has been hinting about (which I actually got to see at OOW -- it is awesome), but alas, no.
It turns out this 12c release is for use with Oracle Business Intelligence Enterprise Edition (OBIEE) 12c, which was released the Friday before. It does have cool things we can expect later in our own EPM version of Essbase.

First, it has the new Java kernel, so I’m happy to let the OBIEE people debug it for us. This Java version is supposed to be released first as the EssCloud service, so don’t expect it any time soon in-house. I would say “on premise,” but apparently Larry is rebranding on premise as some kind of private cloud, or so the story goes.
Second, it supports all of the functions, including cross-dimensional references, in hybrid mode. That is huge. It means they figured out the cross-dim performance issue. I can’t wait to try it.

Finally, they are putting Essbase in memory. The real use case for Essbase with OBIEE is for OBIEE to spin off Essbase cubes that cache data to make OBIEE reports faster. We won’t be able to get at these cubes directly, but OBIEE reports will.
So for OBIEE (at least) a REALLY cool version of Essbase is available, but we EPM-ers will have to wait. Maybe Uncle Gabby will give me a version for Hanukkah to play with, or a version for each night. I love new toys.

Tuesday, July 7, 2015

Exalytics X5-4 Fast and Furious

I love going to KScope because I learn about new features and products. This event was no different. In the Sunday symposiums with Oracle there was a discussion of the new Exalytics X5-4. It was only last September at OpenWorld that the X4-4 was announced; Edward Roske talks about it in his blog. It was a big deal then. With the introduction of the X5-4 only nine months later, it becomes even bigger, better, and “badder.” With the X5-4 we go to a maximum of 72 cores, up from 60, and more memory. In addition to more cores, the X5-4 supports new NVMe high-bandwidth flash technology that improves throughput by 2.5 times. I won’t bore you with the details; if you want to read about them, here are the specs

To me the most remarkable thing about this is that you get more, yet the price has not increased. All the way back to the X3-4, the price has remained the same. With a list price of $175K, it is what I consider cheap.

As John Booth mentions in his blog, you can get this in an X5-2 configuration as well, offering additional flexibility. Note: I had a correction from John. The X5-2 was more a wish from him than a reality. While you could create an X5-2 using sub-capacity licensing, you are still paying for the physical cores (thanks to Steve Liebermensch for that clarification).

For us in EPM it keeps getting better and better.

Monday, July 6, 2015

Essbase Studio 11.1.2.4.002 patch

Well, I survived KScope. It was a very good event, with participants getting over 175 sessions related to EPM/BI. I sat in a number of sessions and was impressed with the quality of the speakers and presentations. I also had the opportunity to speak in four sessions, and I think they went pretty well, at least judging from the questions people asked.

Patch 11.1.2.4.002 came out the other day, and I read through the readme file. There were only two bug fixes and one documentation change.

The first bug fix relates to a problem with stored dimensions (I assume ASO) where it would not let you use external consolidation operators. 

The documentation change fixes the statement that you can drill through on any level of a hierarchy, including the top level. That is incorrect; you can’t drill through from the top member of the hierarchy (the dimension name).

The most interesting bug fix is the second one, and I’m surprised they are calling it a bug, as it used to be described as a limitation. When doing a drill-through report on a recursive hierarchy, the drill-through would fail with an error message if more than 1,000 level 0 members were returned in the query. For recursive queries, Essbase Studio creates an IN clause with the list of level zero members under the selected member. The 1,000-member list was a limitation of the Oracle database, as that is the maximum number of items allowed in an IN list. I’ve not been able to test this yet, and I wonder how development got around that limitation.

I guess the moral of the story is: even if something is listed as a product limitation, still submit bug and enhancement requests; it is very possible that what you need will be changed.