Why I created a blog

It's been four years since I first created this blog. It has remained true to Essbase and related information over those years. Hopefully it has answered questions and given you insight. I will continue to provide my observations and comments on the ever-changing world of EPM. Don't be surprised if the scope of the blog changes and brings in other Hyperion topics.

Wednesday, August 31, 2016

A glimpse into FCCS

I’ve been an Essbase developer for about 25 years! In that time I’ve also done some Planning and reporting, Essbase Studio, OBIEE, BICS and a few other things, but my first love has been Essbase. So when I was offered a chance to do the very first implementation of Financial Consolidation and Close Cloud Service (FCCS), I was a little shocked, as I’m not a finance guy. We have a client that wanted to be an early adopter, so I jumped in without looking. Since FCCS is built on Essbase, I thought, how different can it be? It would be fun to find out. Needless to say, it is much more different than I expected. What I have learned so far is that there is a learning curve whether you are an “Essbase guy” or an “HFM guy”. In this post, I’m going to talk about some of the things I found odd as an Essbase developer.

First of all, if you have done a PBCS or, even better, an EPBCS implementation, you will be very familiar with the interface. FCCS is built on the EPBCS framework and most things look the same. As a matter of fact, the labels often still say Planning instead of FCCS.


Most of the dimensions are predefined. There are 11 out-of-the-box dimensions and two custom dimensions. If you choose Multi-GAAP, you only get one custom dimension. The predefined dimensions are:

· Accounts
· Entity
· Scenario
· View
· Movement
· Data Source
· Currency
· Periods
· Years
· Consolidation
· Intercompany

In all of the dimensions there are a number of predefined members, most prefixed with FCCS_. It is very important that you do not change these member names, as calculations and other processes depend on knowing what these members are.

While you can’t change the names of these members, you can add aliases to make them friendlier and more familiar to your users. In addition, you are not stuck with just these members; you can add your own members below or above them. What I did find weird is that for the balance sheet members, we do not set time balance on the members; they are all set to Flow. The account dimension is the only dense dimension, so all of its upper-level members are set to Dynamic Calc.

Second, in the Entity dimension, all of the members in the hierarchy are set to non-consolidating, i.e., tagged with a tilde (~). I found this very strange. Apparently the consolidation is handled in the calc script. By the way, you can’t see the calc script. Also, in this release you shouldn’t add alternate rollups in the Entity dimension. The calculation logic uses the hierarchy for eliminations, and adding an alternate hierarchy messes that up. I have been told that in some future release you will be able to have them.

Next on dimensions, I think they are doing something odd, as all non-level-zero members need to be tagged as Never Share. While in the regular Essbase world it wouldn’t matter as long as a parent has more than one consolidating child, it apparently matters in FCCS. We inadvertently set a dimension member to Store and it affected the consolidation calculation, giving us bad results.

I won’t go into how you have to map balance sheet members to particular Movement dimension members to get the cash flow to work; that is a topic in itself.


For you Essbase nerds out there who think you can just code anything you like in FCCS, you will be sadly disappointed. As mentioned, you cannot see, let alone modify, the built-in calc scripts. There are basically two calc scripts a user runs (Consolidate, and Translate/Compute Rates); the rest of the rules shown are for built-in processing and are not run by a user.

OK, what about building your own rules? Sorry, in this release (and probably many to come) you are not allowed to build your own rules. Because of the complexity of how the data interacts, they have locked this down. There is talk of allowing some very specific rules to be written in the future, but we will have to wait and see.

To satisfy your need to do something, you can create formulaic dynamic calc members in the outline, but be careful and test them across multiple intersections, as there are some interesting calculations that could impact the results. I suggest limiting yourself to simple things like ratio calculations.
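As a sketch of the kind of thing I mean (the member names here are hypothetical, not actual FCCS_ members), a dynamic calc member named Gross Margin % could carry a simple outline formula such as

"Gross Profit" / "Revenue" * 100;

Test even something this simple at several points of view (currencies, movements, consolidation members) before trusting it.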

Data Retrieval

Using Smart View, you get an FCCS-specific ribbon for forms and another for FCCS ad hoc.

This appears to be built on the PBCS provider, as you can select parent first or child first on zoom-in operations. How long have we been asking for that in Essbase? Currently there is a bug in switching between aliases and member names: if you have a list of members in your POV and you switch, it does not update all of the names; it leaves them as they are. The provider does not seem to be alias-aware, so if you select an alias while in member name mode, the name is not recognized and the retrieval goes to the top of the dimension. Oracle is aware of this issue and is working on it.

Retrievals are definitely more HFM-like in that when you are at the top dimension member, you will not get any data returned! You have to select at least one level down in the hierarchy (sometimes more) to get your data. For Essbase guys this is annoying, as you have to turn Suppress Missing off and on, and while I’m not sure it will be changed, Oracle is aware of my annoyance with it.

FCCS also comes with the web-based FR Studio. It does not do 100% of what the desktop client does, but it has been updated with new charting that is more 21st century and more tablet-friendly.


I think this product has a good direction, and the Oracle team has been very helpful in answering my dumb questions about the product and how to implement it. The client is extremely excited as well, as they can see how it will reduce their close cycle and give them better visibility, control and reporting.

I’ll be posting more in the future on FCCS as the implementation moves along, but this is enough to get you started.



Thursday, June 2, 2016

All Dimensions in DataExport Rows: Is It Possible?

Since I’m getting ready for KScope, I thought I would throw a quick bit your way about the DataExport calc script function. This question is asked a lot in various forms. Today the question on OTN was:

“I am doing a data export in a calc script. I have 12 dimensions in the cube. The output file is coming out fine except for one issue. The file does not write the name of the period member for which the data is being extracted. It writes the members of 11 dimensions but does not write the period.

If I fix on 1 month it will write a total of 12 columns 11 dimensions + 1 data column.
What I want is 13 columns (12 dimensions + 1 data column)

Am I missing something or is this how it is supposed to be?

Period is a dense dimension. The following are my export options:


SET DATAEXPORTOPTIONS
{
DataExportDynamicCalc ON;
DataExportDecimal 12;
DataExportRelationalFile ON;
DataExportOverwriteFile ON;
};”


 If you want to see the thread on OTN, here it is:
My answer is that the way DataExport works is to take all of the dimensions except one dense dimension and make them row members; the final dense dimension, just like in a grid retrieval, is the column dimension. In the case of a DataExport, the columns do not have a header.

In the poster’s case, Periods was being set as the columns, but that is not necessarily always going to be the case. As one response suggested, the poster could add:

DataExportColHeader Period;

That would ensure that Periods is used as the columns, but it would not give the poster what he wanted. In a 12-dimension cube, we will only get 11 row dimensions no matter what. The only way I know to get what the poster wants is to add a dense dimension to the cube. This dimension could be a single-member dimension (just the dimension name). Since it is a single member, it would not take any additional space or increase calculation or retrieval times. We could then set that dimension to be the column dimension, and we would get all of our “real” dimensions as row members. If we can’t alter the existing cube, we could clone it, add the dimension to the cloned cube, export (hopefully) only level 0 data, load it into the new cube and then use DataExport there to get the format we need. Granted, that is extra work, but sometimes you do what you have to.

As a side note, you can make that new single-member dimension useful. Call it something like Last_Update and, in your batch process, set its alias to the date/time the cube was last updated.
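Putting the two ideas together, here is a minimal sketch of the export, assuming the added single-member dense dimension is named Last_Update and that we want level 0 data:

SET DATAEXPORTOPTIONS
{
DataExportLevel LEVEL0;
DataExportColHeader "Last_Update";
DataExportOverwriteFile ON;
};
DATAEXPORT "File" "," "c:\export\alldims.txt" "#MI";

With Last_Update as the lone column dimension, every real dimension, Periods included, shows up as a row member.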

Another possible solution would be to export all 12 periods every time, so you always know what the columns are. If we really only need certain periods, we could do the DataExport to a relational table, export the sub variable value, and join to get the period we want.
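As a sketch of that relational approach (the table and column names are made up for illustration), assume the DataExport lands in a table called EXPORT_DATA with Period as one of its row columns, and the batch writes the current sub variable value to a one-row table called CURRENT_PERIOD:

select e.*
from EXPORT_DATA e
join CURRENT_PERIOD p on e.Period = p.Period

Only the rows for the period named in the sub variable come back.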

I’ve seen similar questions about ordering the output and ignoring certain columns. Neither can be done. DataExport is fast because it grabs the blocks and outputs them. It does it in the order the dimensions exist in the outline (sparse first, then dense), just like an export. So be careful: if you switch the order of dimensions, or change something from dense to sparse or sparse to dense, the order in your DataExport will change. Because it is pulling the block, it has to write all dimensions.

Monday, April 4, 2016

Undocumented SQL Data Load Rule Change

Recently I had a client do an in-place upgrade of Essbase. The only thing that was upgraded was the EPM stack; the server and SQL Server instance were not changed. Everything seemed to go smoothly except for one thing. We had a process where data is extracted from Essbase into SQL Server, some data manipulation is done, and the data is loaded back into Essbase. This worked great before the upgrade, but afterward the numbers in my SQL Server table didn’t match what was being loaded into Essbase. Hmmm. What do I mean?

Well, I created this test sample for Oracle support. First I gave them the table definition:

CREATE TABLE [dbo].[test](
[Process] [nvarchar](50) NULL,
[Organization] [nvarchar](50) NULL,
[Produt] [varchar](50) NOT NULL,
[F_Code] [varchar](50) NOT NULL,
[Accounts] [varchar](50) NOT NULL,
[amt] [float] NOT NULL
)


Next, the SQL to populate the table with some test values:

insert into test values ('Actual', 'Entity1', 'Product1', 'FC1', 'Acct1', 1467149.875)
-- six more inserts with other illustrative values followed in the original

When I retrieved from the table, I showed them that the last column, amt, matched what was loaded (1467149.875). Next I created a load rule and put in the following SQL statement:

select * from test

Since this was just a test, I didn’t bother to explicitly list the column names, but I usually do and recommend doing so.

I retrieved the data in Essbase and got 1467149.93. I don’t know about you, but to me 1467149.93 does not match 1467149.875. If my math is correct, it is off by .055. This would typically be no big deal, but these numbers were used in allocations and put the results out of balance by anywhere from 1 to 30 cents per allocation, and the differences added up across the enterprise.

The response I got from Oracle surprised me. They told me it was not Essbase but the way SQL Server handled floating-point conversions. Huh? SQL Server had not changed in my environment, and it worked before. I wasn’t buying their response and rejected it completely. So they went back to development. I kept getting stalled with similar answers about it not being Essbase but SQL Server, and they kept pointing me to web links talking about floating-point math.

https://msdn.microsoft.com/en-us/library/ms173773(v=sql.90).aspx
https://msdn.microsoft.com/en-us/library/ms173773(v=sql.100).aspx
https://msdn.microsoft.com/en-us/library/ms173773(v=sql.110).aspx

OK, I get it. I understand floating-point math and how it is an approximation, but when SQL Server shows me the correct answer and the Essbase load rule doesn’t, how can it be a SQL Server issue? I used this analogy with support: I’ve looked at a painting for years and it was clear as crystal. When I got a new pair of glasses, the painting got fuzzy. Does that mean the fault is in the painting, and not perhaps an incorrect new prescription?

After literally months of going back and forth, I finally got an answer. I didn’t like it, but at least it made sense. I was told:

“A change was made in PS4 that explains the difference between the versions. Previously, all values were binded to SQL_C_CHAR type and the drivers were responsible of doing the conversion while displaying the values. In PS4 the values get binded according to their type (i.e. float is getting binded to SQL_C_FLOAT). Therefore, the value is converted accordingly by Essbase and not by the driver. This behavior is considered correct. You can see that the value is indeed stored correctly using a workaround while creating the rule file: instead of selecting the amt column as is, use str(amt, 25, 2). The syntax is: STR ( float_expression [ , length [ , decimal ] ] ).”
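Applied to the test table above, the reworked load rule SQL would look something like this:

select Process, Organization, Produt, F_Code, Accounts, str(amt, 25, 2) as amt
from test

The str() call converts the float to a string with two decimals before Essbase ever sees it, so the load matches what SQL Server displays.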

So now I have to go back and re-code all of the SQL to convert the data type. It would have been nice if Oracle had documented the change, or even better, provided an option to use the old behavior. But at least I have an answer, and so do you, in case you are reading this after a Google search because you are having the same problem.

Wednesday, March 16, 2016

Don't be a Glenn

Sometimes old errors are caused by different reasons than what originally caused them. Case in point: I was working on a batch and MaxL script to automate an Essbase Studio deploy command. I ran the script and got an error message, “Essmsh exe has stopped working”, and the batch file crashed. Thinking I did nothing wrong, I googled the error message and got multiple references to a document stating this was a bug and to use the 32-bit version of MaxL instead of the 64-bit version.

This would have been fine except for the fact that I had other deploy commands working fine using the same Essmsh command. In addition, the document was for an earlier release, and the bug had since been fixed. I went around and around, changing dimension order, as I remembered I had issues in the past with the accounts dimension being first. Nothing I did to the script or the outline mattered.

I spent way too much time on this and was getting very frustrated. The process actually did produce a log, which I finally looked at and found:

Caused by: Cannot end incremental build. Essbase Error(1060053): Outline has errors
\\Outline verification errors:

\\Member Income Stmt Calc has no value source.  Aggregate storage outlines require non level-0 members in accounts dimension to be either non-stored members or have internal/external formula.

\\Member REPORTING_STATS has no value source.  Aggregate storage outlines require non level-0 members in accounts dimension to be either non-stored members or have internal/external formula.

\\Member MISC has no value source.  Aggregate storage outlines require non level-0 members in accounts dimension to be either non-stored members or have internal/external formula.

Hmm. So I went back into the Essbase model properties, and lo and behold, the setting for data storage was set to use existing. I do have a column in my data source for storage type, which I had forgotten to set as the external source. I reran and it worked like a charm. This made sense, as I had set the accounts dimension to rebuild; the accounts would have all been deleted and not re-added properly.

The moral of the story is twofold. First, when you have an issue, don’t assume you did everything right the first time (I didn’t). Second, don’t assume that what you read on Google for older versions is going to solve your issue.

Tuesday, December 29, 2015

A Guest Post Reviewing Developing Essbase Applications Hybrid Techniques

OK, I’m just the middleman here. Tracy McMullen asked me to post her review of Developing Essbase Applications: Hybrid Techniques and Practices. These are all her words. I’ve not changed anything in this review except to put quotes around it.

Truly Taking Your Essbase Knowledge to the Next Level
“You know how sometimes when a sequel to a movie or good book comes out… it just stinks compared to the first release? For instance, Dumb and Dumber To or Speed 2… Other movies and books know that releasing a sequel to a classic first release just wouldn’t be the same, and they smartly do not follow up their product with a sequel (like Old School or E.T.). Cameron and team’s first edition of Developing Essbase Applications: Advanced Techniques for IT and Finance Professionals was a valuable addition to all Essbase administrators’ libraries. So how does their second edition (or “sequel”), Developing Essbase Applications: Hybrid Techniques and Practices, fare? This new book follows in the footsteps of sequels like Empire Strikes Back, The Godfather II or The Dark Knight, taking the great things about the first edition and then improving upon them in the following release. They’ve hit another home run, providing an invaluable tool to the Essbase community.
This Essbase book has something for every type of Essbase consumer, from the super techy Essbase administrator to the IT / infrastructure team supporting Essbase to the brand new Essbase administrator and finally the end user. So buy at least one (or more) for your organization and share with your Essbase stakeholders. The Developing Essbase Applications Hybrid Techniques and Practices writing team (John, Martin, Tim, Cameron, Glenn, Mike, William) has given us this handy toolkit with code examples and reusable code and detailed explanations of simple to complex topics like design best practices, how some of the new features work under the covers, and detailed “how to” steps so that you can follow along. When it comes to testing new features, they’ve done the testing for us.
A few of my favorite parts of the book: John provides general guidance on Essbase server configuration in Exalytics environments, with testing results showing just how powerful Exalytics can be. If you are looking to upgrade and purchase new hardware, read the Exalytics chapter! It might give you some of the ammunition to make your case for Exalytics in the purchase debate. I loved Martin’s Magical Number 7 and how it applies to Essbase dimension design. Even experienced admins can benefit from this design best practices chapter. All of the new buzz around hybrid cubes is really exciting! But what is it exactly? Tim and Cameron dissect the new hybrid option for Essbase and share actual performance results. You’ll find some interesting (and surprising) results. Glenn’s chapter on SQL for Essbase is a must-read for every Essbase administrator. It helps both the IT developer and the business user understand how SQL can be utilized for loading data into Essbase and extracting data from Essbase back to relational targets. If you’ve ever wanted to load test your Essbase environment, Tim’s chapter will show you the way to accomplish this tricky task. As a mom of two sibling girls, I completely appreciated the sibling analogy for OBIEE and Essbase in Mike’s chapter, Copernicus Was Right. Essbase isn’t the center of the universe? Why does symmetry matter? Mike rocks the boat a little in this part of the book but shows how to really address the challenges of Essbase and OBIEE integration. If you aren’t familiar with Dodeca, check out Cameron’s chapter on this alternative tool for end users to interact with both Essbase and relational sources. William’s Smart View chapter breaks down all of the different query options available within Smart View (did you even know these options existed?). He provides a super helpful comparison chart, then deep dives into the content with examples of the different ways to use Smart View to interact with data.
Developing Essbase Applications Hybrid Techniques and Practices is not just a high-level book. This is a roll-up-your-sleeves and jump-in-the-weeds kind of book. There is a LOT of information, which can be overwhelming at times (but really, that is a good thing). Reread if you need to, because all of the details are there to learn about Essbase. I concur with Steve Liebermensch, who wrote the Afterword: add this book to your shopping cart and pay for it already! I’m certain you will learn something new (likely a lot of something news) that will help you in your journey with Essbase.”

Monday, November 9, 2015

Essbase 12C is here or is it?

A few weeks ago when I was at Oracle Open World, there was a big to-do, partially started by me. Someone had posted that Essbase 12c had been released. Thinking quickly, I immediately found the readme and saw that EAS and Essbase Studio were not supported. I couldn’t fathom that. Going back to the command line to build cubes? And how would load rules get created?
I was hoping that perhaps it had the new simplified interface that Gabby Rubin has been hinting at (and that I actually got to see at OOW -- it is awesome), but alas, no.
It turns out this 12c release is for use with Oracle Business Intelligence Enterprise Edition (OBIEE) 12c, which was released the Friday before. It does have cool things we can expect later in our own EPM version of Essbase.

First, it has the new Java kernel, so I’m happy to let the OBIEE people debug it for us. This Java version is supposed to be released first as an Essbase cloud service, so don’t expect it any time soon in-house. I would say on premise, but apparently Larry is rebranding on premise as some kind of private cloud, or so the story goes.
Second, it supports all of the functions, including cross-dims, in hybrid mode. That is huge. It means they figured out the cross-dim performance issue. I can’t wait to try it.

Finally, they are putting Essbase in memory. The real use case for Essbase with OBIEE is for OBIEE to spin off Essbase cubes that cache data to make OBIEE reports faster. We won’t be able to get at these cubes, but OBIEE reports will.
So for OBIEE (at least) a REALLY cool version of Essbase is available, but we EPM-ers will have to wait. Maybe Uncle Gabby will give me a version for Hanukkah to play with, or a version for each night. I love new toys.

Tuesday, July 7, 2015

Exalytics X5-4 Fast and Furious

I love going to KScope because I learn about new features and products. This event was no different. In the Sunday symposiums with Oracle, there was a discussion of the new Exalytics X5-4. It was only last September at Open World that the X4-4 was announced; Edward Roske talks about it in his blog. It was a big deal then. With the introduction of the X5-4 only nine months later, it becomes even bigger, better and “badder”. With the X5-4 we go to a maximum of 72 cores, up from 60, and more memory. In addition to more cores, the X5-4 supports a new NVMe high-bandwidth flash technology that improves throughput by 2.5 times. I won’t bore you with the details; if you want to read about them, here are the specs.

To me the most remarkable thing about this is that you get more and the price has not increased. All the way back to the X3-4, the price has remained the same. With a list price of $175K, it is what I consider cheap.

As John Booth mentions in his blog, you can get this in an X5-2 configuration as well, offering additional flexibility. Note: I had a correction from John. The X5-2 was more a wish from him than a reality. While you could create an X5-2 using sub-capacity licensing, you are still paying for the physical cores (thanks to Steve Liebermensch for that clarification).

For us in EPM it keeps getting better and better.