Why I created a blog

It's been four years since I first created this blog. In that time it has remained true to Essbase and related topics, and hopefully it has answered questions and given you some insight. I will continue to provide my observations and comments on the ever-changing world of EPM. Don't be surprised if the scope of the blog changes and brings in other Hyperion topics.

Friday, September 23, 2016

Migrating from EIS to Essbase Studio

Open World week for me was sessions, lots of meetings with Oracle, and meetups. At one I was talking with Aneel Shenker, the product manager for Essbase Studio. We somehow got on the topic of migrating from EIS to Essbase Studio. I told him that I typically rebuild the models because it is difficult to get the migration wizard to work.

He agreed that it was very underdocumented, which is why, a little while back, he investigated the process to get it working (and he says it works pretty well). He created two white papers on the migration:
Oracle Support Document 2094412.1 (EIS - Essbase Integration Services to Essbase Studio migration) for the migration and
Oracle Support Document 2095995.1 (Troubleshooting the Essbase Integration Services (EIS) Migration Tool within Essbase Studio)
for troubleshooting migrations. To get to either of these documents you have to sign in to Oracle Support with your ID.
The migration itself seems pretty straightforward:

From the migration document (some of the formatting is mine)

1. Ensure that your EIS metaoutlines and models comply with the guidelines listed in the Essbase Studio User's Guide chapters "Guidelines for Migrating Metaoutlines and Models" and "Properties Not Migrated".
2. Install Studio Console on a 32-bit machine. Note: If you are planning to install the EPM-bundled ODBC drivers, it should be a server operating system.
3. On the 32-bit machine where Studio Console is installed, install ODBC drivers that work with the EIS catalog database.
Note: You can install the DataDirect ODBC drivers that are shipped with EPM by installing EPM Foundation Components. There is no need to run the EPM System Configurator; installation alone is enough to install the drivers.
4. Create a User DSN that can connect to the EIS catalog database. Note: It must be a User DSN, not a System DSN.
5. Ensure that you can connect to the EIS catalog database through the ODBC User DSN on the 32-bit machine. Note: EIS Server does not need to be running.
6. Ensure Studio Server is running. Note: Studio Server can be running on a separate 64-bit machine.
7. In Essbase Studio, create a data source that points to the same source database that was used for EIS.

EIS to Studio Migration Steps
1. Start Studio Console on the 32-bit machine and connect to the Studio Server the EIS catalog should be migrated to.
2. In Studio Console, start the EIS migration tool (Tools -> Essbase Integration Services Catalog Migration).
3. Fill in the "EIS Catalog ODBC DSN", "User", and "Password" fields.
4. Click the "Fetch models" button, and the EIS models should be listed.
5. You can now choose
- the matching data source connection that was used for EIS. Note: Typically different from the EIS catalog connection.
- the EIS catalog and schema from the drop-down. Note: If your relational database is Oracle, selecting a catalog is not required.
6. Select a folder where the migrated metaoutline and model should be put.
7. Click the Migrate button.
8. If there are any errors, double-check that all documented guidelines are followed.
Note: If the migration does not work, it is possible that it is needed to rebuild the models from scratch in Essbase Studio. There are cases that are too complex for the EIS Catalog migration to work.
You should largely still be able to use the source tables that you were using with EIS, although a few tweaks might be needed.

Thank you, Aneel, for providing the community with this much-needed information.

Thursday, September 22, 2016

Quick Smart View post

This will just be a short post that might help you out.
A number of Ace Director friends and I met with the Oracle Smart View team at Oracle Open World this week and evangelized on your behalf for changes to the product. More posts will be coming from all of us on some things we learned, and if you follow us all, there might be overlap. We just want to get the word out to as many people as we can.

One thing that came out during the meeting is that there are two Excel add-ins that cause issues for Smart View, and it is suggested you turn them off when using Smart View.
The first is the WebEx add-in and the second is the Adobe PDF add-in. Something in the way they take control of spreadsheets messes with Smart View functionality and stability.

I'll be putting together a slightly longer blog post later on some Knowledge Base articles that help with fixing common problems.

Thursday, September 8, 2016

FCCS Dimension Security

Just a short post today. Everyone else in the world probably knows this, but I learned something new working with FCCS and dimension security. I was having an issue where I had not set up security on a number of dimensions, but when I used a non-admin user and tried to retrieve data in Smart View, I got nothing more than the dimension name and #No Access.

It took working with Development to figure out the issue. It turns out that in FCCS you can "unsecure" dimensions that were secured (and probably in PBCS and EPBCS as well). When I set up the application, it secured all of the dimensions without me knowing it. In order to see the members, I had to add security rules to every dimension to get data.

That should not have been necessary. What I discovered (with Oracle's help) was that security has to be turned on or off at the dimension level; there is a check box that sets it. Last I remember, on-prem this was greyed out and could not be unset once set. In FCCS, you can turn it on or off.
Once I removed the security rules I had added and unchecked the box, everything worked hunky dory! I could see the hierarchies and my retrievals now return data.

Wednesday, August 31, 2016

A glimpse into FCCS

I’ve been an Essbase developer for about 25 years! In that time I’ve also done some Planning and reporting, Essbase Studio, OBIEE, BICS, and a few other things, but my first love has been Essbase. So when I was offered a chance to do the very first implementation of Financial Consolidation and Close Cloud Service (FCCS), I was a little shocked, as I’m not a finance guy. We had a client that wanted to be an early adopter, so I jumped in without looking. Since FCCS is built on Essbase, I thought: how different can it be? It would be fun to find out. Needless to say, it is much more different than I expected. What I have learned so far is that there is a learning curve whether you are an “Essbase guy” or an “HFM guy”. In this blog, I’m going to talk about some of the things I found odd as an Essbase developer.

First of all, if you have done a PBCS or, even better, an EPBCS implementation, you will be very familiar with the interface; FCCS is built on the EPBCS framework and most things look the same. As a matter of fact, the labels often still say Planning instead of FCCS.


First, most of the dimensions are predefined. There are 11 out-of-the-box dimensions and two custom dimensions. (If you choose Multi-GAAP, you only get one custom dimension.) The predefined dimensions are:

- Accounts
- Entity
- Scenario
- View
- Movement
- Data Source
- Currency
- Periods
- Years
- Consolidation
- Intercompany

In all of the dimensions, there are a number of predefined members. Most of the members are prefixed with FCCS_. It is very important that you do not change these member names as calculations and other processes depend on knowing what these members are.

While you can’t change the names of these members, you can add aliases to make them friendlier and more familiar to the users. In addition, you are not stuck with just these members; you can add your own members below or above them. What I did find weird is that for the balance sheet members, we do not set time balance; all members are set to Flow. The Accounts dimension is the only dense dimension, so all upper-level members are set to Dynamic Calc.

Second, in the Entity dimension, all of the members in the hierarchy are set to non-consolidating, i.e. a tilde (~). I found this very strange. Apparently they handle the consolidation in the calc script. By the way, you can’t see the calc script. Also, in this release, you shouldn’t add alternate rollups in the Entity dimension. The calculation logic uses the hierarchy for eliminations, and adding an alternate messes that up. I have been told that in some future release you will be able to have them.

Next on dimensions, I think they are doing something odd, as all non-level-zero members need to be tagged as Never Share. While in the regular Essbase world it would not matter as long as a parent has more than one consolidating child, it apparently matters in FCCS. We inadvertently set a dimension member to Store and it affected the consolidation calculation, giving us bad results.

I won’t go into how you have to map balance sheet members to particular Movement dimension members to get the cash flow to work; that is a topic in itself.


For you Essbase nerds out there who think you can just code anything you like in FCCS, you will be sadly disappointed. As mentioned, you cannot see, let alone modify, the built-in calc scripts. There are basically just a few user-run scripts (Consolidate, Translate, and Compute Rates); the rest of the rules shown are for built-in processing and are not run by a user.

OK, what about building your own rules? Sorry, in this release (and probably many to come) you are not allowed to build your own rules. Because of the complexity of how the data interacts, they have locked this down. There is talk of allowing some very specific rules to be written in the future, but we will have to wait and see.

To satisfy your need to do something, you can create formulaic dynamic calc members in the outline, but be careful and test them across multiple intersections, as there are some interesting built-in calculations that could impact the results. I suggest limiting yourself to simple things like ratio calculations.
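As an illustration of the kind of simple ratio I mean, the outline formula on a dynamic calc member might look like the sketch below. The member names are hypothetical; substitute the real members from your own outline:

```
/* Hypothetical dynamic calc member, e.g. "Gross Margin %" */
/* Essbase's % operator returns ("Gross Profit" / "Sales") * 100 */
"Gross Profit" % "Sales";
```

Keep formulas like this to simple, intersection-safe math, and test the results at both level 0 and upper-level members.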

Data Retrieval

Using Smart View you get an FCCS-specific ribbon for forms, and another for FCCS ad hoc.

This appears to be built on the PBCS provider, as you can select parent-first or child-first on zoom-in operations. How long have we been asking for this in Essbase? Currently there is a bug in switching between aliases and member names: if you have a list of members in your POV and you switch, it does not update all of the names; it leaves them as they are. The provider does not seem to be alias-aware, so if you select an alias while in member name mode, the name is not recognized and the retrieval goes to the top of the dimension. Oracle is aware of this issue and is working on it.

Retrievals are definitely more HFM-like in that when you are at the dimension member, you will not get any data returned! You have to select at least one level down in the hierarchy (sometimes more) to get your data. For Essbase guys this is annoying, as you have to keep turning Suppress Missing off and on, and while I’m not sure it will be changed, Oracle is aware of my annoyance with it.

FCCS also comes with the web-based FR Studio. It does not do 100% of what the client does, but it has been updated with new charting that is more 21st century and more tablet-friendly.


I think this product has a good direction, and the Oracle team has been very helpful in answering my dumb questions about the product and how to implement it. The client is extremely excited as well, as they can see how it will reduce their close cycle and give them better visibility, control, and reporting.

I’ll be posting more in the future on FCCS as the implementation moves along, but this is enough to get you started.



Thursday, June 2, 2016

All Dimensions in DataExport rows, Is it possible?

Since I’m getting ready for KScope, I thought I would throw a quick bit your way about the DataExport calc script function. This question is asked a lot in various forms. Today the question on OTN was:

“I am doing a data export in a calc script. I have 12 dimensions in the cube. The output file is coming out fine except for one issue. The file does not write the name of the period member for which the data is being extracted. It writes the members of 11 dimensions but does not write the period.

If I fix on 1 month it will write a total of 12 columns 11 dimensions + 1 data column.
What I want is 13 columns (12 dimensions + 1 data column)

Am I missing something or is this how it is supposed to be?

Period is a dense dimension, The following are my export options:


{DataExportDynamicCalc On;
DataExportDecimal 12;
DataExportRelationalFile On;
DataExportOverwriteFile ON;};

If you want to see the thread on OTN, here it is:
My answer: the way DataExport works is to take all of the dimensions except one dense dimension and make them row members; the final dense dimension, just like in a grid retrieval, becomes the column dimension. In the case of a DataExport, the columns do not have a header.

In the poster’s case, Periods was being set as the columns, but that is not necessarily always going to be the case. As one response suggested, the poster could add:

DataExportColHeader Period;

That would ensure that Period is used as the columns, but it would not give the poster what he wanted. In a 12-dimension cube, we will only get 11 row dimensions no matter what. The only way I know to get what the poster wants is to add a dense dimension to the cube. This dimension could be a single-member dimension (just the dimension name). Since it is a single member, it would not take any additional space or increase calculation or retrieval times. We could then set that dimension to be the column dimension, and we would get all of our “real” dimensions as row members. If we can’t alter the existing cube, we could clone it, add the dimension to the cloned cube, export (hopefully) only level 0 data, load it into the new cube, and then use DataExport there to get the format we need. Granted, that is extra work, but sometimes you do what you have to.

As a side note, you can make that new single-member dimension useful. Call it something like Last_Update, and in your batch process, set its alias to the date/time the cube was last updated.
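Putting the pieces together, a sketch of the calc script against the modified cube might look like this. The dimension name Last_Update, the FIX member, and the file path are hypothetical:

```
SET DATAEXPORTOPTIONS
{
DataExportLevel "LEVEL0";
DataExportColHeader Last_Update;  /* force the single-member dense dimension into the columns */
DataExportOverwriteFile ON;
};
FIX ("Actual")
DATAEXPORT "File" "," "c:\temp\alldims.txt";
ENDFIX;
```

Since Last_Update has only one member, every real dimension lands in the rows, and the single data column carries no header.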

Another possible solution would be to export all 12 periods every time, so you always know what the columns are. If we really only need certain periods, we could DataExport to a relational table, write the period substitution variable to a table as well, and join to get the periods we want.

I’ve seen similar questions about ordering the output and ignoring certain columns. Neither can be done. DataExport is fast because it grabs the blocks and outputs them. It does this in the order the dimensions exist in the outline (sparse first, then dense), just like a full export. So be careful: if you switch the order of dimensions, or change something from dense to sparse or sparse to dense, the order in your DataExport will change. And because it is pulling whole blocks, it has to write all dimensions.

Monday, April 4, 2016

Undocumented SQL Data Load Rule Change

Recently I had a client do an in-place upgrade of Essbase. The only thing upgraded was the EPM stack; the server and SQL Server instance were not changed. Everything seemed to go smoothly except for one thing. We had a process where data is extracted from Essbase into SQL Server, some data manipulation is done, and the data is loaded back to Essbase. This worked great before the upgrade, but afterward the numbers in my SQL Server table didn’t match what was being loaded into Essbase. Hmmm. What do I mean?

Well, I created this test sample for Oracle Support. First I gave them a table definition:

CREATE TABLE [dbo].[test](
[Process] [nvarchar](50) NULL,
[Organization] [nvarchar](50) NULL,
[Produt] [varchar](50) NOT NULL,
[F_Code] [varchar](50) NOT NULL,
[Accounts] [varchar](50) NOT NULL,
[amt] [float] NOT NULL
)


Next, SQL to populate the table with some test values:

insert into Test
insert into Test
insert into Test
insert into Test
insert into Test
insert into Test
insert into Test
When I retrieved from the table, I showed them that I got the following results:


Notice the last column, amt, matches what was loaded. Next I created a load rule and put in the following SQL statement:

Select * From test

Since this was just a test, I didn’t bother to explicitly list the column names, but I usually do and recommend doing that.

I retrieved the data and got:


I don’t know about you, but to me 1467149.93 does not match 1467149.875. If my math is correct, it is off by .055. This would typically be no big deal, but these numbers were used in allocations and put the results out of balance by anywhere from 1 to 30 cents per allocation, and the differences added up across the enterprise.

The response I got from Oracle surprised me. They told me it was not Essbase but the way SQL Server handled floating-point conversions. Huh? SQL Server had not changed in my environment, and this worked before. I wasn’t buying their response and rejected it completely. So they went back to Development. I kept getting stalled with similar answers about it not being Essbase but SQL Server, and they kept pointing me to web links talking about floating-point math:

https://msdn.microsoft.com/en-us/library/ms173773(v=sql.90).aspx https://msdn.microsoft.com/en-us/library/ms173773(v=sql.100).aspx https://msdn.microsoft.com/en-us/library/ms173773(v=sql.110).aspx.

OK, I get it. I understand floating-point math and how it is an approximation, but when SQL Server shows me the correct answer and Essbase load rules don’t, how can it be a SQL Server issue? I used an analogy with Support: I’ve looked at a painting for years and it was clear as crystal. When I got a new pair of glasses, the painting got fuzzy, so that means the fault is in the painting, and not perhaps an incorrect new prescription?

After literally months of going back and forth, I finally got an answer. I didn’t like it, but at least it made sense. I was told:

“A change was made in PS4 that explains the difference between the versions. Previously, all values were binded to SQL_C_CHAR type and the drivers were responsible of doing the conversion while displaying the values. In PS4 the values get binded according to their type (i.e. float is getting binded to SQL_C_FLOAT). Therefore, the value is converted accordingly by Essbase and not by the driver. This behavior is considered correct.
You can see that the value is indeed stored correctly using a workaround while creating the rule file: instead of selecting the amt column as is, use str(amt, 25, 2).
The syntax is: STR ( float_expression [ , length [ , decimal ] ] ).”
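The math backs this explanation up: SQL Server's float is a 64-bit double, but binding it to SQL_C_FLOAT squeezes it through a 32-bit float, and the nearest 32-bit value to 1467149.93 is exactly 1467149.875. A quick sketch in Python (my own illustration, not from the support note):

```python
import struct

value = 1467149.93  # stored in SQL Server's float type as a 64-bit double

# Round-trip through a 32-bit float, mimicking an SQL_C_FLOAT binding
as_float32 = struct.unpack("f", struct.pack("f", value))[0]

print(as_float32)          # 1467149.875
print(value - as_float32)  # approximately 0.055
```

That .055 is exactly the discrepancy above: in the range around 1.4 million, consecutive 32-bit floats are 0.125 apart, so the cents simply cannot survive the conversion.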

So now I have to go back and re-code all the SQL to convert the data type. It would have been nice if Oracle had documented the change or, even better, provided an option to use the old behavior. But at least I have an answer, and so do you, in case a Google search brought you here with the same problem.
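For the record, applying the workaround to the test table above would look something like the following. The column list is assumed from the earlier CREATE TABLE, so adjust it to your own schema:

```sql
Select [Process], [Organization], [Produt], [F_Code], [Accounts],
       str([amt], 25, 2) as amt
From [dbo].[test]
```

STR returns a character string, which the load rule then parses back into a number, sidestepping the 32-bit float binding entirely.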

Wednesday, March 16, 2016

Don't be a Glenn

Sometimes old errors are caused by different reasons than what originally caused them. Case in point: I was working on a batch and MaxL script to automate an Essbase Studio deploy command. I ran the script and got an error message, “Essmsh exe has stopped working”, and the batch file crashed. Thinking I did nothing wrong, I googled the error message and got multiple references to a document stating this was a bug and to use the 32-bit version of MaxL instead of the 64-bit version.

This would have been fine except for the fact that I had other deploy commands working fine using the same Essmsh command. In addition, the document was for an earlier version, and the bug had been fixed in the version I was running. I went around and around, changing dimension order, as I remembered I had issues in the past with the Accounts dimension being first. Nothing I did to the script or the outline mattered.

I spent way too much time on this and was getting very frustrated. The process actually did produce a log, which I finally looked at, and found:
Caused by: Cannot end incremental build. Essbase Error(1060053): Outline has errors
\\Outline verification errors:

\\Member Income Stmt Calc has no value source.  Aggregate storage outlines require non level-0 members in accounts dimension to be either non-stored members or have internal/external formula.

\\Member REPORTING_STATS has no value source.  Aggregate storage outlines require non level-0 members in accounts dimension to be either non-stored members or have internal/external formula.

\\Member MISC has no value source.  Aggregate storage outlines require non level-0 members in accounts dimension to be either non-stored members or have internal/external formula.

Hmm. So I went back into the Essbase model properties, and lo and behold, the setting for data storage was set to use existing. I do have a column in my data source for storage type, which I had forgotten to set as an external source. I reran and it worked like a charm. This made sense, as I had set the Accounts dimension to rebuild; the accounts would have all been deleted and not re-added properly.

The moral of the story is twofold. First, when you have an issue, don’t assume you did everything right the first time (I didn’t). Second, don’t assume that what you read on Google for older versions is going to solve your issue.