Why I created a blog

It's been many years since I first created this blog. It has remained true to Essbase and related information over that time, and hopefully it has answered questions and given you some insight along the way. I will continue to provide my observations and comments on the ever-changing world of EPM, and don't be surprised if the scope of the blog expands to bring in other Hyperion topics.


Monday, July 13, 2009

3rd Addendum to Data Export

My last blog post talked about columns being dropped when exporting to relational using the DATAEXPORT command.
I found the answer looking through the knowledgebase. It turns out you have to set
DEXPSQLROWSIZE to 1 to get it to work. I don't know if I like this answer, as it basically requires you to turn off bulk insertion, but at least it works and I can continue with my development without having to add JExport to the mix.
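
For reference, the setting goes in essbase.cfg. A minimal sketch, assuming an application named Sample with a database named Basic (you can also omit the app/db names to apply it server-wide); the server needs a restart to pick it up:

; essbase.cfg - force row-by-row inserts for relational DATAEXPORT
DEXPSQLROWSIZE Sample Basic 1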

Friday, July 10, 2009

2nd Addendum to Data Export

OK, I'm starting to see why people don't like Data Export. I'm trying to export a particular set of data to a relational table, and for some reason one of the dimensions is not showing up in the table. If I change the export to write to a file, the column is there. Interestingly, the order of the row members is different in the flat file than in the relational export. What appears to be happening is that it is taking one of my two dense dimensions, which shows up as the last row member before my data values in the flat file, and moving it onto (or perhaps overlaying) what is my second row member in the flat file. I know that is clear as mud, so to show you an example, a row of my flat file export looks like:

Actual,r1,aaa1,bbb1,ccc1,ddd1,eee1,Tier1,123456.789

In the relational it looks like:
Actual,Tier1,aaa1,bbb1,ccc1,ddd1,eee1,blank,123456.789

So it is shifting or overwriting r1 with Tier1, and where Tier1 should be is blank.

If anyone has figured out a workaround for this other than exporting to a file and loading it into relational, let me know. In the meantime, I'm opening an SR with Oracle to see if they have a fix.

Tuesday, July 7, 2009

Addendum to Data Export post

As an addendum to my post on Data Export, John Goodwin reminded me of a workaround I had to do. There is a problem when you are exporting to a relational table and your columns dimension does not have values in the trailing members (for example, you have Jan-Dec in the columns and only have data in Jan-Mar): the data export will fail because the record columns don't match the table columns. In order to get around it, I set:
DataExportDynamicCalc ON
DataExportLevel ALL

Then in my fix statement, I had to make sure that the last column of the load would always have data. Testing it by sending it to a flat file confirmed that if the last column had data, the intermediate columns would have something as well. In Sample Basic, if Year is your column dimension, in your fix statement you could fix on Jan:Dec and Year. Since Year is a dynamic calc member, if any month has data, so will Year. (Note, this could be a problem if you have time balance accounts without skip missing turned on.) Because you need to set all levels, you also have to make sure you fix on the level zero members of your other dimensions as well; I set the other dimensions to @RELATIVE(dimension name, 0). Since the last column had data, it would load to the end. A sketch of the whole thing follows below.
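
Here is that minimal sketch against Sample Basic. The DSN, table name, and credentials are made up for illustration; the point is that including the dynamic calc Year member alongside Jan:Dec guarantees the last column is always populated:

/* export all levels, letting dynamic calc members come along */
SET DATAEXPORTOPTIONS
{
DataExportLevel "ALL";
DataExportDynamicCalc ON;
};

/* fix on level zero of the other dimensions, plus Jan:Dec and Year */
FIX ("Actual", @RELATIVE("Product", 0), @RELATIVE("Market", 0),
@RELATIVE("Measures", 0), Jan:Dec, "Year")
DATAEXPORT "DSN" "myDSN" "SALESFACT" "myUser" "myPassword";
ENDFIX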

In talking with others, they have had problems with Data Export. So far, I've not found anything I could not work around.

Monday, July 6, 2009

Dataexport is great

I've had the need to work with the newish DATAEXPORT command recently (I did this in 11.1.1.2) and thought I would share some things I've found with it and with the SQL interface in load rules. I think you will find my musings interesting.

First, I was using data export to export data to a flat file to act as an audit report for some users. It worked like a charm. Some of the things I found: if you specify a level zero export, no matter what you put in your fix statement it will only look at level zero blocks. Using DataExportDynamicCalc ON allowed me to export dynamically calculated members as well. For my 20K rows it did not seem to slow the export down, but I don't know the impact on a big data set. I could also specify DataExportDimHeader ON to get column names. Using DataExportColHeader "Dense dimension name" I could specify the dense dimension I wanted as the column. It would have been nice if I could put a single member of a sparse dimension there, but I understand why I can't.
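
As a sketch, an audit-style flat file export along those lines might look like this (the members in the fix and the file path are assumptions for illustration):

/* level zero export to a comma-delimited file with header rows */
SET DATAEXPORTOPTIONS
{
DataExportLevel "LEVEL0";
DataExportDynamicCalc ON;
DataExportDimHeader ON;
DataExportColHeader "Measures";
};

FIX ("Actual", "Jan", @RELATIVE("Product", 0), @RELATIVE("Market", 0))
DATAEXPORT "File" "," "c:\exports\audit.txt" "#Mi";
ENDFIX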

Next I needed to back up some of the static information from my cube. Static in that it is user input for my testing and I didn't have an easy source to reload from. I set up a fix statement and used the binfile option (DATAEXPORT "Binfile" "fileName"). It created a file that I could reload to my db. I can see the usefulness of this on a production database where you need to save off something like budget, clear the database, and reload it with source actuals and the budget. It's much easier than the old export and reload, and much quicker. In addition, you can be selective about what you export instead of exporting everything.
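
A minimal sketch of that save-and-restore pattern, with member and file names assumed: export the budget slice to a binary file before the clear, then pull it back in with the companion DATAIMPORTBIN command.

/* save the Budget slice to a binary file before clearing */
FIX ("Budget")
DATAEXPORT "Binfile" "c:\backup\budget.bin";
ENDFIX

/* later, after the clear and the actuals reload */
DATAIMPORTBIN "c:\backup\budget.bin";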

Finally, I needed to populate a SQL database with data from Essbase, modify it, and load it back. Yes, there are some things that SQL can do better than Essbase. In this case, it was to take two disassociated sets of data and merge them together. It needed to join them on the one common dimension member and basically turn the two 3K row tables into about 1.5 million records that get loaded back into Essbase. I set up the ODBC driver with no problem and exported the data into flat files to see their structure. I then created tables that matched the structures. I will say that here is where I had minor difficulty. If the columns don't match exactly, the export fails with little information (just the typical messages in the log that tell you "you screwed up"). I played around with the table, figured out that I had miscounted the columns, fixed it, and it worked fine. I defined the amount columns as float and found that Essbase stuck -2e-15 in the columns that were once #Missing in Essbase. A quick stored procedure and I converted them to nulls.
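
The cleanup was nothing fancy; something along these lines, with the table and column names invented for illustration:

-- replace the value DATAEXPORT wrote for #Missing with real NULLs
UPDATE SALES_FACT
SET AMOUNT = NULL
WHERE AMOUNT = -2e-15;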

Oh, but wait, how could I run the stored procedure? I could run OSQL, but on the instance I was working on the tools were not working right. I could get into SQL Server, but could not run OSQL or BULK INSERT. So, thinking swiftly, I thought of load rules. A load rule is just supposed to take SQL commands, so how could I get it to run a stored procedure? I know I can put statements like a union in a SQL statement, so I tried something like:

Getdate()
Exec myStoredProcedure

I clicked OK/retrieve, entered my id and password, and lo and behold, I got the system date back into my load rule. I checked my tables and the bad values were converted to nulls. Wow, it worked. Who would have thought? I figured I could use the load date to update the alias of a member to set the last time the calculation was run. Another suggestion I had was to use rejection criteria to reject the row and load nothing. I used this technique to run another stored procedure that truncated and populated a table from the tables I loaded, so my next step was to create a load rule and bring the data back in. Everything was done without having to resort to anything but MaxL statements.
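
To picture why this works: my understanding is that the SQL interface wraps whatever you type in the rule's SELECT box, so the statement the server effectively receives is something like the sketch below. On SQL Server the EXEC simply rides along as a second statement in the batch, and the Getdate() gives the load rule one row to retrieve:

-- roughly what the load rule submits (a sketch, not verified syntax)
SELECT Getdate()
EXEC myStoredProcedure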

I've since added a custom defined function that will run the SQL statement directly from the calc script. I got this from Toufic Wakim, the development manager for Smart View, a very nice guy and CDF guru. Some clients don't like the idea of CDFs, so I have my original method available when necessary. If this works, you can get the CDF here



Adding DATAEXPORTENABLEBATCHINSERT TRUE to my config file made the process run faster, as it allows the dataexport to use a batch insert method (when the ODBC driver allows it).
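
That one also goes in essbase.cfg (note that the DEXPSQLROWSIZE 1 workaround described above effectively turns this batching back off):

; essbase.cfg - let DATAEXPORT batch its inserts where the driver supports it
DATAEXPORTENABLEBATCHINSERT TRUE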

As I use Dataexport more, I'll comment on my findings, but I have to say I'm impressed so far. I have asked for enhancements to the command, and they have been added to the enhancement list. I'm interested to see if or when they are implemented. I was very happy when I was told they were added to the list, since it appears that Oracle is open to hearing what people recommend. Some of the things I recommended were:
Add an append option to the file export so you could have multiple fix statements write to the same file
Add a debug option to the SQL export to see the SQL statement generated to make it easier to debug when you have problems
Allow aliases to be exported instead of member names.

Let’s see if these get implemented.