While most of my posts are technical or Hyperion related, I think this one is informational as well, but in a different way. I got this in an email this morning from Alaska Airlines and I think they are trying to tell us something. What do you think?
Why I created a blog
Wednesday, May 1, 2013
Were I Cameron Lackpour, I would call these stupid pet tricks, but since I’m not, I’ll say they are issues I’ve encountered. Luckily I’ve resolved them so perhaps I can save you the pain I went through.
The first issue I encountered was when I tried to use a Custom Defined Function (CDF) that runs a SQL statement or stored procedure from within a calc script. This function was written by Toufic Walkim (thank you) and was given to me a while ago. I’ve used it a few times at different clients, but on older versions of Essbase. In trying to get it to run on 220.127.116.11 I encountered a number of issues.
First, it could not find the correct ODBCJDBC driver. That was resolved by downloading the driver from Microsoft and changing the properties file to point to it (or so I thought). It turns out there are two drivers in the download: ODBCJDBC.DLL and ODBCJDBC4.DLL. After some experimentation, I had put ODBCJDBC.DLL in the UDF directory and got an error that basically said I needed to use ODBCJDBC4.DLL. Adding it to the directory did not solve the issue, even after I removed ODBCJDBC.DLL. So, thinking swiftly (OK, I was pretty slow), I renamed ODBCJDBC4.DLL to ODBCJDBC.DLL. Voilà, now it recognized the driver and knew it was the correct one.
My next issue was that once connected, even when trying to run a simple SQL delete statement, the calc script would hang and I would have to kill the process. Thanks to help from Robb Salzmann in narrowing the issue down, I was able to Google a few things and found a bug published by Sun that basically says the version of the JDK installed with 18.104.22.168 will hang on connections. I found a later version of the JDK (jdk160_43 to be exact), installed it in the Oracle\Middleware directory, and pointed the JVMMODULELOCATION parameter in Essbase.cfg to it. Now my life is good and the CDF works fine. I did need to remember to bounce Essbase, and it took me a while to remember how to get Essbase to run in the foreground so I could see the messages in the application window (but that is another story).
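If you have not touched that setting before, the line in Essbase.cfg looks something like the sketch below. The path is just an illustration of where my JDK ended up; point it at the jvm library of whichever JDK you actually install, and remember to bounce Essbase afterward for it to take effect:

JVMMODULELOCATION C:\Oracle\Middleware\jdk160_43\jre\bin\server\jvm.dll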
My next opportunity was with Essbase Studio (22.214.171.124). I was trying to build dimensions and got the error “Cannot get async process state”. I started investigating and found the errors were all related to my Entity dimension. If I built without that dimension, everything worked fine.
I should mention I’m not the only one working on the model. My client has SQL developers working to create views and add content. So I looked further and did a refresh of the Entity view. Imagine my surprise when I found that columns had been removed from the view, columns I was using in one of my alias tables. The Studio table refresh would not let me update the view since it knew something it was trying to remove was still being used. I tried having the column added back to the view, but still could not get the refresh working. So I went through my Essbase model properties and removed the alias table the column was in, then went into the alias table itself and removed the column from there as well. I was now able to refresh the view with the column changes. Moral of the story: if you get this message, check whether your data source changed.
I’ve been reading the blogs and readmes for 126.96.36.199 and like the new features added. While Essbase Studio really only got bug fixes and Essbase itself got just a few changes, I like what I see and can’t wait to try ASO Planning.
Wednesday, February 13, 2013
Typically I don’t blog about the company I work for, devoting my time to more technical articles, but I decided to deviate a little today to talk about a couple of things.
Second, interRel is hiring. If you have experience in the Hyperion line of products or OBIEE and are looking for a cool consulting firm to work for, we could be interested in you. interRel is a great company to work for, and we believe in consultant growth and training as much as we believe in customer service. I’m not going to give you the marketing pitch on why you should join interRel; you probably already know. If you have an interest in talking to us, email email@example.com
Finally, some news not associated with interRel in any way. If you plan to be in New Zealand or Australia next month, why not attend ODTUG’s Seriously Practical Conference in Melbourne on March 21st and 22nd, or the NZOUG conference on March 18th and 19th at Te Papa, Wellington (sounds like a kind of steak to me)? Sounds like a great way to get a paid vacation: go to the conference and see the countryside. This is truly not associated with interRel, as we will not be speaking at either conference, but my friend, mmip, Cameron Lackpour put together the EPM agenda for both conferences and will be speaking there. If you are there, tell him Glenn said to say hi. That is the secret phrase and he might have a present for you (not really, it would just be fun).
Friday, February 8, 2013
I have been speaking at a lot of conferences and client events over the last year touting the great new features of the new version of Smart View. As a matter of fact, I’m repeating the session on Feb 16th and 18th in interRel webinars (contact firstname.lastname@example.org for more info). The questions I often get are “What do I need to upgrade to get the new functionality?” and “If I just upgrade Smart View, what functionality do I still get?”
Before I answer those questions, I’ll first answer “What version of Smart View should you upgrade to?” Since Smart View is pretty backward compatible, I would upgrade to the latest version, 188.8.131.52.310. This is a patch that was released last Monday and includes connectivity to OBIEE, or as many Oracle people now call it, BIFS (Business Intelligence Foundation Suite). Note: you must be on OBIEE 184.108.40.206 for the connectivity to work. There are a number of other enhancements and bug fixes in this version.
Next, what do you have to upgrade to get full connectivity? Well, for Smart View itself, you should be on 220.127.116.11.102 (or higher), as well as APS and Essbase 18.104.22.168.102 or higher. If you are going the patch route on 22.214.171.124, then I would recommend you get the Smart View patch 126.96.36.199.103 and the APS and Essbase patches 188.8.131.52.104. Of course, you are even better off if you upgrade to 184.108.40.206.310.
Finally, if you just upgrade Smart View without upgrading APS or Essbase, the functionality you can expect to get (thank you, Smart View development team, for this list) is:
- Ribbon specific to Provider
- Re-designed Options Dialog
- Smart View Panel
- Sheet level Options
- Retain Excel Formatting
- Improved Function Builder
- Fix Links (removes path references in cells with Smart View functions)
  Before: ='D:\Oracle\Smartview\bin\hstbar.xla'!HSGETVALUE(...
  After: =HSGETVALUE(...
- Performance Improvements
So while you won’t get cool things like multiple grids on a sheet or showing member name and alias together, you get a few perks. The sheet-level options are one thing that was missing in prior versions of Smart View that I’m glad got put in.
If you upgrade to at least 220.127.116.11, you also get an awesome new Smart Query tool that enables you to create extremely complex queries returning sets of members/numbers that can be combined and saved. It is a fantastic feature that is not getting the press it deserves. In a future post, I’ll go through a detailed example so you can see its power.
Monday, November 12, 2012
Today I’ve got a story of a client with a problem. It will be a short story but one that is more common than you might think. When we first started our engagement with the client, we recommended a physical server and dedicated disk. Much to our dismay they decided they could get the performance they needed using VMs and SAN for storage.
Their system does EXTENSIVE allocations and, needless to say, they were not getting acceptable performance. For a long time we argued their environment was part of the problem. They kept showing us stats indicating there was no bottleneck on either the VM or the SAN. Their calculation times ranged from 8 to 27 hours. It should be noted that between calculations the database was reset to the same initial state and the same data files were rerun. Yes, there were minor changes to one or two drivers, but nothing to make that big a difference.
Finally, a wise soul in IT at the client decided to try bringing up a parallel environment with a physical server and dedicated disk. Performance improved and they were getting more consistent times, but still longer than they liked. He went one step further and got a loan of some solid state drives. With them, the calculation time went down to 5-6 hours (depending on data volume), and it stayed consistently there. With proof of the improvement, they have implemented solid state drives in production and maintain the 5-6 hour times.
We have debugged SAN issues with multiple clients, and I have come to dislike SANs immensely. On the other hand, while I had trepidation about solid state drives in the past, I am now a convert and think they can provide a huge performance boost for many applications, especially if the app does a lot of reads, writes, and calculations.
Tuesday, October 9, 2012
I’m back from Oracle Open World, and while I typically blog a lot during the conference, this year I sat back and listened more. It was a long week and I’m glad it is over. The biggest announcement for EPM was Planning on the public cloud. In Q1 2013 Oracle will be previewing (can you say beta?) its offering. There are still a lot of things being worked out, but it is an interesting proposition. I think this will also allow Oracle to fill a need that it has overlooked: smaller companies can take advantage of a cloud implementation, as it will allow them to implement Planning with less hardware and fewer IT resources. It should also allow for more power in the planning process. It should be noted this will be Planning without EPMA, as EPMA requires a Windows server and the cloud will be Linux based. There were some other changes in the works, but I’m under an NDA and am not allowed to talk about them. Suffice it to say, I look forward to seeing them.
As part of my continuing series on Studio tips and tricks, I thought I would throw a few drill through tidbits your way.
First, let’s talk about how Studio (18.104.22.168 at least) handles recursive hierarchies and returning data. The easy part is if you are at level zero: Studio returns the member name. If, however, you are at an upper level and have allowed drill-through from that level, you get a list of the level zero members under the member you selected. What does that mean in English? Think of the Periods dimension. If I select Q1, Studio passes Jan, Feb and Mar to the query. This is why you need an IN clause to process them.
Where Periods in ($$Periods-Value$$)
If you look at the documentation, it says you need to use the format ($$Periods-Column$$) in ($$Periods-Value$$), but I’ve found I can just use the values if I’m not joining to the dimension table. Something else I found is that the variable names are case sensitive. It took me a long time to debug an “Invalid character in SQL” error when I typed PEriod instead of Period.
Also associated with how the list is generated, and exposed by Tim Tow, is that the IN clause is limited to 1,000 members. If your level is going to return more than 1,000 members, you get an error: something about line 1792, which is the generic “oh crap, I have no clue what is happening” message.
Next, as I created custom SQL, I was VERY careful that the list of items to be returned matched the report contents list. As it turns out, I was overly cautious: custom SQL ignores this list completely and returns whatever columns are in the SQL itself, as the sketch below illustrates.
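For illustration only, here is a rough sketch of what such a custom drill-through query might look like, using the shortcut form of the tokens from above. The GL_Detail table and its columns are made-up names, and the dimension names inside the $$...$$ tokens have to match your hierarchy names exactly, case and all:

select gl.JE_Number, gl.Entity, gl.Account, gl.Period, gl.Amount
from GL_Detail gl
where gl.Period in ($$Periods-Value$$)
and gl.Entity in ($$Entity-Value$$)

The columns your users see in the drill-through report come from that select list, not from the report contents list defined in Studio.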
Finally, I ran into a little oddity with copying drill-through reports. I have two reports, one named “JE_Detail” and another named “JE (GL Entity, GL Account)”. I can copy and paste the first one into the same folder with no issues, but the second one fails with an error.
To work around this, I used the export function in 22.214.171.124. After exporting the report, I edited it in Notepad, changed all occurrences of the name to “JE Glenn”, and reimported it. That worked just peachy. I’ve used the export/import functionality on a number of occasions now and it works really nicely. In this case, it knew this was a drill-through report and allowed me to turn off associating the report with cube schemas and models as I exported it. Very nice, as I wanted to modify it before activating it for users. The other times I’ve used export and import, it worked flawlessly for individual objects and for a full export.
In the next few weeks I’ll be talking at interRel EPM roadshows in Calgary, Phoenix and L.A. These events are open to Oracle customers (sorry, consultants and competitors who read my blog). They are great events, and if you are interested in more info, contact Danielle White at email@example.com.
I’ll also be speaking at Connection Point in November and the Michigan User group meeting. It will be a busy couple of months for me.
Tuesday, September 18, 2012
Today on OTN there was a thread asking about wildcards in MaxL import statements (OTN Thread). I didn’t realize it was possible, and up until 126.96.36.199 it was not. John Goodwin researched the new features guide and found the following.
From the 188.8.131.52 Essbase new features readme:
"Block Storage Parallel Data Load
Parallel data load refers to the concurrent loading of multiple data files into an Essbase database. When working with large data sets (for example, a set of ten 2 GB files), loading the data sources concurrently enables you to fully utilize the CPU resources and I/O channels of modern servers with multiple processors and high-performance storage subsystems.
Parallel data load uses multiple parallel pipelines on the server side, and multiple threads on the client-side, to load multiple data files concurrently, thus enabling data loads to be truly optimized to the capabilities of modern servers."
In the tech reference (for BSO cubes only) is the ability to use wildcards. For the import statement:
Specify whether the data import file(s) are local or on the server, and specify the type of import file(s).
To import from multiple files in parallel, use the wildcard characters * and/or ? in the IMP-FILE name so that all intended import files are matched.
* substitutes any number of characters, and can be used anywhere in the pattern. For example, day*.txt matches an entire set of import files ranging from day1.txt - day9.txt.
? substitutes one occurrence of any character, and can be used anywhere in the pattern. For example, 0?-*-2011.txt matches data source files named by date, for the single-digit months (Jan to Sept).
import database Sample.Basic
data from local data_file '/nfshome/data/foo*.txt'
using local rules_file '/nfshome/data/foo.rul'
on error abort;
using max_threads INTEGER
Optionally specify a maximum number of threads to use, if this is a parallel data load.
import database Sample.Basic using max_threads 12
data from data_file '/nfshome/data/foo*.txt'
using rules_file '/nfshome/data/foo.rul'
on error write to '/nfshome/error/foo.err';
If this clause is omitted for a parallel data load, Essbase uses a number of pipelines equal to the lesser of number of files, or half the number of CPU cores.
While I have not had a chance to try this, it would have been very useful to me in the past. I’m guessing it was added for Exalytics efficiency, but we certainly reap the benefits of it.
This is not really a MaxL tip but more of a Windows tip that has been around for a while, though most people don’t know it. When we put a Windows path in a MaxL script (like the import above), most people use something like c:\\datafile\\Sample\\mydata.txt. We have to double the backslashes because MaxL uses the backslash as an escape character. Did you know you can use forward slashes instead, just like on Unix?
c:/datafiles/Sample/mydata.txt works just fine. It can make your life easier.
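For example, here is a minimal sketch of the earlier import written with forward slashes (the file names are just placeholders):

import database Sample.Basic
data from local data_file 'c:/datafiles/Sample/mydata.txt'
using local rules_file 'c:/datafiles/Sample/mydata.rul'
on error abort;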