Why I created a blog

It's been many years since I first created this blog. It has remained true to Essbase and related information over those years. Hopefully it has answered questions and given you insight along the way. I will continue to provide my observations and comments on the ever-changing world of EPM. Don't be surprised if the scope of the blog changes and brings in other Hyperion topics.


Monday, November 12, 2012

Solid State to the rescue

Today I’ve got a story of a client with a problem. It will be a short story but one that is more common than you might think. When we first started our engagement with the client, we recommended a physical server and dedicated disk. Much to our dismay they decided they could get the performance they needed using VMs and SAN for storage.

Their system does EXTENSIVE allocations and, needless to say, they were not getting acceptable performance. For a long time we argued that their environment was part of the problem. They kept showing us stats that there was no bottleneck on either the VM or the SAN. Their calculation times ranged from 8 hours to 27 hours. It should be noted that between calculations the database was set to the same initial state and the same data files were being rerun. Yes, there were minor changes to one or two drivers, but nothing to make that big a difference.

Finally, a wise soul in IT at the client decided to try bringing up a parallel environment with a physical server and dedicated disk. Performance improved and they were getting more consistent times, but still longer than they liked. He went one step further and got a loan of some solid state drives. With them, the calculation time went down to 5-6 hours (depending on data volume), and it was consistently that. With proof of the improvement, they have implemented solid state drives in production and maintain the 5-6 hour time.

We have debugged SAN issues with multiple clients, and I have come to dislike SANs immensely. On the other hand, while I had trepidation in the past about solid state drives, I am a convert and think they can provide a huge performance boost for many applications, especially if the app does read/write and calculations.

Tuesday, October 9, 2012

Studio Drill Through Tips

I’m back from Oracle OpenWorld, and while I typically blog a lot during the conference, this year I sat back and listened more. It was a long week and I’m glad it is over. The biggest announcement for EPM was Planning on the public cloud. In Q1 2013 Oracle will be previewing (can you say beta?) their offering. There are still a lot of things being worked out, but it is an interesting proposition. I think this will also allow Oracle to fill a need they have overlooked: smaller companies can take advantage of a cloud implementation, as it will allow them to implement Planning with less hardware and fewer IT resources. It should also allow for more power in the planning process. It should be noted this will be Planning without EPMA, as EPMA requires a Windows server and the cloud will be Linux based. There were some other changes in the works, but I’m under an NDA and am not allowed to talk about them. Suffice it to say, I look forward to seeing them.

As part of my continuing series on Studio tips and tricks, I thought I would throw a few drill through tidbits your way.

First, let’s talk about how Studio (11.1.2.2 at least) handles recursive hierarchies and returning data. The easy part is if you are at level zero: Studio returns the member name. If, however, you are at an upper level and have allowed drill through from that level, you get a list of the level zero members under the member you selected. What is that in English? Think of the Periods dimension. If I select Q1, Studio returns Jan, Feb and Mar to the query. This is why you need an in clause to process them.

Where Periods in ($$Periods-Value$$)

If you look at the documentation, it says you need to use the format ($$Periods-Column$$) in ($$Periods-Value$$), but I’ve found I can just use the values if I’m not joining to the dimension table. Something associated that I found: the variable names are case sensitive. It took me a long time to debug an “Invalid character in SQL” error when I typed PEriod instead of Period.
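To put the pieces together, here is a sketch of what a custom drill-through query might look like (the table and column names are hypothetical, not from an actual report):

```sql
-- Studio expands $$Periods-Value$$ to the level zero members under the
-- member the user drilled from, e.g. 'Jan','Feb','Mar' when drilling from Q1.
-- The variable name must match the dimension name exactly (case sensitive).
SELECT d.Account, d.Periods, d.Amount
FROM   GL_DETAIL d
WHERE  d.Periods IN ($$Periods-Value$$)
```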

Also associated with how the list is generated (and exposed by Tim Tow) is that the in clause is limited to 1000 members. If your level is going to return more than 1000 members, you get an error: something about line 1792, which is the generic "oh crap, I have no clue what is happening" message.

Next, as I’ve created custom SQL, I was VERY careful that the list of items to be returned matched the report contents list:

image

As it turns out, I was overly cautious. Custom SQL ignores this list completely and returns whatever columns the SQL itself specifies.

Finally, I ran into a little oddity with copying drill through reports. I have two reports, one named “JE_Detail” and another named “JE (GL Entity, GL Account)”. I can copy and paste the first one into the same folder with no issues, but the second one fails with the error:

image

To work around this, I used the export function in 11.1.2.2. After exporting the report, I edited it in Notepad, changed all occurrences of the name to “JE Glenn” and reimported it. That worked just peachy. I’ve used the export/import functionality a number of times now and it works really nicely. In this case, it knew this was a drill through report and allowed me to turn off associating the report with cube schemas and models as I exported it. Very nice, as I wanted to modify it before activating it for users. The other times I’ve used export and import, it worked flawlessly for individual objects or for a full export.

In the next few weeks I’ll be talking at interRel EPM roadshows in Calgary, Phoenix and L.A. This event is open to Oracle customers (sorry, consultants and competitors that read my blog). These are great events, and if you are interested in getting more info, contact Danielle White at dwhite@interrel.com.

I’ll also be speaking at Connection Point in November and the Michigan User group meeting.  It will be a busy couple of months for me.

Tuesday, September 18, 2012

A little surprise in 11.1.2.2 data loading

Today on OTN there was a thread asking about wildcards in MaxL import statements (OTN Thread). I didn’t realize it was possible, and up until 11.1.2.2 it was not. John Goodwin researched the new features guide and found:

From the 11.1.2.2 Essbase New Features readme:
"Block Storage Parallel Data Load
Parallel data load refers to the concurrent loading of multiple data files into an Essbase database. When working with large data sets (for example, a set of ten 2 GB files), loading the data sources concurrently enables you to fully utilize the CPU resources and I/O channels of modern servers with multiple processors and high-performance storage subsystems.
Parallel data load uses multiple parallel pipelines on the server side, and multiple threads on the client-side, to load multiple data files concurrently, thus enabling data loads to be truly optimized to the capabilities of modern servers."

In the Tech Reference (for BSO cubes only) is the ability to use wildcards.

For the import statement

Specify whether the data import file(s) are local or on the server, and specify the type of import file(s).

To import from multiple files in parallel, use the wildcard characters * and/or ? in the IMP-FILE name so that all intended import files are matched.

  • * substitutes any number of characters, and can be used anywhere in the pattern. For example, day*.txt matches an entire set of import files ranging from day1.txt - day9.txt.

  • ? substitutes one occurrence of any character, and can be used anywhere in the pattern. For example, 0?-*-2011.txt matches data source files named by date, for the single-digit months (Jan to Sept).

Example:

import database Sample.Basic
data from local data_file '/nfshome/data/foo*.txt'
using local rules_file '/nfshome/data/foo.rul'
on error abort;


and for parallel loads there is an optional grammar element:

using max_threads INTEGER

Optionally specify a maximum number of threads to use, if this is a parallel data load.

Example:
import database Sample.Basic using max_threads 12
data from data_file '/nfshome/data/foo*.txt'
using rules_file '/nfshome/data/foo.rul'
on error write to 'nfshome/error/foo.err';


If this clause is omitted for a parallel data load, Essbase uses a number of pipelines equal to the lesser of the number of files, or half the number of CPU cores.

While I have not had a chance to try this, it would have been very useful to me in the past. I’m guessing it was added for Exalytics efficiency, but we certainly reap the benefits of it.

Another tidbit

This is not really a MaxL tip but more of a Windows tip that has been around, though most don't know it. For a path in a MaxL script or Windows batch file (like the import above), most people use something like c:\\datafile\\Sample\\mydata.txt. We have to use the doubled backslashes because MaxL uses the backslash as an escape character. Did you know you can use forward slashes instead, just like on Unix?

c:/datafiles/Sample/mydata.txt. It can make your life easier.
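For example, the earlier import would run fine on Windows written like this (the file locations are made up for the sketch):

```
import database Sample.Basic
data from local data_file 'c:/datafiles/Sample/mydata.txt'
using local rules_file 'c:/datafiles/Sample/foo.rul'
on error abort;
```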

Wednesday, August 29, 2012

An interesting SQL Interface issue

It seems to be my time to find bugs. First, a little background for you. I’m working on Essbase and Studio 11.1.2.2, on a project using Essbase Studio to talk to EBS to pull GL data and drill through data. We have a view that uses a function that takes a date off of a table to do a currency conversion, and the query we are running pulls from that view. When we ran the query in Toad, the data we expected was returned. When it was run from an Essbase Studio data load SQL, the wrong numbers were returned for currency-converted values, but not for the local (functional) currency values.

There were a number of things we tried to narrow down the problem. First, we cast the output as a Varchar2 to make sure there was no issue with numbers (this is a common problem, and I actually had to do that on a drill through report to get the numbers right). I tried pulling the unconverted amount, and it was correct. I tried pulling the currency rate, and it was coming back wrong. Hmmm, a starting point to the problem. Looking at the function, it converted the amount based on the transaction date that was being passed to it. If the date was invalid or missing, it went into an exception routine that picked up the prior month's rate. This was the rate we were getting back, so it was evident that we were going into the exception routine.

To test further, I narrowed down the SQL with a where clause to process a single row of data and ran it both in Toad and from a load rule. Again, differences. We added the date field to both queries and noticed the date was returned in different formats from Toad and from Essbase. Upon further investigation, it seemed the format of the date being passed to the function changed when it was called from Essbase, even though the query itself did not reference the column at all; it was only used inside the view. The date processing within Oracle was getting the wrong date format.

We worked around it by modifying the date passed to the query to be in a specific date format, but followed up with Oracle support via an SR. Oracle was able to replicate the issue and has created a bug for it (of course the answer was that it will be fixed in a future version). I’ve not tried this on any other version, so I don’t know how long the bug has been around. Oracle certainly didn’t know about it. I’m posting this in the hope it might save one or more of you the time I spent trying to debug this issue.
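The workaround boiled down to making the date format explicit rather than relying on whatever the calling session's default happened to be. Something along these lines (the function, column and format mask here are illustrative, not the client's actual code):

```sql
-- Force an explicit format mask so the conversion function receives the
-- same date value whether the caller is Toad or the Essbase SQL interface.
SELECT curr_convert(TO_DATE(TO_CHAR(t.trans_date, 'YYYY-MM-DD'), 'YYYY-MM-DD'),
                    t.entered_amount)
FROM   gl_trans t;
```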

As a side note, I got an email today from ODTUG reminding everyone that session submissions for KScope13 in New Orleans are due by 10/15/12. It seems like the conference just ended, and already I have to think of new content for next year. If you have interesting information to share, submit an abstract yourself; if selected, you get a free pass to the conference. If you don’t want to submit one yourself but have an interesting idea for a session or a specific topic you want discussed, let me know and I’ll consider submitting it myself (or get someone who knows about the topic to). Why do I bring this up in this post? Because I’m thinking about doing a tips and tricks for Essbase Studio presentation and want your feedback on whether you would like a session like that. If you want tips for getting an abstract accepted, have a look at the ODTUG Content page.

I think my next post will be on recursive Drill through tables and interesting things I’ve discovered working with them. Stay tuned.

 

Tuesday, August 28, 2012

Essbase Studio Recursive build with Attributes

I’ve used Essbase Studio a little, as many of you might know, and I like the tool, but once in a while I get perplexed at how to do something I think should be simple. The other day I had a rush requirement to build an attribute dimension based on a table that used a recursive hierarchy, in an ASO cube I was developing for a client. Pretty simple, I thought: I’ll just add it in like I normally do and it will just work. I should know better. I spent a day and a half trying to figure out what was going on. To start out, I had defined my hierarchy as:

Parent
    Child
ISICP
    Child

and in 11.1.2.2 it gave me the very nice warning that it could affect existing Essbase models. I bravely went on, and after doing a resync of the hierarchy (god I love that the Studio team added that), I went into my Essbase model and added the attribute dimension. I then tried to deploy the cube. What I got back was the error:

Failed to deploy Essbase cube.

Caused by: Cannot end incremental build. Essbase Error(1060053): Outline has errors

\\Outline verification errors:

\\Attribute Dimension ISISP is not associated to the base dimension

\\Record #2 - Member name (BalanceSheet) already used

BalanceSheet Assets

\\Record #4 - Member name (CurAsset) already used

CurAsset CashMktSec

\\Record #6 - Member name (TotCash) already used

TotCash CashPetty

\\Record #7 - Member name (TotCash) already used

TotCash CashatBank

In variations of testing, I also got:

Failed to deploy Essbase cube.

Caused by: Failed to build Essbase cube dimension: (Scenario) .

Caused by: Cannot end stream build. Essbase Error(1007083): Dimension build failed. Error code [1060246]. Check the server log file and the dimension build error file for possible additional info.

\\Error Updating Dimension Acct_W_ICP

\\Error Initializing Rule File Information

At this point I was befuddled. I spent a few more hours making sure the view I was using only had the attribute value on the level zero child members, and tried creating the hierarchy in different ways. I tried filters to limit the members, an outer join to another view that had just the level zero members, and adding user-created members as the parent values, all with no luck. If I took out the attribute hierarchy, the build worked fine.

With deadlines looming and no hope in sight, I opened an SR with Oracle to see if I could get some help. I was able to get in touch with Lyudmila, the QA manager for the Essbase Studio team. Her help was invaluable and I owe her a debt of gratitude. She was able to replicate my issue based on the info I sent her. The first issue she was able to determine was not with my problem dimension but with my Scenario dimension. Remember the error:

Caused by: Failed to build Essbase cube dimension: (Scenario) .

Caused by: Cannot end stream build. Essbase Error(1007083): Dimension build failed. Error code [1060246]. Check the server log file and the dimension build error file for possible additional info.

\\Error Updating Dimension Acct_W_ICP

My Scenario dimension was made up of a single user-defined member, “Actual”, a placeholder for later. It turns out that with this combination, a scenario dimension followed by a dimension building attributes, you get errors. She was nice enough to open a bug report on this.

But I was still having problems and she was not. Hmm. I started to look at the screenshots she was sending me of the cube she built and realized that she had built a BSO cube while I was working on an ASO cube. I was trying to build the attribute on the Accounts dimension. Essbase Studio was nice enough to let me know that you are not allowed to have attributes on the compression dimension, so I had turned that off. The error I was getting looked like:

Failed to deploy Essbase cube.

Caused by: Cannot end incremental build. Essbase Error(1060053): Outline has errors

\\Outline verification errors:

\\Attribute Dimension ISICP is not associated to the base dimension

\\Error Associating Dimension "ISICP" to "Acct_W_ICP"

\\Error Initializing Rule File Information

\\Record #8476 - Error in association transaction [410100.10.COGSDir] to [N] (3362)

410100.10.COGSDir N

\\Record #8477 - Error in association transaction [410100.30.COGSDir] to [N] (3362)

410100.30.COGSDir N

\\Record #8478 - Error in association transaction [410100.50.COGSDir] to [N] (3362)

I started looking through the documentation and found a single line in the comparison of ASO to BSO that said you are not allowed to have attributes on the dimension tagged as Accounts. Bummer. Not a Studio bug, but it is making me revisit how I build the cube.

I’m in the process of converting the Accounts dimension to not be tagged as Accounts, changing my time balance tags into UDAs, and using the code from Gary Crisci’s blog entry to add the required formulas to my View dimension to provide time balance functionality.

In the near future I’ll also have a blog entry on some interesting things I’ve seen on drill through with recursive hierarchies.

Wednesday, August 8, 2012

Book Review, New Oracle Aces and A changing on the Hyperion SIG

For those who read my blog, here is the director's cut of a review I wrote for the ODTUG Journal, coming out soon. You get the first look at it and the extended features (just a little extra text).

When I started out in the Essbase world, aside from the Database Administrator's Guide (DBAG) there were virtually no other books on Essbase. Since then, a number of books have been produced on the topic. I have had the pleasure and agony of reviewing some of these Hyperion books. The latest to be released is Developing Essbase Applications: Advanced Techniques for Finance and IT Professionals, edited by Cameron Lackpour and written by Cameron and twelve other authors, many of whom are well known in the Essbase community.

 


In every review I have written, I have given a disclaimer; this review will be no exception. I work for a company (interRel Consulting) that has written eight books on Hyperion products. I have edited a few of them and I’m the author of one, on Essbase Studio. I am an advocate for the professionals in the Hyperion field, and if you have read any of my prior reviews, you realize I don’t hold back or sugarcoat what I think. I am friends with many of the authors of this book and an acquaintance of others. When I agreed to do the review of the book, it was with the understanding that I would be blunt and honest. With all this, I was still asked to do it.

With so many books in what many consider a niche software space, is this book necessary? I think so. I love the Look Smarter books and recommend them to people who are starting out in Essbase. Once you have mastered building a cube, you need more in-depth information. This book supplies that extra information without, in most cases, going into step-by-step detail of how to accomplish things.

I won’t keep the authors in suspense; overall, I really liked this book and it more than fulfills its purpose. The topics are intermediate to advanced and widely varied. While I didn’t always agree with what the chapter author was saying, I understood their rationale behind it enough to accept what they wrote. Almost every chapter had information that would be of interest to an intermediate to advanced developer.

This book is written by 13 different authors, and it is like reading 13 different stories, which is both good and bad. The writing styles of the authors differ greatly; in some cases it was enjoyable and in others it was not. I don’t recommend trying to read this book cover to cover. Find the section(s) you are interested in and read them, then read the other chapters at your leisure to see what tidbits you may have missed over the years.

First, if for nothing else, buy this book for the chapter “How ASO Works and How to Design for Performance.” The content is amazing. I’ll admit, I’ve read the chapter twice now and still don’t understand everything I’m being told. For some this will be overkill, but you can read the highlights and bypass details when your eyes glaze over. This is truly an advanced chapter and opens up the internals of ASO like nowhere else I’ve seen. I’ve already used some of what I’ve read to tune some ASO databases I created.

OK, no book (especially mine) is perfect and this book is no exception: the chapter on advanced Smart View wasn’t as advanced as the title might lead you to believe. I felt it was more basic than advanced and did not incorporate the Smart View toolkit as it should have. This chapter could have been the definitive advanced Smart View guide; alas, it was far from it.

Now that I have discussed the extremes, the rest of the book is pretty solid. I don’t have the space to discuss everything, so I’ve selected a few tidbits. Most chapters had niggling annoyances that prevented them from being great. For example, I liked the chapter on preventing bad data. It gives a good perspective on how to prevent it, but the chapter is a bit verbose and rambling.

The chapter on the JAPI was good; it was aimed more at beginners using it. I think this is a good idea, since intermediate Essbase developers may not have any experience with Java development. A description of development environments and how to include the Essbase classes would have completed it for me. I found the chapter on infrastructure hard to read, maybe because I’m not an infrastructure guy; it was more tables and lists than a description of what you need to do to install or the pitfalls you might encounter.

Note, these comments will not come as surprises to the authors: I have discussed them with them.

I enjoyed (well, enjoyed is the wrong word), I appreciated, ah, better, the chapter on managing a project. There is a good discussion of things a project manager running their first project should be mindful of. The chapter on Groovy was interesting, but I’m still not convinced that Groovy is the way to go (sorry, Joe). Perhaps I’m too square to be groovy.

In summary, I believe once you have gotten past the basics of trying to figure out how to build your first cubes, you should purchase this book, read it, put it on your shelf or Kindle as a reference and thank the authors for their hard work. The book is truly worth the time to read it!

Speaking of Cameron Lackpour, the editor of the book, I would be remiss if I did not congratulate him on becoming an Oracle Ace Director in the EPM space. He is truly a proponent of the Hyperion community and I’m sure he will continue to support it well. It is a well-deserved recognition. Another recent recipient of an Oracle Ace honor is John Booth. He is one of the few infrastructure guys out there willing to share his knowledge and expertise. He has worked hard at getting cloud instances for the masses and is active answering questions and speaking. Like Cameron, he is well deserving. They should both be applauded for their efforts. Remember, an Oracle Ace or Ace Director recipient does not lobby for the award, but is recognized by his or her peers for the work they do in the community.

Finally, a changing of the guard is occurring. I have been on the ODTUG Hyperion SIG for a while now and it was time for me to roll off. The SIG bylaws require that board members serve no longer than a three-year term and wait a year prior to being reelected. Each year three board members are elected, with the restriction that the number of client/customers exceeds the number of vendor/partners. Elections were recently held and three deserving individuals were elected to a three-year term.

Eric Helmer
Vice President of Infrastructure IT Services, Linium
Michael LaBarge
IT lead for Hyperion, Cessna
Deanna Sunde
Senior Director, EPM Planning and Essbase Practice, Hackett Group

I’m sure they will do great expanding on what the SIG started. Give them your support and ideas on how to make the SIG better.

Wednesday, June 6, 2012

Be a part of history

There are a few things I want to tell you about today associated with both the KScope12 conference and the Hyperion SIG.

First, if you have not registered for the most awesome conference ever, time is running out. The discounts expire on June 9th, so sign up now. To save a little extra, put in the code IRC (interRel Consulting) and save an additional $100, but this is only good till the 9th, so don’t delay.

Next, for those of you who have already signed up for the conference: log onto the schedule builder and you will see you have the opportunity to become an ambassador for sessions. This is an easy task: you take attendance in the room, pass out and collect session evaluations, and in return you get a special reception and little tokens of appreciation. You are going to be in the sessions anyway, so why not reap the benefits.

Finally, something not directly related to the conference: the ODTUG Hyperion SIG nominations are now open. If you want to help shape the direction of Hyperion content in ODTUG, here is your opportunity. Things the SIG has done in the past:

  • Helped with content for the conference
  • Created a quarterly newsletter with great information for the Hyperion community
  • Put on a fantastic regional user meeting in Dallas this past year

The SIG needs motivated people to help out. Run for the board. For complete info, go HERE.

Get involved!

Tuesday, April 10, 2012

More on KScope 12

You are probably getting tired of me writing about KScope, and in some ways I don’t blame you. But there are a few things I wanted to let you know about.

First, I wrote a blog entry for ODTUG about the Essbase Beginner’s track and I wanted to remind you all about it. Rather than repeat it here, you can view the entry  Here.

Second, I warned you: the J.W. Marriott is full. Not to worry, KScope has set up an alternate hotel (another Marriott) on the Riverwalk, and there will be shuttles to and from the conference.

Third (and fourth), for the more experienced KScope attendees, there are two programs I want to let you know about: the mentor program and the ambassador program. Mentors are seasoned KScope attendees who volunteer to be paired up with first-time attendees to help them get into the flow of things, choose sessions, offer support, etc. While it does not take a lot of time, it really helps the new attendees. To sign up to be a mentor, just go to the KScope registration page, sign in and update your profile saying you want to be a mentor. Conversely, if you are new and want to be mentored, you can sign up for that there as well.

Next is the ambassador program, a way to help out at the conference. Again, it does not take a lot of time or energy. An ambassador helps the speaker with things like passing out evaluations (yes, we are going back to paper evaluations this year), counts the number of attendees, keeps the speaker on time (a hard task for my ambassadors) and collects the evals. Since you will be attending the sessions anyway, it's a no-brainer to be the ambassador for a few of them. In years past, you got nifty pins and fun stuff for being an ambassador, as well as a special reception. Sign-ups to be an ambassador will begin next month, so keep an eye out for the announcement; you can go onto your schedule and sign up there.

I am anxiously awaiting another new book to review, this time for experienced Hyperion users. As soon as I have had a chance to get my dirty little mitts on it, I'll let you know what I think. If you want to know more about it, you can see it at Cameron Lackpour's blog.

Wednesday, March 28, 2012

KScope Hotel filling up

Don't say I haven't warned you in the past. I've heard the J.W. Marriott in San Antonio for the KScope12 conference is almost full. This means a few things:
1. The conference is growing and will be better than ever.
2. If you don't book soon, you may need to bring a sleeping bag and tent. Not really, I don't think the Marriott would appreciate you pitching a tent on the 16th green.
3. There are other hotels in San Antonio, but not near the Marriott. You would have to commute a minimum of 4 miles.

If you don't want to miss out on the fun, I suggest you register A.S.A.P. for what we expect to be the biggest and best KScope yet.  I'm not calling you a sap, but you might be if you don't attend.

Also, remember to tell your beginner friends we will have an Essbase Beginners track this year, with amazing content to get even the rawest greenhorn started (see, I worked in cowboy talk since the conference is in cowboy country).
One last thing: even though the early bird discount is over, you can still save a few bucks using the discount code IRC (interRel Consulting). Don't wait, do it now or you will regret it later.

Book review: Oracle Essbase 11 Development Cookbook

I was recently sent a copy of the book Oracle Essbase 11 Development Cookbook by Jose Ruiz, published by Packt Publishing. Before I give you my review, I need to give a disclaimer. First, the book was sent to me free to review. Second, I work for a consulting firm that has competing books (although, given the intended audience, that should not be the case). Third, I did not read the entire book, as this type of book is used more as a reference than a step-by-step guide. Fourth, I am probably not the intended audience for the book, as I believe my capabilities are higher than what the book has to offer. Finally, I’m not a big fan of the cookbook style of book.
I will say I was excited to get the book. There are so few books out there for Hyperion, and fewer for intermediate to advanced developers. The book says it is for experienced developers and users; I think it is more for beginner to low-intermediate users. The book offers 90 recipes on various topics, from setting up a relational repository for Essbase, using Essbase Studio to build an Essbase model, and using EAS to build an Essbase model (load rules, dimension build rules, calc scripts), to partitions, security, reporting and more.
I want to start out on a positive note: there are a lot of things to like about this book. If you are trying to do something you have not tried before, it gives you an example of how you can do it. If you want to vary from what or how the book does it, you are on your own. Next, the book covers a wide range of topics, so you could put it on your shelf as a reference. Finally, you can download the examples in the book so you can test them yourself, which is nice (since I got this book as a preview copy, I didn’t get the downloadable materials). I know how hard it is to write a book and I give kudos to Jose for his work. It is evident that Jose has a broad knowledge of the products and was able to put together some nice examples of things developers do.
Perhaps I’m jaded, but there were a number of things I did not like about the book. The biggest one for me was a statement in the book about the downloadable files containing a paper by Gary Crisci. While he is given credit as the author of the collateral, I conversed with him, and he was never approached for consent to have his work included. As an author and speaker, I know I would not want my work included without my permission.
There are other annoyances in the book, some big, some not so big. The following are little things followed by some bigger ones. In chapter 1, the DDL (Data Definition Language) for creating a table in SQL Server tells you to create the child and description columns as VarChar(85). These columns are supposed to be used in building the Essbase members and aliases, but most of us know the maximum length Essbase allows for these is 80 characters.
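To illustrate the point, here is a minimal sketch of what the corrected DDL might look like. The table and column names here are hypothetical, not the book's actual schema; the detail that matters is sizing the member and alias columns at VarChar(80), since anything longer can hold values Essbase will reject during a dimension build.

```sql
-- Hypothetical dimension-build staging table (names are illustrative,
-- not taken from the book). Member names and aliases in Essbase are
-- limited to 80 characters, so size the columns to match.
CREATE TABLE dim_product_build (
    parent      VARCHAR(80),  -- parent member name, 80-char Essbase limit
    child       VARCHAR(80),  -- child member name, 80-char Essbase limit
    description VARCHAR(80)   -- alias; aliases share the same 80-char limit
);
```

Sizing the columns at the Essbase limit means bad data fails loudly at the relational layer instead of surfacing later as dimension-build rejects.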
Next, if you read through all of the recipes in chapter 1, all but one have you create the same database over and over again. If you follow the directions, after the first time you will get errors because the objects already exist. Similarly, in another chapter, you are asked to create the Sample.Basic and ASOSamp.Basic databases. These databases ship with Essbase, and there is no mention that you should delete them in order to get the examples to work.
The book is also inconsistent in the depth of its instructions. In some cases it assumes you will click buttons (for example, to save a selection) before going on, but it doesn’t tell you to do so. In other cases you are given detailed instructions (like logging onto EAS). On the plus side, in places such as partitions, Jose goes into different ways to enter the partition areas.
One of the reasons I don’t like cookbook-style books is that they tell you how to do specific things but don’t go into deviations or a detailed explanation of why you did what you did. This book is not much different: it tells you what you are about to do, gives you the keystrokes to do it, then tells you what you did, but not why or what it really means to you. I realize that doing this would take a much larger book, but personally I want to know why I’m doing something. While there are 90 recipes, the book tries to cover a lot of topics, and I understand that to do them all justice you would need a far bigger book. It at least gives you some basic examples in all of the areas it covers.
So, the big question: would I buy the book? The answer for me is no. As I said, for me it would not add value, and the issue of the included material would turn me off. But for a less experienced developer, it could be a good reference.

As you can see, I am a critical reviewer, as I believe real, honest reviews are necessary for an author to improve. That said, I've not had anyone review my book. If you would like to give feedback, either post it here as a comment or go to Amazon.com or Lulu.com and review the book there. I welcome all feedback, positive or negative.

Tuesday, February 28, 2012

Essbase 11.1.2.2 Documentation

While I am not the first to notice this, the 11.1.2.2 documentation is now on the web. For how long, I don’t know, as it was pulled once before. You can get to it at http://docs.oracle.com/cd/E26232_01/index.htm

With the documentation out, the release can’t be far behind. I’ll be looking through it and highlighting some of the cool new items soon.

Update: I found out yesterday that the software for Essbase is available as a patch. Go to Support.Oracle.com, log in, and go to Patches & Updates.

Search on the product family Hyperion Essbase and version 11.1.2.1 and you will see

[screenshot: patch search results]

I almost have the changes for the Essbase Studio book done, so it will be available for 11.1.2.2 soon as well.

I find it interesting that they released Essbase/Essbase Studio and supporting items (Shared Services, EAS) but not Planning, HFM, or the other products all together. I think this release was holding up Exalytics, so they put it out now. OBI was released last week, so everything is in place for Exalytics.

Stay tuned

Friday, February 24, 2012

A success story

I just got back from the first ODTUG Hyperion SIG User Group Conference and am happy to report it was a roaring success. There was great attendance and, aside from mine, fantastic sessions. Floyd Conrad from Oracle gave a great keynote. I won’t repeat what he said here for two reasons: first, there was a safe harbor statement in it, and second, I actually missed it; I was doing a webcast at the time.

The conference got great support from Applied OLAP, US Analytics, Linium, and interRel. Four Oracle ACE Directors, including Tim Tow, Eric Helmer, and Edward Roske, were there presenting, along with others. On top of that, the refreshments were really yummy and the giveaways were good. Unfortunately, I didn’t win the free pass to KScope12.

Below is Alice Lawrence, Hyperion SIG president and conference chair, beaming about the successful birth of this baby. She should be very proud of this first SIG-sponsored event.

[photo: Alice Lawrence at the conference]

I wish I had taken more pictures, but I completely forgot until it was too late.

I look forward to the next event. Unfortunately, the March meeting in Atlanta had to be postponed. I’ll keep you posted when I know more about when it will be.

Tuesday, February 14, 2012

Hyperion Sig Mini-conference reminder

I don’t want you to miss out: there is still time to register for the ODTUG Hyperion SIG mini-conference in Arlington, TX on Thursday, February 23rd. It will be at Rangers Ballpark and promises to be a great event. The agenda and registration are available HERE. The keynote is being given by Floyd Conrad from Oracle and is entitled The Future of Oracle EPM and Beyond. At least four Oracle ACE Directors will be there giving sessions. Where else can you get a keynote and sessions like that for FREE!

I’ve heard a serious rumor that there will be a similar event in Atlanta on March 20th, but it is not confirmed yet. I’ll get you details once I know more.

Monday, January 16, 2012

Upcoming Opportunities

There are a lot of things going on, and I thought I would let you know about them. First, I’ll be speaking at a Hyperion Solutions Roadshow with the ever-amusing Edward Roske on Jan 24th in Denver. I’m also excited that Toufic Wakim will be giving the keynote. Aside from being a heck of a nice guy and a great speaker, he actually knows what he is talking about and can give great insight into the Hyperion products. It is at the Hyatt Regency downtown. To see the sessions, click here. To register, click here (note: you must register with a company email address).

Second, on February 14th at 12 noon (Eastern time), I will be in a loving mood and will be presenting a webcast for both interRel and ODTUG. It is titled MDX Practical Examples. Note, it will be repeated as an interRel webcast on Thursday, Feb 16th. The abstract for it is:

MDX is the direction of the future, but how do you actually code in it? Join Oracle ACE Director Glenn Schwartzberg as he walks through some of the basics of using MDX in Essbase. This session is peppered with real-world examples of how you do multiple cross joins (and why), the syntax for getting descendants and the first child, and what you can do in MDX that you can’t do in calc scripts. The list goes on and on. This session is a must-attend for those who are getting started with ASO cubes and MDX, and it provides tips and tricks for those who have already been using MDX. To register for the webcast, go here.

Next, if you are in the Dallas area, or plan to be there on Thursday, February 23rd, the ODTUG Hyperion SIG is putting on a half-day mini-conference at the ballpark in Arlington. While the agenda is not set yet, you can reserve a spot by going here. There is supposed to be a sister event in Atlanta. I believe the date will be Friday, February 17th, but I don’t have information on it to share with you yet. When I do, I’ll post another blog entry.

I’m not sure where I’ll be or what I’ll be presenting in March, but in April I am doing a couple of sessions at the Collaborate conference in Las Vegas, and of course in June I’ll be presenting at the Kscope 12 conference. I’ll put more information out on those later.

Of course, while I’m not doing most of them, the interRel webcast series is in full swing with a series on HFM, FDM, and the related components. To get a list of them, contact Danielle White at dwhite@interrel.com.