ORA600 | Last resort Oracle recovery

Long overdue … and by now old news !

It seems everybody has already mentioned it on their blog - so I reckon I can too ;-)

It seems the topic of 'a' new website had already been discussed during several Oracle events but there were no immediate plans to actually implement it.
So when a mail requesting volunteers popped up, Doug Burns made sure to mention my name.

A quick ask-around revealed that the #1 must-have feature was an oakie blog aggregator.
James Morle had also mentioned scrapping the old site and redoing the whole thing in an open-source CMS, preferably Drupal. That was fine by me, as I had already done some work in Drupal, including some custom Drupal modules (like a module to integrate user management with Oracle SSO/OID).
Anyway - as the only real requirement was a blog aggregator of Oaktable members' blogs and then some, I just started working on it in a kind of agile way ;-)

Here are some of the modules I used :
  • must-have modules : admin_menu, CCK (and several CCK fields like email, date, imagefield), panels for the frontpage - and you can't do without the views module
  • imageapi, imagecache : needed for on-the-fly rescaling of uploaded images, for example members' avatars or book covers
  • feedapi and feedapi_mapper : I prefer these modules over the standard aggregator module for creating a blog aggregator, as they let you save feed content in a node, making the content searchable and easier for Google to digest. You can also map tags within feeds to node tags, so they show up in your tag cloud (double bonus !)
  • cumulus and tagadelic : for creating a funky tag cloud
  • twitter and juitter modules : for aggregating Twitter feeds and showing Twitter trends. I will add the Twitter trends for all major events, as I did for HOTSOS2010 and MOW2010.
  • browscap and mobile_theme : if you use a mobile device, the site switches to the iwebkit theme … looks ok on an iPhone ;-)
  • content_slider : lets you create a kind of slide show from node content (for example the events and books slides)
  • google_analytics, google_cse, pathauto, seo_checklist : to make sure the whole thing is SEO-friendly
  • subscriptions : authenticated users can subscribe to content - they will be notified when content is added or modified
  • faq and faq_ask : I've used these to implement the Oaktable challenge. It's not ideal, as it doesn't let me migrate the old questions, but it does the job.
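To illustrate what the feedapi + feedapi_mapper combination does conceptually - saving each feed item as a searchable node and mapping feed categories to node tags - here is a minimal Python sketch. The function and field names are my own for illustration, not Drupal's actual API :

```python
import xml.etree.ElementTree as ET

SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Some Oakie Blog</title>
  <item>
    <title>Latch contention revisited</title>
    <link>http://example.com/latches</link>
    <category>performance</category>
    <category>internals</category>
  </item>
</channel></rss>"""

def feed_items_to_nodes(rss_text):
    """Turn each <item> into a node-like dict, mapping the feed's
    <category> elements to node tags (the feedapi_mapper idea)."""
    root = ET.fromstring(rss_text)
    nodes = []
    for item in root.iter("item"):
        nodes.append({
            "title": item.findtext("title"),
            "url": item.findtext("link"),
            "body": item.findtext("description") or "",
            # mapped tags end up in the site's own tag cloud
            "tags": [c.text for c in item.findall("category")],
        })
    return nodes

nodes = feed_items_to_nodes(SAMPLE_RSS)
print(nodes[0]["title"], nodes[0]["tags"])
```

Because the item becomes a first-class node instead of a transient aggregator row, it gets indexed by the site search and crawled by Google like any other page - which is the whole point of preferring feedapi over the core aggregator module.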

I've also used some modules for oakies only - like :
  • announcements : adds announcements to the site encouraging oakies to post content or start their own blog on the Oaktable site … the site doesn't only aggregate, it also allows members to write blog posts that are automatically included in the site's blog RSS feeds
  • votingapi and advpoll : bringing democracy to the Oaktable when choosing when or where to eat during the next Oracle event, using polls
  • webform and webform_report : lets you easily create HTML forms without programming and report on the results - this already came in useful for ordering the new Oaktable polo shirt 8-)

As you can see - these are all basic modules. There was some custom work though :
  • an 'oakie' module : this provides a mapping between the user profile and feedapi, so users can add their blog RSS feed through their profile instead of via the more complex feedapi form
  • I had to tweak faq_ask for the Oaktable challenge
  • By far the most difficult task was tweaking the themes
    • 'admire_gray' is the main theme - I had some issues at smaller resolutions, so I had to make some modifications to its stylesheet
    • 'iwebkit' - the mobile theme had some real bugs in it, as it did not show pagers - I had to dive into the theme code here to fix it !

I migrated some old articles and scripts from the old site, leaving the rest to the original authors (as you can see … that didn't work out so well). However, Jonathan Lewis, Tanel Poder and Christian Antognini have already added some new articles, which gives me hope ;-)

Anyways - I hope you like it. 
 

Expert Oracle Practices

I was just woken up by the postman, ringing my doorbell several times.
I jumped out of bed and opened the door half naked and still sleep-drunk. I was on call this week, and although there weren't many calls, I went to bed at 2AM each day. If there's one thing I hate, it's going to bed and getting a call an hour later - so I go to bed late at night and hope I don't get a call until 8AM, when a colleague takes over.
Anyway, I knew it was the postman delivering the new Oaktable book, 'Expert Oracle Practices'.
Because I had bought it from Amazon in the US, I knew I had to pay import taxes (10 euro) and thus needed personal delivery. Hey, I wanted to have it asap ;-)

This is one of those books you just need to have !
I've already mentioned here that I love going to the UKOUG - and one of the main reasons is that I love to hear how other DBAs/developers do their work. How do they back up their databases, how do they tune them, what problems are they having ?
The new Oaktable book is very much like that - how do some of the most respected database consultants handle their environments ? It's not a low, lower, loooower, Steve Adams-low-level book. But at first glance it looks more technical than the first Oaktable book (Oracle Insights : Tales of the Oaktable).
And that was also the aim - competing with the other new Oracle 11g books out there.
I'm ashamed to admit that I turned down an offer to write a chapter (even two chapters). I was offered a chapter on backup and recovery and one on NLS issues.
The first one is obvious, as I have some experience with recovery ;-) - and I could have written some funny anecdotes and picked out some cases that went bad and why.
But the aim wasn't to write funny stories but technical content.
I have a confession to make - I hate writing technical content/documentation - but I love programming.
If you want to punish me - get me to write technical docs. One of the reasons I quit my previous job was that at one point, they had me writing more technical docs to be ISO9000/1/2 compliant than actually programming stuff. I don't know why I agreed to 2 chapters, but in the words of a famous entrepreneur, 'screw it, let's do it', I set myself a goal :
  • try to write 10% of one chapter in one week
Mind you this was the beginning of the summer, the worst possible timing for me, as I had filled up most of the summer weekends with other things to do. 
For the backup and recovery chapter, I already had some ideas on how to handle it - I wanted to take a bottom-up approach.
Most books take a top-down approach - they explain the concept, the wider picture, go to the commands and then some examples, and maybe some internals.
I always seem to learn things bottom-up - start with the internals and work my way up to the commands. It's like bottom-up versus top-down parsing ;-). If you understand the internals, the mechanics, you know what you're doing and thus don't need to memorize all the commands … you just need to understand them to know what's possible !
Anyway the idea was to first explain block internals, how data is stored and how redo logging works, using a chessboard.
Let's say 2 guys are playing 10 chess games on 10 boards at the same time, outdoors. They each move pieces, the movement of the pieces gets logged by a third person on a piece of paper, and once in a while a 4th guy writes down the location of every chess piece on each board on another piece of paper.
The 2 guys playing the chess games are database processes changing rowpieces (chess pieces) in the datablocks (chess boards) - the guy writing down the movement of the pieces (redo vectors) is the logwriter process. The guy writing down the complete chessboards on paper is the dbwr process. If it starts to rain, they just scrap the boards (shutdown abort). When they start playing again, they get the paper containing the complete chessboard layouts and the paper with the chess piece movements and redo the movements (instance recovery). It is basic journaling !
With this story in mind I could have explained backup and recovery in a very simple way, so everybody understands, and then worked my way up.
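The chessboard analogy fits in a few lines of Python - a toy model of the journaling idea, not Oracle's actual mechanism (the class and method names are made up for illustration) :

```python
import copy

class ChessJournal:
    """Toy redo logging : 'boards' are the data blocks, 'redo' is the
    move log (logwriter), 'checkpoint' is what dbwr writes to disk."""
    def __init__(self):
        self.boards = {1: {"e2": "P"}, 2: {"d7": "p"}}  # in-memory blocks
        self.redo = []           # every change vector, in order
        self.checkpoint = None   # occasional full copy of the boards
        self.ckpt_redo_pos = 0   # how much redo the checkpoint covers

    def move(self, board, src, dst):
        piece = self.boards[board].pop(src)
        self.boards[board][dst] = piece
        self.redo.append((board, src, dst))   # log the change vector

    def take_checkpoint(self):
        self.checkpoint = copy.deepcopy(self.boards)
        self.ckpt_redo_pos = len(self.redo)

    def crash_and_recover(self):
        # shutdown abort : memory is gone; restore the checkpoint,
        # then replay redo generated after it (instance recovery)
        self.boards = copy.deepcopy(self.checkpoint)
        for board, src, dst in self.redo[self.ckpt_redo_pos:]:
            piece = self.boards[board].pop(src)
            self.boards[board][dst] = piece

j = ChessJournal()
j.move(1, "e2", "e4")
j.take_checkpoint()
j.move(2, "d7", "d5")     # a change made *after* the checkpoint
j.crash_and_recover()
print(j.boards)           # both moves survive the crash
```

The point of the replay loop is exactly the chessboard story : the checkpoint alone is stale, the redo alone is just a diary, but checkpoint + redo gets you back to the moment of the crash.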
Anyway - after a week of writing in the evenings, I had one page ! One page, and I was so bored !!! I'm definitely not a technical writer … a blog now and then … ok. A presentation once a year … fine. But 2 chapters in a book … I don't think so.
So I got back to Jonathan Gennick after a week. I didn't make my goal - I could not see, with the deadlines in place, how I could find the time (I have 2 jobs and need to sleep now and then) to write those two chapters, and I thought it best to let him know before I signed a contract. And you know what, I haven't regretted it since !
Btw - about the second chapter, NLS issues - I live in Belgium. We have 3 official languages : Dutch, French and German. And we use English to keep everybody happy. So we know allllll about NLS issues 8-)
Now - go buy this book - it's awesome !

On a different note … I do have more time in the winter season, and I was able to qualify for the Flemish Poker championship or 'Pokerkampioen Van Vlaanderen'. It's a freeze-out tournament covered by TV. It will be aired in March on 2BE and JimTV. There were 674 entries for the regional finals. I made it through the semi-finals and busted out in 28th place. I wrote about it here. It's all non-technical content ! :-D

UKOUG2009 - I too can blog about it ...

I don't know how they do it !

When I go to a conference, I make an agenda and mostly follow it all the way through - I might make a few changes here or there, but I won't easily skip a slot … unless an emergency comes up … which happened only twice in these 3 days ;-)

That brings me to my first observation of UKOUG2009 … wifi sucked !!!

When I go to a conference I still need to be available - I *need* wifi access to survive. I was only able to get the ICC wifi to work during the parties !

The mickey-mouse portables available in the exhibition hall and registration area didn't do me any good either - I couldn't ssh out of them !

Luckily I found some unprotected wifi signals in the exhibition area - my gratitude goes out to the people who had set up the NETGEAR wifi appliance - you delivered excellent internet service, although I reckon this was unintentional, as it was password-protected on the last day of the conference :-D

Here's a way to get people to come to your booth - if you have wifi, name the network after your company and give people the password when they visit you - success guaranteed !

Anyway - I got sidetracked - I was wondering how people do it - blogging and twittering during a conference while there is so much to do, so much to see, so much to process. And I reckon a lot of people were twittering and blogging … that was probably the cause of the bad wifi !!! 8-)

So I decided to blog a bit about UKOUG2009 when I got home.

We arrived on Sunday at Jurys Inn - which is about the best hotel you can get without going to the Hyatt. Every year I seem to bring more and more people from the company with me - this year our company had sent 9 people ! The hotel looked a bit quiet, so we decided to head to All Bar One. Which again looked a bit deserted … it seems the credit crunch had taken its toll here !

So what about the sessions ? I have a broad interest - the last couple of years I've been doing more and more application server stuff, networking, identity management and programming than hardcore DBA work.

I had sessions on my agenda ranging from 'Getting the best out of hardware load balancers' to 'Integrating Forms with Apex', via 'Authentication, SSO and authorization for WLS' and Tanel's 'Latches and mutexes' talk. And they say I am a database geek - I reckon I attend more talks on middleware than databases.

Now for those of you moaning about how bad some presentations were and how you could do so much better - go out there and do it. Write a presentation and/or whitepaper and get your butt out there.

One of my colleagues attended a talk on ADF 11g by someone from Oracle - afterwards they had a chat, and in between my colleague mentioned how his team had created a kick-ass app in ADF 11g. To which he got the reply 'well - show it to me' ! End result - if he writes a paper about it, he may present it at Oracle OpenWorld 2010 !

So what about the sessions I attended – here are some highlights :

Alex Keh's talk on Active Directory and Windows security integration with Oracle databases - Alex's presentation was great : excellent topic, good presentation style, working demos, and I learned a couple of new things.

Joel Goodman from Oracle education had a strange presentation title - something about DB Links PART 2. I couldn't find part 1 in the agenda. Apparently he had submitted 2 presentations, but part 1 didn't get accepted. Nonetheless - Joel Goodman knows his stuff - he also reminds me of Wolverine - deep voice, American accent … it's almost as if he tries to hypnotize you while pumping valuable knowledge into your brain. (I wouldn't mind having him around in a street fight ;-) ) This presentation was all about distributed transactions and how crashed databases involved in distributed transactions can lock complete tables until you force commit or force rollback the transactions. I had completely forgotten about that - the last time I had to do something like that was in the Oracle7 era. So 2 thumbs up ! Joel puts his presentations up for free download here : http://dbatrain.wordpress.com/articles-papers-and-presentations

Joel's partner in crime Harald van Breederode, an equally excellent presenter, talked about 11g SQL Plan Management. This is something I get confronted with quite a lot - plan stability. Don't you hate it when a new execution plan just hogs down a complete database ? In my opinion, Oracle should have introduced SQL Plan Management when they introduced the CBO ! The one can't live without the other, and yet it has taken over a decade to get this feature - my guess is it was just too resource-hungry for the hardware available back in the day. I love to quote Hannibal Smith on this : 'I love it when a plan comes together !' Harald hasn't posted his presentation on his blog - but who knows, one day he might !

I'm a Java guy, but that doesn't mean I can't appreciate PL/SQL - I like it better than, say … Perl - so I went to see Steven Feuerstein's 'High Performance PL/SQL' - very interesting stuff. He spent quite some time on ways to cache data in the PGA/UGA using all sorts of tricks, ending up with 11g's result cache. Not too bad, I thought - but Connor McDonald showed a little more skepticism about the result cache. He proved that if the result cache is being populated and concurrent sessions are running the same query that is populating the cache, extra locking occurs. As usual, Connor delivered one of the best - well, probably most definitely *the* best - presentations of UKOUG2009, in his usual style, firing away *485* slides in 45 minutes !

I went to quite a lot of WebLogic/iAS presentations. It looks like the people Oracle gained from the acquisitions haven't the slightest idea of what Oracle Portal/Forms/Reports are, and *how many* people are still using them. Every presentation was heavily (not heavenly) focused on Java functionality, with Forms/Reports dangling, almost falling off the slides ! One talk was quite hilarious - it was about SSO and WLS … to be honest, I was expecting something about how 'legacy' apps like Oracle Forms/Reports/Portal and Oracle SSO (OC4J_Security) would integrate with the new WebLogic app server. Instead it was a talk about Oracle's Identity & Access Management suite built on top of WLS. No biggie - quite interesting, especially from someone who co-wrote the SAML standards.

But what I suspected would happen, did happen … someone asked a question about how all this would integrate with Oracle Portal when migrating from iAS 10.1.2.3 to WebLogic …

This was going to be good, as I was also interested in that. The answer was way off - it was quite clear he had no idea what Oracle Portal was, or how deeply it is integrated with PL/SQL (and the PL/SQL SSO API) and OID. The person who asked the question tried to reframe it, but again … no serious answer - 'no, but in Java … blah blah blah' … you could see the disappointment as the person who asked the question politely nodded along and thanked the presenter for this very straightforward answer.

This confirms to me that Portal is *dead*. If you run a public website on Portal - get Drupal. It's cheap, it has way better SEO and it thrives on a very large community. Oh - and I have created a Drupal module that integrates Drupal user management with Oracle SSO. Throw away Portal - it's a real pain to migrate upwards (I'm stuck at Portal 10.1.2.3, as I'm unable to migrate to 10.1.4.2 even after a two-week SR with Oracle). Throw it out … you'll feel much better and people will like you for it :-P

One other hilarious moment was Julian Dyke's presentation on 11gR2 new features. Julian makes excellent presentations/slides/animations, so for that reason alone, if I can, I tend to go to any of his talks. However, at the beginning of his presentation, he mentioned he had forgotten a word in his title : 'RAC' - '11gR2 *RAC* new features'. This resulted in quite a lot of people leaving the room. I decided to stay, purely based on Julian's reputation. However, I had to leave afterwards when I got an important phone call … (damn wifi) and I was glad I had an excuse to leave the room. I took the 11g RAC course in February of this year. When Julian started to talk about all the new RAC stuff like SCAN, I felt depressed. It was like I could throw away half the things I had learned in the course - 'this is new, that's new, that's changed, this is obsolete, this works like that now …' … too depressing …

 

To end this post - here's a picture of a colleague of mine at the Fire & Ice party (disclaimer - the girl had already left; she's not under the table)

Gert at UKOUG fire & ice party 

I need to lie down now after another great UKOUG conference … till next year !!!!

 

Cloud computing - my server is out there man !

Price is non-negotiable
 
This website is hosted on one of my servers, co-located at a data centre near Brussels. It's a DELL PowerEdge 850E with a 340W power supply.
I use my own dedicated server because I *sometimes* need large amounts of storage - for example, if a customer has a crashed database and wants to upload his datafiles. We're talking 100Gb or more.
So I need large amounts of storage - not all the time - and bandwidth. The server is hooked up to a 100Mbit internet backbone, so I've got that one covered too !
A couple of years ago, my only option was to buy a physical server, find a hosting company that offers co-location and hire 1U rackspace.
Cloud computing has changed this game - that and the price of electricity at datacenters.
Two months ago I was thinking of buying a server to replace the old DELL - it was three years old and I wanted bigger disks ;-)
Coincidentally, when I had configured a new 1U box on DELL's website, I got a call from my hosting company.
After hearing a whole story about how they got new racks (more current, better cooling, bla bla bla) and that I had to migrate my server, they dropped the bombshell on pricing.
Electricity prices had gone up and they'd had to renegotiate prices with the datacenter.
Much to my surprise, the new price was three times the current price. Now I don't know about you, but if you have to pay 3x the price for the same service, something is wrong 8-) (did I mention that the year before they had also upped their price by 20% ?)
Suddenly a 340W power supply became extremely expensive. Have you ever had a look at those 200W servers … I'd rather run a business on a laptop than on one of those mickey-mouse servers … at least my old DELL has hardware RAID and a decent cpu …
So - either they were lousy negotiators - or they wanted their customers to pay for their new - probably golden - racks.
It was a lousy sales pitch too - the sales guy offered to let me think about it and check out the competition … which was all cheaper than them ;-)
Anyway - prices were non-negotiable - so I first checked out several competitors in the same and other datacenters.
Now with co-location - all you get is rackspace, current and network. That's it.
One hosting provider's current is no better than another's - so the only criteria are price and maybe the datacenter's reputation. (Well, ok - there are minor differences regarding network limits, but co-location is basically easy money for providers.)
So there I was, comparing prices of different providers at different datacenters. And as price and network were my only criteria, I chose to move to a small provider at the Scarlet datacenter in Vilvoorde. It's a really small and young provider/company, as they only have 2 racks there (and 2 racks at another datacenter).
Now, I didn't want any downtime - that meant renting computing power, temporarily migrating the website and all the programs that generate DUDE licenses (which is a bit more complicated than running some 'make' or 'ant' scripts), and switching DNS, all while doing the physical transfer of the server.
 
 
The Anjo connection
 
I remembered one time I was at Anjo Kolk's place. I had bought 2 second-hand Sun servers from him. He had bought 40 of them in the US and had them shipped to The Netherlands, so he could spare a server or two :-P
Anyway - I told him I was going to use the servers for a new project of mine, where I needed a linux cluster for running my own DNS servers. At which point he started demonstrating Amazon EC2 and how I could use cloud computing to do what I wanted to do, without the hassles of hiring extra rackspace or getting a NAS for storage.
I must say I was intrigued … but it all looked a bit fragile and over-complicated.
I mean, I just wanted some cpu, some storage and network.
With Amazon I had to choose an AWS image in the EC2 cloud - then, if I wanted persistent storage, I had to rent Amazon S3 storage. Huh, persistent storage ?
It looks like Amazon's VMs are snapshots of common images, and if you shut down your VM or it crashes, the VM reverts to the original image, losing all your changes … unless you backed them up to your persistent S3 storage … which couldn't be mounted as a file system but had to be accessed through HTTP … huh ???
I just want disk space … how hard is that ?
Anyway - because of the migration and uptime issues I was facing, I took a new look at Amazon EC2, and it seems they finally have mountable persistent storage, which they call Amazon EBS (Elastic Block Store) - or what I like to call : a disk.
Still, if your VM crashes, you lose your configuration unless you back it up to your EBS or somewhere else and reconfigure after the crash (someone correct me if I'm wrong).
That's ok, as I only needed a VM for a day or two while my own server was physically being moved to the new datacenter. So I signed up for the Amazon EC2 service. And then I waited … and waited … It took them 2 days to activate my account … too slow.
 
 
Amazon and Goliath
 
While I was waiting for my account activation at Amazon, I decided to look at other cloud server hosting … there is only a handful, and most of them are based on virtualization software like VMware, Xen and KVM. The downside is that they all offer fixed configurations, like :
  • Small config : 1 cpu core, 1Gb mem, 20Gb disk
  • Medium config : 2 cpu cores, 2Gb mem, 40Gb disk
  • Large … you get the idea
In my opinion, cloud computing should provide flexible configurations at flexible prices.
This was only cloud computing in the sense that you pay by the hour - it's not flexible like Amazon, where you can say : I want 1 cpu core, 1Gb of memory and 200Gb of disk.
Nooo, if you wanted 200Gb of disk space, you needed to take the *extra large config* with 16 cpu cores and 8Gb memory that happened to come with 200Gb of disk - and that will cost you as much as half a rack and 4 DELLs if you keep it running for a whole year.
But wait - I found the one exception that absolutely lets you choose all 3 (cpu/mem/storage) independently - and they are called Elastichosts.com, a UK-based cloud provider.
And what's more - they let you try it out for free for 5 days !
And … they give you a fixed IP address, unlike Amazon's elastic IP address, which makes you jump through 20 burning hoops before you can send/receive mail from your server without it being marked as spam.
Elastichosts runs on Linux KVM, and they have made sure all their interfaces are very simple - a disk is a disk, a static IP really is a static IP.
No fuzzy acronyms.
In fact, I was reminded of Larry Ellison's opinion of cloud computing … it's just a simple server that is 'out there' 8-) - it needs cpu, memory, network and electricity - and, for me, affordable flexible pricing - that's it !!!
Maybe I'm an idiot but Amazon just took it a step too far and made it too complex !
In case you missed Larry's interview on cloud computing :


 
Epilogue
  • Anjo has been running an Amazon AWS instance for over a year without a crash.
  • I migrated the ORA600 server, without downtime, to the Scarlet datacenter, which is located in the same building as the Belgian Oracle office (you've got to love the irony :-D )
scarlet dc 
  • A friend of mine, who was also co-locating a server at my original provider, was offered negotiable prices a couple of weeks later (maybe tripling their prices wasn't the best of ideas). He too migrated to another datacenter.
  • If a hosting company ever pulls a trick on me again, I'm throwing away my hardware and moving to elastichosts.com for good.

ORA600 Aggregator mobile theme

I caved in ... I bought an iPhone.

I still think the price is a total rip-off here in Belgium - and there's a 6 week waiting list.

All because in Belgium forced bundled sales are not allowed. This makes the iPhone simlock-free, but it comes at a price.

Anyway - when I looked at the news aggregator, I thought I could do better, so the aggregator now detects mobile browsers and switches to a mobile theme. The implementation uses a mobile theme, browser detection (browscap), a theme switcher and the iWebKit. It was done in 15 minutes, all thanks to the Drupal framework 8-). I wonder how long this would take in Apex - or, god forbid, Portal !
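Conceptually, the browscap + theme-switcher combination boils down to a user-agent sniff like this Python sketch - the marker list and function name are hypothetical, just to show the idea (browscap's real database is far more thorough) :

```python
# crude mobile detection, in the spirit of browscap + themeswitcher
MOBILE_MARKERS = ("iphone", "ipod", "android", "blackberry", "symbian")

def pick_theme(user_agent, desktop="admire_gray", mobile="iwebkit"):
    """Return the theme to render with, based on the User-Agent header."""
    ua = user_agent.lower()
    return mobile if any(m in ua for m in MOBILE_MARKERS) else desktop

print(pick_theme("Mozilla/5.0 (iPhone; U; CPU like Mac OS X) AppleWebKit"))
print(pick_theme("Mozilla/5.0 (Windows NT 5.1) Firefox/3.5"))
```

In Drupal the modules do this per request and hand the result to the theme layer, which is why wiring it up took minutes rather than days.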

When I started this site and needed to pick a framework, I was favoring Apex because I already knew it, and I have some experience with Oracle Portal. However, the problems we were having at a customer site with Portal (or Apex) and search engine optimisation (SEO) drove me to Drupal, which does a great job at SEO. It did involve learning PHP (but I already knew Perl really well) and of course the Drupal hook API (which is really another way of thinking and programming). But the payoff comes with all the free modules you can find on the internet, letting you build new functionality in a matter of minutes !!!

 

 

Oracle Performance Firefighting

I've been  - and still am - very busy lately. 

However, when I saw Coskan's review of Craig Shallahamer's book, Oracle Performance Firefighting, it reminded me that I had put this topic on my blog todo list.

 

Oracle Performance Firefighting

But unlike the other topics on my list, I really wanted to get this one out first.

It's been a while since I've been so excited about a book.

Craig is the guy behind Orapub and also the author of Forecasting Oracle Performance.

I'm a bottom-up learner - like bottom-up parsers (as opposed to top-down parsers), I like to get my facts as detailed as possible and then work my way up to more general topics. For example, when I learned my backup & recovery skills, I learned about block internals first, as opposed to diving into all sorts of high-level commands.

This means I really appreciate books like Steve Adams' Oracle8i internals, James Morle's Scaling Oracle8i or Bach's all-time classic The Design of the UNIX Operating System.

I think Craig's new book fits right in there.

I found out about the book a couple of months ago as Daniel Fink told me he was reviewing this book - I immediately ordered it on Craig's site before it was available on Amazon.

I must admit, I've only read chapters 6, 7 and 8 - covering buffer cache internals, redo internals and shared pool internals. These are 3 chapters every DBA should read !!! Those 3 chapters are worth the price on their own !

So if you're buying only one Oracle book this year - buy this one !!!

And while you're buying books, get Christian Antognini's Troubleshooting Oracle Performance too ! 

 

PS - for the fans of Oracle Insights : Tales of the Oaktable - TOTOT2 is on its way !

Oracle buys Sun ... time to buy some Oracle shares

 

It seems Oracle succeeded where IBM failed :

http://www.oracle.com/sun/index.html

"On April 20, 2009, Oracle announced it has entered into an agreement to acquire Sun Microsystems (Sun). The proposed transaction is subject to Sun stockholder approval, certain regulatory approvals and customary closing conditions. Until the deal closes, each company will continue to operate independently, and it is business as usual.

The acquisition combines best-in-class enterprise software and mission-critical computing systems. Oracle plans to engineer and deliver an integrated system—applications to disk—where all the pieces fit and work together so customers do not have to do it themselves. Customers benefit as their system integration costs go down while system performance, reliability and security go up.

 

Investor Conference Call
There will be a conference call to discuss the transaction at 5:30 a.m. Pacific time.

  • Investors can listen to the conference call by dialing +1.719.234.7870, passcode: 923645
  • A replay will be available for 24 hours after the call ends at +1.719.884.8882, passcode: 923645"

 

FBI internal investigations...hidden virtual columns in 8i.

I was called in last week for a performance issue at a customer site.
Let's set the scene a bit …

They had moved their main production OLTP database from AIX to Windows and had been struggling with performance for a couple of weeks. Hardware-wise, they went from expensive but old Bull hardware to a DELL box (with 8 cores), and from 4Gb to 16Gb of RAM. So you would expect the database to fly on this new hardware.
Now - this database was/is pulling around 1000 concurrent users … The first thing they ran into was the process-addressable memory space on 32-bit Windows, which is 2Gb. This of course led to a lot of ORA-04030 errors and angry users. Anjo recently made a blog entry about it.

So steps were taken to optimize Oracle for Windows, which basically consists of putting the buffer cache above the 2Gb limit - Metalink note 231159.1 explains all of this. But still, as this is an Oracle 8.1.7.4 database handling a thousand users, all other memory-related parameters (remember sort_area_size, hash_area_size etc.) were set to a minimum to avoid ORA-4030s.
The result of this memory balancing operation was that performance was below average, and I was sent in to have a look at it. I must admit I hadn't tuned a whole environment for a while, but I had done a lot of 8.0 and 7.3 migrations to 8i back in the day, and knew pretty much what the problems of the 8i CBO were. Oh - why are they still running 8i ? Well, the database application is to be decommissioned this year, so they didn't bother to move up to the latest Oracle version. The fact that they wanted to invest in getting performance up proves the power users were about to lynch the IT manager :-P
Anyway - 5 minutes into the meeting with IT, a second consultant, from a rival company, walks in. That's how bad performance was. I don't mind competition - in fact, I didn't see it as competition. The guy was more than reasonable, and during the day we acted as each other's sounding board.

The first thing I noticed were the bad response times on the disks and the large amount of physical IO the database was doing. The database was running on a Dell SAN…. I didn't even know Dell produced SANs !!! :-p But I knew one thing - this SAN was sh*te. It used 8 disks in RAID5 (coming from 4 disks) and I reckon the write cache must have been quite small. In fact, at first I thought the SAN cache had been disabled - something I had encountered multiple times: when the battery of the cache dies, the cache on the SAN is disabled to avoid corruption in case of power failure.
Clearly, this DB was doing too much IO for the SAN… way past its critical point in the hockey-stick chart (see also Anjo's paper on this) - oh, and we couldn't increase any memory settings within the DB of course.


So the only option left was hunting down individual SQL statements and tuning them, so the overall IO would decrease. The impact of SQL tuning can be *huge* - especially with 1000 OLTP users. Just imagine a SQL statement that does 5000 physical IOs and is run by each of these 1000 users every minute (5x10^6 IO/min) - if we can tune it to use only 20 physical IOs (20x10^3 IO/min) then that would have a dramatic effect on overall performance.
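The back-of-the-envelope arithmetic above can be sketched as follows (the figures are the hypothetical ones from the text, not measurements from the customer system):

```python
# Impact of tuning one hot SQL statement, using the example numbers above.
users = 1000                 # concurrent OLTP users
executions_per_min = 1       # each user runs the statement once a minute

before = 5000 * users * executions_per_min   # physical IOs per minute, untuned
after = 20 * users * executions_per_min      # physical IOs per minute, tuned

print(before)          # 5000000 -> 5x10^6 IO/min
print(after)           # 20000   -> 20x10^3 IO/min
print(before // after) # a 250x reduction for this single statement
```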


I noticed that there were a lot of full table scans … that's not necessarily a bad thing, but sometimes it just is ! From the old days, I knew that Oracle 8i's CBO (with default settings) is quite fond of full table scans and parallel query. I had seen a lot of migrations from 8.0 to 8i where performance went bad just because of that. Now, we're not interested in PQ in an OLTP system, so we disabled it.
Then there's Oracle 8i's fondness for full table scans. The CBO didn't have any CPU costing back then, so one of the things to do was to set optimizer_index_caching and optimizer_index_cost_adj. There has been a lot of discussion of these parameters in the past, but my experience (in 8i and on OLTP databases) is that about 95% of all queries benefit and 5% of the queries will perform worse. The main problem with lowering optimizer_index_cost_adj is rounding errors in the CBO's cost calculations. This can lead to the CBO picking an index based on its alphabetical order.
Simply explained - let's assume the cost of using index A is 9 and the cost of using index B is 5. If optimizer_index_cost_adj is set to 10, then the cost is divided by 10. So the cost of index A will be 0.9 and the cost of index B will be 0.5. When rounding kicks in, the cost of both A & B will be 1, and the optimizer will choose the index based on the alphabetical order of its name. (disclaimer - simple explanation - read Jonathan Lewis' book on the Oracle CBO)
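The rounding tie can be illustrated with a toy model - this is not the CBO's actual arithmetic, just the simplified explanation above expressed as code:

```python
# Toy model of how a low optimizer_index_cost_adj collapses two distinct
# index costs into the same rounded cost, so name order breaks the tie.
def adjusted_cost(index_cost, optimizer_index_cost_adj):
    # the parameter scales the index access cost as a percentage
    scaled = index_cost * optimizer_index_cost_adj / 100.0
    return max(1, round(scaled))   # costs are whole numbers, minimum 1

costs = {"A": 9, "B": 5}
adjusted = {name: adjusted_cost(c, 10) for name, c in costs.items()}

# both indexes now cost 1 - the optimizer can no longer tell them apart
# and effectively falls back to alphabetical order
print(adjusted)   # {'A': 1, 'B': 1}
```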

<!--break-->

As expected, disabling PQ and setting the optimizer_index_* parameters dramatically improved performance. So now I could focus on the individual statements that were still slow. And then I hit what I call bad database design.
Some tables were using the CHAR datatype … nothing wrong with that if you like wasting space. But the CHAR datatype was also being used for storing NUMBERs !!! The CHAR columns were indexed, but that doesn't help much with WHERE clauses like 'where <char column>=<number>'.

Let's setup a small testcase :

SQL> create table duh ( n number, c char(30) );
Table created.
SQL> create index i_duh_n on duh(n) ;
Index created.
SQL> create index i_duh_c on duh(c) ;
Index created.
SQL> insert into duh (n,c) select obj#,obj# from sys.obj$ ;
12201 rows created.
SQL> commit ;
Commit complete.

Querying the number column N will result in an index range scan :

SQL> select * from duh where n=2 ;

        N C
--------- ------------------------------
2 2

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=2 Bytes=76)
1    0   TABLE ACCESS (BY INDEX ROWID) OF 'DUH' (Cost=2 Card=2 Bytes=76)
2    1     INDEX (RANGE SCAN) OF 'I_DUH_N' (NON-UNIQUE) (Cost=1 Car d=2)

No problem here - but when we query the CHAR column C as shown below, the optimizer performs an implicit TO_NUMBER() conversion on the column, and thus the index on C can't be used.

SQL> select * from duh where c=2 ;

        N C
--------- ------------------------------
2 2

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=44 Card=123 Bytes=4674)
1    0   TABLE ACCESS (FULL) OF 'DUH' (Cost=44 Card=123 Bytes=4674)

All problematic queries were like the ones above. Sometimes you'd see a little variation like this :

SQL> select * from duh where trim(c)='2' ;
N C
--------- ------------------------------
2 2

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=44 Card=123 Bytes=46 74)
1    0   TABLE ACCESS (FULL) OF 'DUH' (Cost=44 Card=123 Bytes=4674)

This means the developers knew something was fishy about using CHAR as a NUMBER ;-)
Anyway - there's a quick fix for this - and it's called function-based indexes (FBI). They were introduced in Oracle 8i, so, two thumbs up, I could use them. Well, I could use them after setting QUERY_REWRITE_ENABLED=TRUE and QUERY_REWRITE_INTEGRITY=TRUSTED.
But I had already done that, as I had seen a lot of 'WHERE UPPER(x)=UPPER(y)' type where clauses - a developer's favorite performance killer !
So once I had identified all the queries that were full-table-scanning these 'CHAR as NUMBER' tables, I could do the following:

SQL> create index i_duh_c_fbi on duh(to_number(c)) ;
Index created.
SQL> select * from duh where c=2 ;

        N C
--------- ------------------------------
2 2


Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=2 Bytes=76)
1    0   TABLE ACCESS (BY INDEX ROWID) OF 'DUH' (Cost=2 Card=2 Bytes=76)
2    1     INDEX (RANGE SCAN) OF 'I_DUH_C_FBI' (NON-UNIQUE) (Cost=1      Card=2)

BONUS! If you know that those tables were around 60 to 80MB each and those queries were executed quite frequently, it doesn't take a genius to see what that does for IO and performance in general. The SAN could now cope with the number of IOs thrown at it (see the hockey-stick model).

There was, however, one other thing. These tables were also queried from remote databases, mainly one XE database running the web part of the application (APEX).
After the FBIs were implemented, some parts of the web app broke down. Looking at the user dumps, we found out that the remote server process crashed each time a table (or synonym) with an FBI on it was queried. Basically, we had an ORA-07445 in the XE database session each time a remote query was launched on the FBI tables :

ksedmp: internal or fatal error
ORA-07445: exception encountered: core dump [ACCESS_VIOLATION] [_ddfnetCFull+1379] [PC:0xFD1F13] [ADDR:0x8A10068] [UNABLE_TO_READ] []

A brief search on Metalink clarified that this has something to do with 'hidden columns'.
Now ... I could only reproduce the ORA-07445 when the execution plan on the remote DB used the function-based index.

So what happens when you create a function-based index ? Is this really a kind of index, or did the Oracle kernel developers add some special trickery to the table ? Or does a function-based index add a column to the table behind your back, materializing the result of the function, and index that ?

Let's look at the columns of table DUH without function based indexes :
SQL> select name,col#,intcol#,segcol#,property from sys.col$ where obj# = ( select obj# from sys.obj$ where name = upper('DUH') and type# = 2);

NAME                                COL#   INTCOL#   SEGCOL#  PROPERTY
------------------------------ --------- --------- --------- ---------
N                                      1         1         1         0
C                                      2         2         2         0

And let's dump a datablock of DUH (without fbi) to trace :

buffer tsn: 1 rdba: 0x00800821 (2/2081)
scn: 0x0000.00030bad seq: 0x01 flg: 0x02 tail: 0x0bad0601
frmt: 0x02 chkval: 0x0000 type: 0x06=trans data

Block header dump:  0x00800821
Object id on Block? Y
seg/obj: 0x2fce  csc: 0x00.30b73  itc: 1  flg: -  typ: 1 - DATA
fsl: 0  fnx: 0x0 ver: 0x01

Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   xid:  0x0009.004.0000001f    uba: 0x00c025af.0010.0a  --U-   45  fsc 0x0000.00030bad

data_block_dump
===============
tsiz: 0x7b8
hsiz: 0x6c
pbl: 0x0372a044
bdba: 0x00800821
flag=-----------
ntab=1
nrow=45
frre=-1
fsbo=0x6c
fseo=0x137
avsp=0xcb
tosp=0xcb
0xe:pti[0] nrow=45 offs=0
0x12:pri[0] offs=0x137
0x14:pri[1] offs=0x15c
0x16:pri[2] offs=0x181
...
0x6a:pri[44] offs=0x793
block_row_dump:
tab 0, row 0, @0x137
tl: 37 fb: --H-FL-- lb: 0x1 cc: 2
col  0: [ 2]  c1 02
col  1: [30]
31 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
20 20 20 20 20
tab 0, row 1, @0x15c
tl: 37 fb: --H-FL-- lb: 0x1 cc: 2
col  0: [ 2]  c1 30
col  1: [30]
34 37 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
20 20 20 20 20
tab 0, row 2, @0x181
tl: 37 fb: --H-FL-- lb: 0x1 cc: 2
col  0: [ 2]  c1 14
col  1: [30]
31 39 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
20 20 20 20 20
...
tab 0, row 44, @0x793
tl: 37 fb: --H-FL-- lb: 0x1 cc: 2
col  0: [ 2]  c1 35
col  1: [30]
35 32 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
20 20 20 20 20
end_of_block_dump

So - nothing exciting here !

Now let's do the same after we've created a function-based index on 'to_number(c)' :
SQL> select name,col#,intcol#,segcol#,property from sys.col$ where obj# = ( select obj# from sys.obj$ where name = upper('DUH') and type# = 2);

NAME                                COL#   INTCOL#   SEGCOL#  PROPERTY
------------------------------ --------- --------- --------- ---------
N                                      1         1         1         0
C                                      2         2         2         0
SYS_NC00003$                           0         3         0    327976

Hmm - a system generated column !
Property 327976 or 0x50128=0x40000+0x10000+0x00100+0x00020+0x00008
means
0x40000 = virtual column is NLS dependent
0x10000 = virtual column
0x00100 = system-generated column
0x00020 = hidden column
0x00008 = virtual column

So we have a hidden, system-generated, virtual column.
COL# is zero - this is consistent with the column being hidden.
And SEGCOL# is also zero - which tells us that the column is not materialized within the table's blocks, and that's consistent with the column being virtual.

You can find the property values in sql.bsq in $OH/rdbms/admin :
property      number not null,           /* column properties (bit flags): */
/* 0x0001 =       1 = ADT attribute column                   */
/* 0x0002 =       2 = OID column                             */
/* 0x0004 =       4 = nested table column                    */
/* 0x0008 =       8 = virtual column                         */
/* 0x0010 =      16 = nested table's SETID$ column           */
/* 0x0020 =      32 = hidden column                          */
/* 0x0040 =      64 = primary-key based OID column           */
/* 0x0080 =     128 = column is stored in a lob              */
/* 0x0100 =     256 = system-generated column                */
/* 0x0200 =     512 = rowinfo column of typed table/view     */
/* 0x0400 =    1024 = nested table columns setid             */
/* 0x0800 =    2048 = column not insertable                  */
/* 0x1000 =    4096 = column not updatable                   */
/* 0x2000 =    8192 = column not deletable                   */
/* 0x4000 =   16384 = dropped column                         */
/* 0x8000 =   32768 = unused column - data still in row      */
/* 0x00010000 =   65536 = virtual column                         */
/* 0x00020000 =  131072 = place DESCEND operator on top          */
/* 0x00040000 =  262144 = virtual column is NLS dependent        */
/* 0x00080000 =  524288 = ref column (present as oid col)        */
/* 0x00100000 = 1048576 = hidden snapshot base table column      */
/* 0x00200000 = 2097152 = attribute column of a user-defined ref */
/* 0x00400000 = 4194304 = export hidden column,RLS on hidden col */
/* 0x00800000 = 8388608 = string column measured in characters   */
/* 0x01000000 = 16777216 = virtual column expression specified    */
/* 0x02000000 = 33554432 = typeid column                          */
/* 0x04000000 = 67108864 = Column is encrypted                    */
/* 0x20000000 = 536870912 = Column is encrypted without salt       */
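The bit-mask decoding done by hand above can also be scripted - here's a small sketch using just the subset of sql.bsq flags relevant to this post:

```python
# Decode a col$.property bit mask using flag values taken from sql.bsq.
# Only the flags needed for the examples in this post are listed.
FLAGS = {
    0x00008: "virtual column",
    0x00020: "hidden column",
    0x00100: "system-generated column",
    0x10000: "virtual column",
    0x40000: "virtual column is NLS dependent",
}

def decode(prop):
    return [name for bit, name in FLAGS.items() if prop & bit]

print(hex(327976))     # 0x50128 - the 8i FBI column
print(decode(327976))  # hidden + system-generated + virtual (+NLS)
print(decode(327688))  # 0x50008 - the 11g virtual column: not hidden,
                       # not system-generated, still virtual (+NLS)
```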
         
Let's double check the column hasn't materialized by dumping a block.

buffer tsn: 1 rdba: 0x00800821 (2/2081)
scn: 0x0000.00030bad seq: 0x01 flg: 0x02 tail: 0x0bad0601
frmt: 0x02 chkval: 0x0000 type: 0x06=trans data

Block header dump:  0x00800821
Object id on Block? Y
seg/obj: 0x2fce  csc: 0x00.30b73  itc: 1  flg: -  typ: 1 - DATA
fsl: 0  fnx: 0x0 ver: 0x01

Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   xid:  0x0009.004.0000001f    uba: 0x00c025af.0010.0a  --U-   45  fsc 0x0000.00030bad

data_block_dump
===============
tsiz: 0x7b8
hsiz: 0x6c
pbl: 0x0372a044
bdba: 0x00800821
flag=-----------
ntab=1
nrow=45
frre=-1
fsbo=0x6c
fseo=0x137
avsp=0xcb
tosp=0xcb
0xe:pti[0] nrow=45 offs=0
0x12:pri[0] offs=0x137
0x14:pri[1] offs=0x15c
0x16:pri[2] offs=0x181
...
0x6a:pri[44] offs=0x793
block_row_dump:
tab 0, row 0, @0x137
tl: 37 fb: --H-FL-- lb: 0x1 cc: 2
col  0: [ 2]  c1 02
col  1: [30]
31 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
20 20 20 20 20
tab 0, row 1, @0x15c
tl: 37 fb: --H-FL-- lb: 0x1 cc: 2
col  0: [ 2]  c1 30
col  1: [30]
34 37 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
20 20 20 20 20
tab 0, row 2, @0x181
tl: 37 fb: --H-FL-- lb: 0x1 cc: 2
col  0: [ 2]  c1 14
col  1: [30]
31 39 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
20 20 20 20 20
...
tab 0, row 44, @0x793
tl: 37 fb: --H-FL-- lb: 0x1 cc: 2
col  0: [ 2]  c1 35
col  1: [30]
35 32 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
20 20 20 20 20
end_of_block_dump

Nope - the table still has 2 physical columns. Just what we expected.

Let's dump a leaf block from the function-based index... here we see the column is materialized.

buffer tsn: 1 rdba: 0x00800944 (2/2372)
scn: 0x0000.00030bc3 seq: 0x01 flg: 0x00 tail: 0x0bc30601
frmt: 0x02 chkval: 0x0000 type: 0x06=trans data

Block header dump:  0x00800944
Object id on Block? Y
seg/obj: 0x2fcf  csc: 0x00.30bbe  itc: 2  flg: -  typ: 2 - INDEX
fsl: 0  fnx: 0x0 ver: 0x01

Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   xid:  0x0000.000.00000000    uba: 0x00000000.0000.00  ----    0  fsc 0x0000.00000000
0x02   xid:  0x000b.00c.00000015    uba: 0x00000000.0000.00  ----    0  fsc 0x0000.00000000

Leaf block dump
===============
header address 57843804=0x372a05c
kdxcolev 0
kdxcolok 0
kdxcoopc 0x80: opcode=0: iot flags=--- is converted=Y
kdxconco 2
kdxcosdc 0
kdxconro 117
kdxcofbo 270=0x10e
kdxcofeo 472=0x1d8
kdxcoavs 202
kdxlespl 0
kdxlende 0
kdxlenxt 8390981=0x800945
kdxleprv 0=0x0
kdxledsz 0
kdxlebksz 1892
row#0[1880] flag: -----, lock: 0
col 0; len 2; (2):  c1 02
col 1; len 6; (6):  00 80 08 21 00 00
row#1[1868] flag: -----, lock: 0
col 0; len 2; (2):  c1 03
col 1; len 6; (6):  00 80 08 21 00 19
...
row#116[472] flag: -----, lock: 0
col 0; len 3; (3):  c2 02 11
col 1; len 6; (6):  00 80 08 23 00 1a
----- end of leaf block dump -----

So a function-based index is just a normal index on a hidden virtual column ;-)

Now, all this sounds a lot like ... Oracle 11g virtual columns !

Let's create a table in 11g with a virtual column ... just for fun.
create table duh11g ( n number, c char(30), v as (to_number(c)) ) ;

select name,col#,intcol#,segcol#,property from sys.col$ where obj# = ( select obj# from sys.obj$ where name = upper('DUH11G') and type# = 2)
SQL> /

NAME                                 COL#    INTCOL#    SEGCOL#   PROPERTY
------------------------------ ---------- ---------- ---------- ----------
N                                       1          1          1          0
C                                       2          2          2          0
V                                       3          3          0     327688

Property 327688 = 0x50008=0x40000+0x10000+0x00008
0x40000 = virtual column is NLS dependent
0x10000 = virtual column
0x00008 = virtual column

COL# is non-zero - so it is visible. Clearly it's not system-generated, but it isn't materialized because SEGCOL# is zero.
Now for fun - let's create an index on the virtual column V.

SQL> create index i_duh11g_v on duh11g( v ) ;

Index created.

And now dump a leafblock of the index :

Block header dump:  0x004159db
Object id on Block? Y
seg/obj: 0x11587  csc: 0x00.ba00b  itc: 2  flg: -  typ: 2 - INDEX
fsl: 0  fnx: 0x0 ver: 0x01

 Itl           Xid                  Uba         Flag  Lck        Scn/Fsc
0x01   0x0006.001.000002ed  0x00c0028d.01f3.01  -B--    1  fsc 0x0000.00000000
0x02   0x0009.002.000002b7  0x00c0168b.0216.04  ----  540  fsc 0x0000.00000000
Leaf block dump
===============
header address 682344540=0x28abc05c
kdxcolev 0
KDXCOLEV Flags = - - -
kdxcolok 1
kdxcoopc 0x87: opcode=7: iot flags=--- is converted=Y
kdxconco 2
kdxcosdc 1
kdxconro 540
kdxcofbo 1116=0x45c
kdxcofeo 1116=0x45c
kdxcoavs 0
kdxlespl 0
kdxlende 0
kdxlenxt 4282844=0x4159dc
kdxleprv 0=0x0
kdxledsz 0
kdxlebksz 8032
row#0[1116] flag: ----S-, lock: 2, len=12
col 0; len 2; (2):  c1 02
col 1; len 6; (6):  00 41 59 c2 00 00
row#1[1128] flag: ----S-, lock: 2, len=12
col 0; len 2; (2):  c1 03
col 1; len 6; (6):  00 41 59 c2 00 01
...
row#539[8019] flag: ----S-, lock: 2, len=13
col 0; len 3; (3):  c2 06 29
col 1; len 6; (6):  00 41 59 c4 00 ae
----- end of leaf block dump -----


So a normal index on a virtual column in 11g is the same thing as a function-based index in 8i.
And if you use an FBI in 8i, you have actually created a hidden virtual column. It doesn't add a physical column to your table, which of course would be very IO intensive and could have led to row migration (and possibly chaining).
So to get back to the bug - the workaround is to create a view on the table in the remote database and then query the view instead of the actual table or synonym.
This bug is fixed in 9.2.0.6 and 10g.... but then again ... we had to work with 8.1.7.4 ;-)

 

AXI at Ostend new year's plunge

It's been freezing here for the last two weeks … the hardest and longest cold spell since 1993 (where are those green guys now, telling me to cut my engine power because of global warming ?).

Anyway - the company I work for - AXI, an Oracle partner in Belgium and The Netherlands - decided to take part in the new year's plunge in Ostend.
Yes - you read that right - not a plunge in the sea at sunny Cape Town, or Sydney, Australia - but in freezing Ostend.

Of course this is not healthy at all - many doctors warned about it.
"The freezing cold is bad for heart and blood vessels. A big change in pressure or big changes in temperature can result in sudden heart failure, while it is dangerous for the lungs too."

Once that hit the news - and once a similar event was canceled in Holland - I knew the Belgian event would draw a larger crowd than ever, because, idiots that we are, we want to prove everybody wrong. And 6500 idiots actually went into the sea.

Me and the wife, both tactical thinkers, decided it was more opportune to provide the catering and marketing material. And as news crews would be on site, we decided to bring along a 2 by 3 meter (that's 6.5 by 10 feet) flag. We had a small logistical problem, as we needed to find poles that would keep it stretched and … not entirely unimportant, fit in our car. So we ended up buying some retractable poles - originally intended for fishing nets - from a fishing shop :-D
I must say - the company name was extremely visible, even in between a crowd of more than 6000 ;-)
Our first mission was accomplished.

It is an unwritten rule that if you do something as crazy and irresponsible as diving into 3°C sea water at freezing temperatures, you should drink some alcohol afterwards to … warm you up. It's the logical thing to do (if you're going for a heart attack, you should go all the way). And the weapon of choice around this time of the year is … jenever.

Because me and the wife were providing the catering, I was dragging along a backpack with 8 bottles of jenever… this meant I had … god-like status - I was Bacchus himself 8-)


Anyway - here are some snapshots

AXI group photo

Marketing at its finest

AXI FLAG

The smartest guy of the group - Bacchus, god of jenever and marketing materials - all warm and fully clothed !

kurt van meerbeeck

Hunting down partitioned IOT objectid's

I'm so glad you can't sub-partition an index-organized table (IOT). (no, we do not want this feature in 12g)

I was looking into my code for unloading partitioned IOTs.
Unloading IOTs (with or without overflow segments) is easy. It's keeping track of all the metadata that starts to get difficult.

DUDE allows you to unload tables as follows :

  • Table by table based on owner and tablename or objectid
  • Schema by schema - so based on schema/owner name
  • Tablespace by tablespace - and this is based on the data objectid of the table

You can basically define three different types of objectids :

  • There's the objectid itself, which identifies the object uniquely in most if not all base dictionary tables
  • Then there's the data objectid (or to use an expensive word, 'data layer objectid') - which is the objectid stored within the block itself. Mostly it equals the objectid, but if you, for example, move or truncate a table, it'll change. A data objectid can be NULL - for example, the table part of a partitioned table does not have a physical segment - it's the partitions that actually contain the data
  • And then there's the base objectid - which is the objectid of the base table of an object … or, to put it simply, a pointer to another object.

So - if you're unloading a complete tablespace, you are going through the data objectids found in the tablespace. And using those physical objectids, you have to work your way up to the table objectid using the metadata in the base dictionary tables.

Why ? Because you need the table definition (name, columns, datatypes etc.). This is quite simple though.

But wait - sometimes it can get messy. Let's imagine a partitioned IOT using overflow segments with <n> partitions.
What you get is the following meta data spread around several base dictionary tables :

  • 1 table : objectid is <t>, data objectid is null, base objectid is <o>
  • <n> table partitions : objectids are <t+1> … <t+n>, data objectid is null, base objectid is <t>
  • 1 index : objectid is <i>, data objectid is null, base objectid is <t>
  • <n> index partitions : objectids are <i+1> … <i+n>, data objectid equals the objectid initially, base objectid is <i>
  • 1 table (overflow) : objectid is <o>, data objectid is null, base objectid is <t>
  • <n> table partitions (overflow) : objectids are <o+1> … <o+n>, data objectid equals the objectid initially, base objectid is <o>

As you can see, the table is linked to the overflow table and vice versa using a base objectid.
The table partitions are linked to the table, the index is connected to the table, the index partitions are hooked up to the index, and the overflow table partitions are linked to the overflow table… all using base objectids. And only two types of objects have actual physical segments - the index partitions and the overflow table partitions. So when DUDE hits a data objectid of an IOT index partition, it can traverse up to the table object and get the definitions.
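The traversal described above can be sketched roughly as follows - the objectids and object map are hypothetical, and this is an illustration of the idea, not DUDE's actual code:

```python
# Walking from a data objectid found in a tablespace up to the owning IOT
# table, following the base objectid links laid out in the bullets above.
t, i, o = 100, 200, 300   # table, index and overflow objectids (made up)

# objectid -> (object kind, base objectid); one partition of each kind shown
objects = {
    t:     ("table", o),               # the IOT itself, linked to its overflow
    t + 1: ("table partition", t),
    i:     ("index", t),               # the IOT's index, linked to the table
    i + 1: ("index partition", i),     # has a physical segment
    o:     ("overflow", t),            # overflow table, linked back to the IOT
    o + 1: ("overflow partition", o),  # has a physical segment
}

def owning_table(objectid):
    """Follow base objectid links until we reach the top-level table."""
    kind, base = objects[objectid]
    while kind != "table":
        objectid = base
        kind, base = objects[objectid]
    return objectid

# both kinds of physical segment trace back to the same table objectid
print(owning_table(i + 1), owning_table(o + 1))   # 100 100
```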

Phew … the whole thing made me think of the Front 242 song 'Headhunter' - '1 - you lock the target - 2 - you bait the line - 3 - you slowly spread the net - 4 - you catch the … euh, objectid ?'

I'm sure there are a couple of Oracle nerds out there who can appreciate this clip from 1988 !

 




Howto's
See DUDE primer for info

Get Support

Europe

Belgium :
Kurt Van Meerbeeck
ORA600 bvba
E-mail
dude@ora600.be
Cell : +32 495 580714

Denmark :
Michael Môller
Service & Support Manager
Miracle AS
E-mail :
hra@miracleas.dk
Cell: +45 53 74 71 27


North America

USA :
Tim Gorman
Evdbt Inc
E-mail
tim@evdbt.com
Cell : +1 303 885 4526

Canada :
Pythian
E-mail
dude@pythian.com
Contact


Latin America

Brazil :
HBtec
E-mail
dude@hbtec.com.br
Cell : +55 47 88497639
Contact


Africa

South Africa :
Kugendran Naidoo
NRG Consulting
E-mail
k@nrgc.co.za
Cell : +27 82 7799275


East Asia Pacific

Australia
Andre Araujo
Pythian Australia
E-mail
dude@pythian.com
Cell : +61 2 9191 7427 ext. 1270