I haven’t made any improvements to Phat lately. I mostly work on it when I’m dealing with performance problems, adding a few new features here and there that come in handy right away. But I haven’t had to solve any performance problems in the last few months. Last week I did find a problem that started out as a small performance issue, but it turned out to be something else entirely.
The complaint was that a batch job was not performing well, and one thing led to another. The queries involved were of the simplest possible form:
- select <columns> from <table> where unique_id = <value>
- delete from <table> where unique_id = <value>
The explain plan showed that a full table scan was being performed, even though a unique index existed on the one column used as the search predicate. Looking at the size of the table and the index made it clear that the index should be used: reading a few blocks in the index plus one block from the table is much more efficient than reading the entire 20+ MB table. So, just to make sure:
- Index rebuild
- Refresh statistics
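For reference, these checks look roughly like this — the table, index and schema names here are invented for the example, the real ones were different:

```sql
-- Rebuild the unique index (illustrative names)
ALTER INDEX batch_tab_ux REBUILD;

-- Refresh optimizer statistics for the table and its indexes
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP', tabname => 'BATCH_TAB', cascade => TRUE);

-- Check what plan the optimizer picks for the query
EXPLAIN PLAN FOR
  SELECT col1, col2 FROM batch_tab WHERE unique_id = :b1;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```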
No change. Putting an index hint on the queries showed two strange things:
- Cost was lower, so why did the optimizer not choose the index plan?
- The query used an ‘index full scan’. Hello? It’s a one-column index, and we are not doing a count or anything like that.
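The hinted variant looked something like this (again with invented names); on a unique index with an equality predicate you would expect an ‘index unique scan’, not a full scan of the index:

```sql
-- Force the optimizer to consider the unique index
SELECT /*+ INDEX(t batch_tab_ux) */ col1, col2
FROM   batch_tab t
WHERE  unique_id = :b1;
```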
I decided to make a copy of the table in another schema and created a similar index on it. It showed the same behaviour, so that did not give many clues. I tried a 10053 trace, despite the fact that I’m not very familiar with it yet. It showed all the variables the optimizer used, and a remarkably short analysis of possible plans. It looked like it calculated only one plan, which was strange.
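For those who haven’t used it: a 10053 trace is switched on per session, after which you run the problem statement and read the resulting trace file from the user dump destination. A minimal sketch:

```sql
-- Enable the optimizer trace event for this session
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

-- ... run the problem query here; the optimizer's
-- reasoning is written to a file in user_dump_dest ...

-- Switch the trace off again
ALTER SESSION SET EVENTS '10053 trace name context off';
```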
The next experiment I thought of took this problem in a different direction: I decided to export the table to my own test database. Running the export gave this error:
Connected to: Oracle9i Enterprise Edition Release 184.108.40.206.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 220.127.116.11.0 - Production
Export done in AL32UTF8 character set and UTF8 NCHAR character set
About to export the entire database ...
. exporting tablespace definitions
. exporting profiles
. exporting user definitions
. exporting roles
. exporting resource costs
. exporting rollback segment definitions
. exporting database links
. exporting sequence numbers
. exporting directory aliases
. exporting context namespaces
. exporting foreign function library names
. exporting PUBLIC type synonyms
EXP-00008: ORACLE error 6552 encountered
ORA-06552: PL/SQL: Compilation unit analysis terminated
ORA-06553: PLS-553: character set name is not recognized
EXP-00000: Export terminated unsuccessfully
This looked strange, so I checked for invalid objects immediately. I found 1000+ invalid SYS objects. I tried to fix it:
- utlrp.sql left dozens of invalid KU$ views
- catalog.sql, catproc.sql, utlrp.sql still left them invalid
- compiling an individual KU$ view gave ORA-06552, with a message like ‘type CHAR should be CHAR’
- select on dba_registry showed that catalog was invalid
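The checks above are simple data dictionary queries, along these lines:

```sql
-- Count invalid objects per schema
SELECT owner, COUNT(*)
FROM   dba_objects
WHERE  status = 'INVALID'
GROUP  BY owner;

-- Status of the installed components (CATALOG, CATPROC, ...)
SELECT comp_id, version, status
FROM   dba_registry;
```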
What had happened? A ‘grep ORA-’ on the alert log revealed the cause: a ‘CREATE DATABASE’ had been run against this instance. The environment must have been set wrongly while someone was creating a new database, so it ran against this existing database and corrupted the catalog. Strange that a rerun of ‘catalog.sql’ did not fix the problem.
In the end, a restore of the database was decided on. After that the queries used the good old ‘index range scan’ again, and the export worked once more.