Before you start using Phat to do some analysis, it’s important to be aware of a few things.
Running the Phat queries on top of the Phat views and Perfstat tables isn’t a lightweight operation. We are not talking about a few OLTP queries here; this is almost heavy warehousing on a schema! Most queries use full table scans, on tables that might be large. So be careful doing your analysis on a live production database. If you are not sure (yet) about what you are doing, consider the following things:
- Do testing of Phat on a test database to get to know what impact it has before starting your analysis on a production database.
- Do an export & import every night from the production database to a test database, so there’s a safe place for you to do serious querying.
- Do your analysis after work hours, when nobody is bothered by the load you will put on the database.
- Keep your PERFSTAT schema small. This means you need a cleaning mechanism in place that removes the oldest PERFSTAT snapshots; a 2- or 3-week timespan available in PERFSTAT might do the job. And keep in mind when cleaning: a DELETE on STATS$SNAPSHOT removes a lot of rows via CASCADE DELETE, but tables like STATS$SQLTEXT, STATS$SQL_PLAN, STATS$UNDOSTAT and some others never get cleaned. Take a look at this site for more information and a cleaning procedure. A complete reinstall of PERFSTAT might also do the nasty trick, but what about your precious performance data?
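As a sketch, a simple purge could look like this (assuming the default schema name PERFSTAT and a 14-day retention window; the orphaned tables mentioned above still need separate treatment):

```sql
-- Run as PERFSTAT (or a DBA). Removes snapshots older than 14 days;
-- the ON DELETE CASCADE foreign keys clean up most child tables,
-- but STATS$SQLTEXT, STATS$SQL_PLAN and STATS$UNDOSTAT are not touched.
DELETE FROM perfstat.stats$snapshot
 WHERE snap_time < SYSDATE - 14;
COMMIT;
```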
- Make sure you have some recent statistics present for your PERFSTAT schema. In some databases only a GATHER_TABLE_STATS is done on the application schemas, and PERFSTAT is forgotten. The absence of statistics on the PERFSTAT schema has a big impact on the Phat queries.
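A minimal sketch of gathering those statistics (again assuming the default schema name PERFSTAT):

```sql
-- Run as a DBA; gathers optimizer statistics for the whole PERFSTAT schema,
-- including indexes (cascade => TRUE).
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'PERFSTAT', cascade => TRUE);
END;
/
```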
Another important note is about STATS$SQL_SUMMARY. This PERFSTAT table gets filled with rows from V$SQL. That view is just a presentation of cache content, and you know the thing with a cache: items get thrown out when space is needed for a new one. This is especially likely to happen when you are analyzing a system where bind variables are not used very much, or when you are facing a small shared pool. So: the content of Phat views based on STATS$SQL_SUMMARY might not be complete. They are just as incomplete as if you were generating a complete Statspack report and looking at the SQL lists. Options to prevent this:
- Take the Perfstat snapshots at a higher frequency, and accept the accompanying performance penalty.
- Buy the true big-bucks direct memory performance tools. These can take multiple samples within a second, so all information will be recorded.
There are a few steps to take to get Phat up and running.
You have read the information on this site very well, so you know Phat is based on Statspack. So make sure the PERFSTAT schema is installed and complete. The 10g manual seems not to be generous with information, maybe because Oracle is pushing the AWR functionality and good old PERFSTAT does not benefit Oracle. So check out the old 9i manual for more information: http://download-east.oracle.com/docs/cd/B10501_01/server.920/a96533/statspac.htm#21605.
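A quick sanity check for a complete installation might be a query like this (a sketch, assuming the default schema name):

```sql
-- Should run without errors and return the number of snapshots taken so far.
SELECT COUNT(*) AS snapshot_count
  FROM perfstat.stats$snapshot;
```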
Create a job of some kind to collect the Statspack data. DBMS_JOB might do the trick:
- Make sure JOB_QUEUE_PROCESSES is 1 or higher.
- GRANT CREATE JOB TO PERFSTAT;
- Run as PERFSTAT:

DECLARE
  x NUMBER;
BEGIN
  SYS.DBMS_JOB.SUBMIT
  ( job       => x
  , what      => 'statspack.snap;'
  , next_date => TRUNC(SYSDATE + 1/24, 'HH')
  , interval  => 'TRUNC(SYSDATE + 1/24, ''HH'')'
  , no_parse  => TRUE
  , instance  => 1
  , force     => TRUE
  );
  COMMIT;
  SYS.DBMS_OUTPUT.PUT_LINE('Job Number is: ' || TO_CHAR(x));
END;
/
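Afterwards you can verify that the job was registered, for example with a query on DBA_JOBS along these lines:

```sql
-- Run as a DBA: list the Statspack snapshot job(s) and their schedule.
SELECT job, what, interval, next_date
  FROM dba_jobs
 WHERE LOWER(what) LIKE 'statspack.snap%';
```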
Phat views creation
You have downloaded the latest version of Phat, so now it’s time to create the Phat views. You have to choose one of these create scripts:
You have to use the one that suits your database version. It can be run in SqlPlus, Toad or maybe some other tools. It’s important that the tool used interprets the SqlPlus DEFINE and ACCEPT commands correctly.
Start the script when logged on as SYS. The script will then ask you to fill in two things: the name of your PERFSTAT schema and the timespan you want to investigate using Phat. Let’s explain why these two parameters are asked for:
- Owner: This might be handy when you use another schema name when doing an export & import of the PERFSTAT schema from a production to a test database.
- Days max window: you might discover an existing PERFSTAT schema where you want to test out Phat. The Phat queries might become very slow if this schema has too many snapshots. Purging can be a nasty task, and you want to get started right away, so setting the time window of your Phat investigation to 14 days or so might do the trick.
Note: up to 10R1 a table STATS$IDLE_EVENT is created in the PERFSTAT schema; it can be used to filter out the idle events when querying stats$system_event. This table is not present anymore in 10R2; instead, WAIT_CLASS=’Idle’ can be used on v$event_name to get a list of idle events. So in create10g.sql a view STATS$IDLE_EVENT is created on v$event_name. The creation of this view will fail with ORA-00955 when used on 10R1. This is no problem: Phat will work fine, and all other views should compile without trouble.
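The replacement view could be sketched like this (an assumption of what create10g.sql does, with the column named EVENT to match the old table):

```sql
-- 10R2: derive the idle events from v$event_name.
-- Fails with ORA-00955 on 10R1, where the real table already exists.
CREATE OR REPLACE VIEW perfstat.stats$idle_event AS
SELECT name AS event
  FROM v$event_name
 WHERE wait_class = 'Idle';
```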
But I’m starting to get the feeling that separate create10R1.sql and create10R2.sql scripts will be needed later on. There might be too many differences between the 10R1 and 10R2 PERFSTAT schemas.
Your first Phat result
Ok, we’ve got things set up. Now it’s time to see a first result. Let’s use the general_analys.sql sheet, which is filled with queries ready to use. I’ve loaded it in Toad and run the first query:
I have to save the data to the clipboard: Grid -> Save As
Paste in Excel, and select 3 columns: snap_time, cpu_s and wait_s.
Make a graph of it all:
. . . and Phat has revealed information from your PERFSTAT schema that you have not seen before! The graph above shows a few numbers from over 400 snapshots!
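For reference, a query of this kind could be sketched as follows. This is a hypothetical example, not the actual first query of general_analys.sql: it derives CPU seconds from the 'CPU used by this session' statistic (reported in centiseconds) and non-idle wait seconds from STATS$SYSTEM_EVENT (microseconds), taking deltas between consecutive snapshots:

```sql
-- Sketch only: assumes a single instance and ignores instance restarts,
-- which would make the deltas go negative at the restart snapshot.
SELECT sn.snap_time,
       (c.value  - LAG(c.value)  OVER (ORDER BY sn.snap_id)) / 100 AS cpu_s,
       (w.waited - LAG(w.waited) OVER (ORDER BY sn.snap_id)) / 1e6 AS wait_s
  FROM perfstat.stats$snapshot sn
  JOIN perfstat.stats$sysstat c
       ON  c.snap_id = sn.snap_id
       AND c.name    = 'CPU used by this session'
  JOIN (SELECT snap_id, SUM(time_waited_micro) AS waited
          FROM perfstat.stats$system_event
         WHERE event NOT IN (SELECT event FROM perfstat.stats$idle_event)
         GROUP BY snap_id) w
       ON  w.snap_id = sn.snap_id
 ORDER BY sn.snap_time;
```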