Remember your dreams


Wednesday, December 7, 2011

Unlock locked statistics

ORA-20005: object statistics are locked (stattype = ALL)
ORA-06512: at "SYS.DBMS_STATS", line 23154
ORA-06512: at "SYS.DBMS_STATS", line 23205…
 
To unlock locked statistics for a table, execute the following procedure:

exec DBMS_STATS.UNLOCK_TABLE_STATS('OWNER', 'TABLE_NAME');
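If many tables have locked statistics, you can generate the unlock calls with a small script rather than typing each one. This is a minimal sketch with placeholder owner and table names; it only prints the statements, which you would then run in SQL*Plus. (DBMS_STATS also provides UNLOCK_SCHEMA_STATS to unlock every table in a schema at once.)

```shell
#!/bin/sh
# Print DBMS_STATS.UNLOCK_TABLE_STATS calls for a list of tables.
# OWNER and the table list are placeholders -- substitute your own.
OWNER=USER1
for TAB in EMP DEPT BONUS
do
    echo "exec DBMS_STATS.UNLOCK_TABLE_STATS('${OWNER}', '${TAB}');"
done
```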

Read my post about how to prevent statistics from becoming locked after expdp impdp

Preventing locked Statistics after expdp impdp

Cannot gather table statistics after data pump import impdp


exec dbms_stats.gather_table_stats(ownname=>'USER1',tabname=>'EMP');

ORA-20005: object statistics are locked (stattype = ALL)
ORA-06512: at "SYS.DBMS_STATS", line 23154
ORA-06512: at "SYS.DBMS_STATS", line 23205…

To unlock table Statistics see the following post:
Unlocking Table Statistics

I ran into this issue when I was exporting a schema from one database to import into another.
I was not importing the rows, only the objects, and noticed that the statistics came over as well. When I tried to gather new statistics I received an error saying the statistics were locked. To prevent the statistics from being locked after the import I used the EXCLUDE=STATISTICS parameter in the export.

This way the statistics are ignored and will not be locked on the new database.


In this example I want the entire schema with the exception of certain tables. I do not want grants, rows, or statistics. Use the EXCLUDE=STATISTICS parameter to prevent your statistics from being locked once you complete the import.

expdp_user1_tables.par
 
USERID=
DUMPFILE=user1_tables
LOGFILE=user1_tables
SCHEMAS=USER1
EXCLUDE=TABLE:"IN ('CONTRACT_TYPES','TIME_CODES','LAB_BOOK_ACCESS')"
EXCLUDE=GRANT
EXCLUDE=TABLE_DATA
EXCLUDE=STATISTICS

Run expdp to export the schema.
Enter the following on one line:
dpdir> expdp parfile=expdp_user1_tables.par DIRECTORY=DATA_PUMP_DIR

FTP the dump file, user1_tables.dmp, to the new database server.

impdp_user1_tables.par
USERID=
LOGFILE=imp_user1_tables
DUMPFILE=user1_tables

dpdir> impdp parfile=impdp_user1_tables.par DIRECTORY=DATA_PUMP_DIR


Wednesday, February 16, 2011

Scheduler and Data Pump expdp

Oracle Enterprise Edition 11.2.0.2
Linux x86_64, RH 5

I routinely export some or all of my database as part of my DR strategy. I have always used crontab to call my expdp script; now I want to switch from crontab to the Oracle Scheduler.

First I will review the shell script and parfile I use. Then I will use Oracle Scheduler to schedule the execution of this shell script.

This is my full export; with minor modifications I also export specific schemas on a more frequent basis. I am using csh for this shell script.

This is expdp_full.sh

#!/bin/csh
# set the environment
setenv ORACLE_HOME /app/oracle/product/11.2.0.2
setenv ORACLE_SID grims
setenv ORACLE_BASE /app/oracle

setenv LOGFILE expdp_full.log
setenv DUMPFILE expdp_full.dmp
setenv DMPDIR /scripts/expdp

# expdp will fail if files it wants
# to create already exist, so clean
# up any leftovers just in case.

# check for existing logfile and
# remove if found.
if ( -e ${DMPDIR}/${LOGFILE} ) then
rm ${DMPDIR}/${LOGFILE}
endif

# check for existing dump file and
# remove if found.
if ( -e ${DMPDIR}/${DUMPFILE} ) then
rm ${DMPDIR}/${DUMPFILE}
endif

# change to the working directory
chdir ${DMPDIR}

# call expdp and use the full path to it.
/app/oracle/product/11.2.0.2/bin/expdp parfile=expdp_full.par DUMPFILE=${DUMPFILE} LOGFILE=${LOGFILE} DIRECTORY=DMPDIR

# once the expdp is complete zip it to save space.
# You can use the compress command in UNIX.
zip ${DUMPFILE}.`date +%m%d`.Z ${DUMPFILE}

# Get the log status for email.
set STATUS=`grep Job ${DMPDIR}/${LOGFILE}`

# mail results.

mailx -s "${ORACLE_SID} on `uname -n` FULL export '$STATUS'" email@company.com < ${DMPDIR}/${LOGFILE}

# delete the existing dump file. Zip does not remove it.
if ( -e ${DMPDIR}/${DUMPFILE} ) then
rm ${DMPDIR}/${DUMPFILE}
endif

# rename the log file to preserve it.
if ( -e ${DMPDIR}/${LOGFILE} ) then
mv ${DMPDIR}/${LOGFILE} ${DMPDIR}/${LOGFILE}.`date +%m%d`
endif
exit
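The grep Job step in the script relies on the summary line Data Pump writes at the end of its log (a line beginning with Job "..."). Here is a small self-contained check of that extraction against sample log text; the log lines are illustrative, not taken from a real run:

```shell
#!/bin/sh
# Simulate the tail of an expdp log and extract the job summary line,
# as the script above does with: set STATUS=`grep Job ${DMPDIR}/${LOGFILE}`
# The sample log text below is illustrative.
LOGFILE=/tmp/expdp_full_demo.log
cat > "$LOGFILE" <<'EOF'
Master table "SYS"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at 02:15:33
EOF
STATUS=$(grep Job "$LOGFILE")
echo "$STATUS"
rm -f "$LOGFILE"
```

Only the single Job line is captured, which is what ends up in the subject of the status email.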


The parfiles: one for the full export and one for specific schemas.
expdp_full.par
USERID="/ as sysdba"
FULL=Y

expdp_users.par
USERID="/ as sysdba"
SCHEMAS=SCOTT,HR,APPS

Of course you can stop here and use crontab to execute expdp_full.sh, but I am going to now use the Oracle Scheduler.

First I will create the scheduled job to run every morning at 2:05am. Once it runs successfully for a few days and I like the output, I will update the schedule to run this task once a week.

From sqlplus enter the new scheduled job.

BEGIN
  dbms_scheduler.CREATE_JOB (
    job_name        => 'expdp_full',
    job_type        => 'EXECUTABLE',
    job_action      => '/oracle/scripts/expdp/expdp_full.sh',
    start_date      => to_date('02/14/2011 14:00:00','mm/dd/yyyy hh24:mi:ss'),
    repeat_interval => 'FREQ=DAILY;BYHOUR=2;BYMINUTE=5;BYSECOND=0',
    end_date        => null,
    enabled         => TRUE,
    comments        => 'expdp full database');
END;
/

To update the frequency interval to once a week, Friday morning at 2:05am:

BEGIN
  sys.dbms_scheduler.disable( '"SYS"."EXPDP_FULL"' );
  sys.dbms_scheduler.set_attribute(
    name      => '"SYS"."EXPDP_FULL"',
    attribute => 'repeat_interval',
    value     => 'FREQ=WEEKLY;BYDAY=FRI;BYHOUR=2;BYMINUTE=5;BYSECOND=0');
  sys.dbms_scheduler.enable( '"SYS"."EXPDP_FULL"' );
END;
/
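The repeat_interval string uses the Scheduler's calendaring syntax. A few more generic repeat_interval values for reference, all keeping the same 2:05am run time:

```
FREQ=DAILY;BYHOUR=2;BYMINUTE=5;BYSECOND=0                  -- every day
FREQ=WEEKLY;BYDAY=FRI;BYHOUR=2;BYMINUTE=5;BYSECOND=0       -- every Friday
FREQ=MONTHLY;BYMONTHDAY=1;BYHOUR=2;BYMINUTE=5;BYSECOND=0   -- first day of each month
```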

Here are some useful queries to examine the scheduler jobs.

SQL> SELECT JOB_NAME, STATE FROM DBA_SCHEDULER_JOBS where job_name = 'EXPDP_FULL';
JOB_NAME                       STATE
------------------------------ ---------------
EXPDP_FULL                     SCHEDULED

SQL> SELECT * FROM ALL_SCHEDULER_RUNNING_JOBS;
no rows selected

The following shows the history of a specific job and whether it succeeded. Keep in mind that a SUCCEEDED status does not mean the script itself ran without errors; the second query below shows a case where this first query said SUCCEEDED yet the script failed to run.

SELECT to_char(log_date, 'DD-MON-YY HH24:MI:SS') TIMESTAMP, job_name,
job_class, operation, status FROM USER_SCHEDULER_JOB_LOG
WHERE job_name = 'EXPDP_FULL' ORDER BY log_date;

Show more detail about a job's history. This is the output from before I gave the full path to expdp in my shell script.

SELECT to_char(log_date, 'DD-MON-YY HH24:MI:SS') TIMESTAMP, job_name, status,
SUBSTR(additional_info, 1, 40) ADDITIONAL_INFO
FROM user_scheduler_job_run_details
WHERE job_name = 'EXPDP_FULL'
ORDER BY log_date;

14-FEB-11 14:02:02 EXPDP_FULL SUCCEEDED STANDARD_ERROR="expdp: Command not found
