Remember your dreams


Friday, June 4, 2010

Data Pump. Moving tables to new schema, new tablespace.

I want to move a series of tables from one schema to another. At the same time I want to move them to a new tablespace. Data Pump is great for doing this.

The following Data Pump parameters make this possible. You can remap more than one tablespace at a time.

First export the tables.

Export parfile example (expdp_joe_tables.par):
USERID='/ as sysdba'
TABLES=(JOE.TABLE1, JOE.TABLE2, JOE.TABLE3)
DUMPFILE=joe_tables
LOGFILE=expdp_joe_tables

-- Run the expdp command
# expdp parfile=expdp_joe_tables.par

Import parfile example (impdp_joe_tables.par):
USERID='/ as sysdba'
TABLES=(JOE.TABLE1, JOE.TABLE2, JOE.TABLE3)
REMAP_SCHEMA=JOE:SAM
REMAP_TABLESPACE=USERJOE1:USERSAM1,USERJOE2:USERSAM2
DUMPFILE=joe_tables
LOGFILE=impdp_joe_tables

Import the tables to a new schema and tablespace.

-- Run the impdp command
# impdp parfile=impdp_joe_tables.par
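Since the export and import parfiles share the same table list, I find it handy to generate both from one script so they cannot drift apart. A sketch using the names from this post (adjust the schemas and tablespaces to your own):

```shell
#!/bin/sh
# Sketch: write both Data Pump parfiles from a single table list so the
# export and import definitions always match. Names are the ones used
# in this post, not a recommendation.
TABLES="JOE.TABLE1, JOE.TABLE2, JOE.TABLE3"

cat > expdp_joe_tables.par <<EOF
USERID='/ as sysdba'
TABLES=($TABLES)
DUMPFILE=joe_tables
LOGFILE=expdp_joe_tables
EOF

cat > impdp_joe_tables.par <<EOF
USERID='/ as sysdba'
TABLES=($TABLES)
REMAP_SCHEMA=JOE:SAM
REMAP_TABLESPACE=USERJOE1:USERSAM1,USERJOE2:USERSAM2
DUMPFILE=joe_tables
LOGFILE=impdp_joe_tables
EOF
```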

Tuesday, April 13, 2010

Using DBMS_JOB in a trigger

How to call DBMS_JOB from within a trigger.


I ran into a case where I needed to perform multiple DML statements from within a trigger. Doing them directly in the trigger proved unacceptable, since it made the original insert/update unreasonably slow. The solution was to write a procedure to do the related inserts/updates and call that procedure from within the trigger using DBMS_JOB.


My extreme case required 131 inserts/updates on an aggregate table for every insert/update on the data table. Using a cursor and looping through the inserts synchronously took over a minute and a half; handing the same work to DBMS_JOB took about a second and a half, because the trigger only queues the job and returns. Each call to DBMS_JOB.SUBMIT queues a job that runs in its own background session, so several can run simultaneously. Note that the submission is transactional: the job becomes visible and runs only after the triggering transaction commits.


Example of a trigger using DBMS_JOB to call a procedure:


CREATE OR REPLACE TRIGGER HR.AR$EMPLOYEES
AFTER INSERT OR UPDATE OF EMP_NAME
ON HR.EMPLOYEES
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
DECLARE
  v_id  NUMBER;
  l_job NUMBER;
BEGIN
  v_id := :NEW.EMP_ID;
  dbms_job.submit(l_job, 'HR.POPULATION(' || v_id || ');');
END;
/
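The HR.POPULATION procedure is what the queued jobs actually execute. It is not shown in the post, so here is a hypothetical skeleton; the aggregate table, columns, and MERGE logic are placeholders, not the original code:

```sql
-- Hypothetical skeleton of the worker procedure the jobs call.
-- hr.emp_aggregates and its columns are illustrative names.
CREATE OR REPLACE PROCEDURE HR.POPULATION (p_emp_id IN NUMBER) AS
BEGIN
  MERGE INTO hr.emp_aggregates a
  USING (SELECT emp_id, COUNT(*) cnt
           FROM hr.jobs
          WHERE emp_id = p_emp_id
          GROUP BY emp_id) s
     ON (a.emp_id = s.emp_id)
   WHEN MATCHED THEN UPDATE SET a.job_count = s.cnt
   WHEN NOT MATCHED THEN INSERT (emp_id, job_count)
                         VALUES (s.emp_id, s.cnt);
  COMMIT;  -- the job runs in its own session and must commit its work
END;
/
```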


Example of a trigger that loops through a cursor and submits a job for each row returned:



CREATE OR REPLACE TRIGGER HR.AUR$EMPLOYEES
AFTER UPDATE OF EMP_NAME
ON HR.EMPLOYEES
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
DECLARE
  v_id  NUMBER;
  l_job NUMBER;
BEGIN
  FOR content_rec IN (SELECT emp_id
                        FROM HR.JOBS
                       WHERE EMP_NAME = :NEW.EMP_NAME)
  LOOP
    v_id := content_rec.emp_id;
    dbms_job.submit(l_job, 'HR.POPULATION(' || v_id || ');');
  END LOOP;
END;
/

Friday, January 15, 2010

Oracle archive log directory fills up.

ORA-00257: archiver error. Connect internal only, until freed.
There are a number of reasons you might fill up your archive log directory. If this happens, your database will hang until space is freed. You quickly log into the server, delete a number of archive logs, and voila, problem resolved. Ah ah... not so fast. If you use RMAN to back up your database, your next backup will throw errors because the catalog will be out of sync with the archive logs actually on disk.

These are the steps I take:
  1. Delete the archive logs from one or more directories.
  2. Run the crosscheck RMAN command.
  3. Backup your database.
This is the RMAN command to run after manually deleting your archive logs.

# rman target / catalog rman/rman@rman
RMAN> change archivelog all crosscheck;
.
.
.
Crosschecked 598 objects
RMAN> exit

Run a hot backup immediately after this.
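Step 1 can be scripted; here is a sketch that removes archived logs older than seven days. The directory and the .arc suffix are assumptions, so point ARCH_DIR at your real log_archive_dest and match the file pattern before trusting this with production logs:

```shell
#!/bin/sh
# Sketch: clear archived logs older than 7 days from the archive
# destination. Path and suffix are placeholders, not from the post.
ARCH_DIR=${ARCH_DIR:-/tmp/demo_arch}
mkdir -p "$ARCH_DIR"
find "$ARCH_DIR" -type f -name '*.arc' -mtime +7 -print -exec rm {} \;
```

Remember this only clears the disk; the crosscheck is still required to bring the RMAN catalog back in sync.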

Wednesday, January 13, 2010

Oracle RMAN Backup to disk.

Oracle 10gR2 on Solaris.

I am using this method to back up a development database and delete the archive logs on disk. Normally all backups go to tape, but on occasion I need to back up a development database, and backup to disk is the most effective solution for a quick backup.

# rman target / catalog login/password@rman
.
.
connected to recovery catalog database

I always crosscheck the archive logs to be sure they are in sync with the catalog.

RMAN> change archivelog all crosscheck;
starting full resync of recovery catalog
full resync complete
.
.
RMAN>

To speed things up I like to use four channels for the backup. I allocate the disk channels inside the run block so the persistent RMAN configuration is not overridden.

RMAN> run {
2> ALLOCATE CHANNEL disk1 DEVICE TYPE DISK
3> FORMAT '/oracle_backups/DEV/rman/backups/DEV_%T_%U';
4> ALLOCATE CHANNEL disk2 DEVICE TYPE DISK
5> FORMAT '/oracle_backups/DEV/rman/backups/DEV_%T_%U';
6> ALLOCATE CHANNEL disk3 DEVICE TYPE DISK
7> FORMAT '/oracle_backups/DEV/rman/backups/DEV_%T_%U';
8> ALLOCATE CHANNEL disk4 DEVICE TYPE DISK
9> FORMAT '/oracle_backups/DEV/rman/backups/DEV_%T_%U';
10> BACKUP DATABASE PLUS ARCHIVELOG DELETE INPUT;
11> }

After the backup has completed I want to clean up prior backups still on disk and in the catalog.
Maintenance commands for crosschecking and deleting expired backups:

RMAN> ALLOCATE CHANNEL FOR MAINTENANCE DEVICE TYPE DISK;
RMAN> CROSSCHECK BACKUP;
RMAN> DELETE NOPROMPT EXPIRED BACKUP;
RMAN> DELETE NOPROMPT OBSOLETE DEVICE TYPE DISK;
RMAN> CROSSCHECK ARCHIVELOG ALL;
RMAN> DELETE NOPROMPT EXPIRED ARCHIVELOG ALL;

The last two commands should return no output if the archive logs were deleted during the backup.
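DELETE OBSOLETE decides what to remove based on the configured retention policy, so it is worth confirming that policy before letting NOPROMPT loose. A quick check (REDUNDANCY 1 here is just an example value, not a recommendation):

```
RMAN> SHOW RETENTION POLICY;
RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 1;
```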

Thursday, October 1, 2009

Veritas NetBackup - querying your media

To see which tapes you currently have in your storage unit, which pool each is assigned to, which are full, and how much of the others has been used:

From the Master Server, change directories to goodies and run the available media command:

# cd /usr/openv/netbackup/bin/goodies
# ./available_media

media   media  robot  robot  robot  side/  ret    size        status
ID      type   type   #      slot   face   level  KBytes
---------------------------------------------------------------------

CatalogBackup pool

xxx151 HCART TLD 0 47 - 3 69689341 ACTIVE
xxx152 HCART TLD 0 46 - 3 177652257 ACTIVE

NetBackup pool

xxx030 HCART TLD 0 28 - 6 477197502 ACTIVE/MPX
xxx139 HCART TLD 0 126 - 4 286852847 ACTIVE/MPX
xxx161 HCART TLD 0 59 - 4 58710487 ACTIVE/MPX
xxx195 HCART NONE - - - - - AVAILABLE
xxx007 HCART TLD 0 29 - 4 1439675239 FULL/MPX
xxx072 HCART TLD 0 128 - 4 1188893711 FULL/MPX
xxx134 HCART TLD 0 40 - 4 1287883499 FULL/MPX
xxx136 HCART TLD 0 100 - 4 1119822897 FULL/MPX
xxx137 HCART TLD 0 139 - 4 1365587550 FULL/MPX
xxx138 HCART TLD 0 140 - 4 1244368634 FULL/MPX
xxx144 HCART TLD 0 57 - 4 1101532193 FULL/MPX

Scratch pool

xxx162 HCART TLD 0 60 - - - AVAILABLE
xxx163 HCART TLD 0 109 - - - AVAILABLE
xxx164 HCART TLD 0 102 - - - AVAILABLE
xxx165 HCART TLD 0 111 - - - AVAILABLE
xxx166 HCART TLD 0 4 - - - AVAILABLE
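When I only want the free tapes, a quick awk filter over that output does the job. In real use you would pipe /usr/openv/netbackup/bin/goodies/available_media into the awk; here a captured sample of the listing above stands in for the live command:

```shell
#!/bin/sh
# Sketch: print only the media IDs whose status column is AVAILABLE.
# The heredoc is sample available_media output, not live data.
awk '$NF == "AVAILABLE" { print $1 }' <<'EOF'
xxx030 HCART TLD 0 28 - 6 477197502 ACTIVE/MPX
xxx195 HCART NONE - - - - - AVAILABLE
xxx007 HCART TLD 0 29 - 4 1439675239 FULL/MPX
xxx162 HCART TLD 0 60 - - - AVAILABLE
EOF
```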

Veritas NetBackup - Insert media failed: EMM database error (196)

Update failed: could not add new media ID 'XYZ166' into slot 4
Insert media failed:
EMM database error (196)


I am on NetBackup 6.5.4
While attempting to add new media to my tape silo I encountered the above error. Fortunately the problem was easily resolved, but not before I spent most of the day researching and on the phone with technical support.
What causes this error? It can appear when tapes inventoried by a previous version of NetBackup are still present after the master server application has been upgraded or re-installed. This was a new installation of NetBackup 6.5.4, and I was reusing unused tapes previously inventoried under the former 6.5.2 install, which itself had been upgraded from version 5.2. The error only occurs because I am adding new tapes to the inventory while tapes (used or not) inventoried at some point by a previous version are already in the silo. Why? I believe this is unique to the circumstances, and obviously a bug, but fortunately there is a solution.
On the Master Server type the following (exactly as shown and all on the same line):

# cd /opt/openv/netbackup/bin
# ./nbpushdata -add_flag NBPUSHDATA_G_GLOBDB_HOST dummyhostname:dummyhostname

This command only needs to be executed once. You do not need to restart the Netbackup services. Inventory the robot again and you should have success.

Tuesday, September 1, 2009

Moving to a new domain. Oracle Database.

Moving your Oracle database instance to a new domain? I recently moved several Oracle database servers to a new physical location. We changed domains and IP addresses in the move.

This is what I needed to do to update the production databases to the new domain.

Update the host information in both listener.ora and tnsnames.ora.

Edit

$TNS_ADMIN/listener.ora
$TNS_ADMIN/tnsnames.ora

Update Global Names:
Changing the Domain in a Global Database Name

Use the ALTER DATABASE statement to change the domain in a database global name. Note that after the database is created, changing the initialization parameter DB_DOMAIN has no effect on the global database name or on the resolution of database link names.

The following example shows the syntax for the renaming statement, where database is a database name and domain is the network domain:

SQL> ALTER DATABASE RENAME GLOBAL_NAME TO SID.DOMAIN.COM;


SQL> SELECT * FROM GLOBAL_NAME;

GLOBAL_NAME
----------------------------------------------------------------------------
SID.DOMAIN.COM


Updating DB_DOMAIN

SQL> show parameters db_domain

db_domain string OLD_DOMAIN.COM

SQL> alter system set db_domain='NEW_DOMAIN.COM' scope=spfile;

SQL>

SQL> shutdown immediate;

Database closed.

Database dismounted.

ORACLE instance shut down.

SQL> startup
ORACLE instance started.


Verify the DB_DOMAIN was updated.

SQL> show parameters db_domain
db_domain string new_domain.com

SERVICE_NAMES - The service name only needs to be updated if you have set a fixed value in the init.ora file. If you have not set it, it automatically defaults to the SID plus DB_DOMAIN, which should match your global name.

SQL> show parameters SERVICE_NAME
service_names string SID.domain.com
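As an alternative to the pfile round-trip below, if the value lives in the spfile it can be reset directly; a sketch, assuming a 10g-style spfile (verify on a test instance first):

```
SQL> ALTER SYSTEM RESET service_names SCOPE=SPFILE SID='*';
SQL> shutdown immediate;
SQL> startup
SQL> show parameters service_names
```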

Renaming SERVICE_NAMES
SQL> create pfile from spfile;

SQL> shutdown immediate;
Database closed.

Database dismounted.
ORACLE instance shut down.

Edit init.ora and remove the SERVICE_NAMES line from the file.

SQL> startup pfile='/app/oracle/product/10.2/dbs/initSID.ora'

SQL> show parameters SERVICE_NAME
SQL> create spfile from pfile='/app/oracle/product/10.2/dbs/initSID.ora';
SQL> shutdown immediate;

SQL> startup
ORACLE instance started.

SQL> create pfile from spfile;
SQL> show parameters SERVICE_NAME;

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
service_names string SID.DOMAIN.COM
SQL>

Database Links -

Drop and create all database links used to connect to any database with an updated domain.
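To find every link affected, the data dictionary can be queried first. The link name and credentials below are hypothetical, shown only to illustrate the drop/create pattern:

```sql
-- List all database links in the instance.
SELECT owner, db_link, host FROM dba_db_links;

-- Rebuild a link that pointed at the old domain (names are examples).
DROP DATABASE LINK mylink.OLD_DOMAIN.COM;
CREATE DATABASE LINK mylink.NEW_DOMAIN.COM
  CONNECT TO scott IDENTIFIED BY tiger
  USING 'SID.NEW_DOMAIN.COM';
```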
