Remember your dreams

Thursday, October 1, 2009

Veritas NetBackup - querying your media

You can see which tapes you currently have in your storage unit, which pool they are assigned to, which ones are full, and how much space is used on the others.

From the Master Server, change to the goodies directory and run the available_media script:

# cd /usr/openv/netbackup/bin/goodies
# ./available_media

media   media   robot   robot   robot   side/   ret     size       status
 ID     type    type      #     slot    face    level   KBytes
-----------------------------------------------------------------------------------------------

CatalogBackup pool

xxx151 HCART TLD 0 47 - 3 69689341 ACTIVE
xxx152 HCART TLD 0 46 - 3 177652257 ACTIVE

NetBackup pool

xxx030 HCART TLD 0 28 - 6 477197502 ACTIVE/MPX
xxx139 HCART TLD 0 126 - 4 286852847 ACTIVE/MPX
xxx161 HCART TLD 0 59 - 4 58710487 ACTIVE/MPX
xxx195 HCART NONE - - - - - AVAILABLE
xxx007 HCART TLD 0 29 - 4 1439675239 FULL/MPX
xxx072 HCART TLD 0 128 - 4 1188893711 FULL/MPX
xxx134 HCART TLD 0 40 - 4 1287883499 FULL/MPX
xxx136 HCART TLD 0 100 - 4 1119822897 FULL/MPX
xxx137 HCART TLD 0 139 - 4 1365587550 FULL/MPX
xxx138 HCART TLD 0 140 - 4 1244368634 FULL/MPX
xxx144 HCART TLD 0 57 - 4 1101532193 FULL/MPX

Scratch pool

xxx162 HCART TLD 0 60 - - - AVAILABLE
xxx163 HCART TLD 0 109 - - - AVAILABLE
xxx164 HCART TLD 0 102 - - - AVAILABLE
xxx165 HCART TLD 0 111 - - - AVAILABLE
xxx166 HCART TLD 0 4 - - - AVAILABLE
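
To pull just the tapes in a given state out of this listing, the output can be piped through grep; a quick sketch:

# ./available_media | grep AVAILABLE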

Veritas NetBackup - Insert media failed: EMM database error (196)

Update failed: could not add new media ID 'XYZ166' into slot 4
Insert media failed:
EMM database error (196)


I am on NetBackup 6.5.4. While attempting to add new media to my tape silo I encountered the above error. Fortunately the problem was easily resolved, but not before I spent most of the day researching and on the phone with technical support.
What causes this error? It can occur when tapes that were inventoried by a previous version of NetBackup are still present after the master server application has been upgraded or re-installed. This was a new installation of NetBackup 6.5.4, and I am using unused tapes previously inventoried under the former install of 6.5.2, which itself had been upgraded from version 5.2. The error occurs only because I am adding new tapes to the inventory while existing tapes (used or not) already in the silo were inventoried once upon a time by a previous version. Why? I believe this is unique to the circumstances, and obviously a bug, but fortunately there is a solution.
On the Master Server type the following (exactly as shown and all on the same line):

# cd /opt/openv/netbackup/bin
# ./nbpushdata -add_flag NBPUSHDATA_G_GLOBDB_HOST dummyhostname:dummyhostname

This command only needs to be executed once, and you do not need to restart the NetBackup services. Inventory the robot again and you should have success.

Tuesday, September 1, 2009

Moving to a new domain. Oracle Database.

Moving your Oracle database instance to a new domain? I recently moved several Oracle database servers to a new physical location. We changed domains and IP addresses in the move.

This is what I needed to do to update the production databases to the new domain.

Update the host information in both listener.ora and tnsnames.ora.

Edit

$TNS_ADMIN/listener.ora
$TNS_ADMIN/tnsnames.ora
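
As a sketch of what changes, a tnsnames.ora entry needs its HOST (and the service name, if fully qualified) pointed at the new domain; the SID, host name, and port below are placeholders for your environment:

SID =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver.new_domain.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = SID.NEW_DOMAIN.COM))
  )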

Update Global Names:
Changing the Domain in a Global Database Name

Use the ALTER DATABASE statement to change the domain in a database global name. Note that after the database is created, changing the initialization parameter DB_DOMAIN has no effect on the global database name or on the resolution of database link names.

The following example shows the syntax for the renaming statement, where database is a database name and domain is the network domain:

SQL> ALTER DATABASE RENAME GLOBAL_NAME TO SID.DOMAIN.COM;


SQL> SELECT * FROM GLOBAL_NAME;

GLOBAL_NAME
----------------------------------------------------------------------------
SID.DOMAIN.COM


Updating DB_DOMAIN

SQL> show parameters db_domain

db_domain string OLD_DOMAIN.COM

SQL> alter system set db_domain='NEW_DOMAIN.COM' scope=spfile;

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.


Verify the DB_DOMAIN was updated.

SQL> show parameters db_domain
db_domain string new_domain.com

SERVICE_NAME - The service name only needs to be updated if you have set a fixed name in the init.ora file. If you have not set it, it will automatically default to SID + db_domain, and it should match your global name.

SQL> show parameters SERVICE_NAME
service_names string SID.domain.com

Renaming SERVICE_NAME
SQL> create pfile from spfile;

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

Edit the init.ora file and remove the service_names entry.

SQL> startup pfile='/app/oracle/product/10.2/dbs/initSID.ora'
SQL> show parameters SERVICE_NAME
SQL> create spfile from pfile='/app/oracle/product/10.2/dbs/initSID.ora';
SQL> shutdown immediate;

SQL> startup
ORACLE instance started.

SQL> create pfile from spfile;
SQL> show parameters SERVICE_NAME;

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
service_names string SID.DOMAIN.COM
SQL>

Database Links -

Drop and re-create all database links used to connect to any database with an updated domain.
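
For example, with a hypothetical link named MYLINK, drop the old link and re-create it under the new domain (the full CREATE syntax is in the Oracle Database Link post below):

SQL> DROP PUBLIC DATABASE LINK MYLINK.OLD_DOMAIN.COM;

CREATE PUBLIC DATABASE LINK MYLINK.NEW_DOMAIN.COM
CONNECT TO USERID
IDENTIFIED BY PASSWORD
USING '(description=(address=(protocol=TCP)
(host=dbserver.new_domain.com)(port=1521))
(connect_data=(sid=SID)))'
/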

Friday, March 20, 2009

Oracle Database Link

This is how I create a database link, or dblink, in Oracle. Using the following format, be sure to use the host name or IP address of the desired database server; this is the best way to ensure it will connect. Update the placeholder values (link name, user ID, password, host, and SID) to match your specific environment.

CREATE PUBLIC DATABASE LINK NAME.DOMAIN.COM
CONNECT TO USERID
IDENTIFIED BY PASSWORD
USING '(description=(address=(protocol=TCP)
(host=hostdb_server.com)(port=1521))
(connect_data=(sid=SID)))'
/

The domain.com should match your db_domain parameter and is required if this parameter is set.
SQL> show parameters db_domain
NAME TYPE VALUE
------------- --------- ------------
db_domain string domain.com
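
Once the link is created, a quick query across it verifies connectivity, for example:

SQL> SELECT sysdate FROM dual@NAME.DOMAIN.COM;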

Thursday, March 12, 2009

Disassociating Physical Standby Database from the Primary Database

I will be getting a new database server to use as a physical standby. My production database currently has an existing physical standby database associated with it. I will turn my current standby database into a test standby instance before configuring a new standby for the production instance.

These are the steps to disassociate a physical standby database from the primary database:

Log in to the production instance.

sqlplus
SQL> create pfile from spfile;
SQL> quit

# save the original standby configuration init file
cd $ORACLE_HOME/dbs; cp initSID.ora initSID.ora.date

sqlplus
SQL> alter system set log_archive_dest_state_2=DEFER scope=both;
SQL> alter system set log_archive_dest_2='LOCATION=NULL' scope=both;
SQL> alter system set log_archive_config='NODG_CONFIG' scope=both;
SQL> alter system set fal_client=NULL scope=both;
SQL> alter system set fal_server=NULL scope=both;
SQL> alter system set log_archive_dest_1='LOCATION=/u02/oradata/arch' scope=both;

SQL> alter system switch logfile;
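
To confirm destination 2 is deferred and no longer shipping logs, the standard v$archive_dest view can be checked; a quick sketch:

SQL> select dest_id, status, destination from v$archive_dest where dest_id <= 2;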

SQL> create pfile from spfile;

SQL> select switchover_status from v$database;

SWITCHOVER_STATUS
--------------------
SESSIONS ACTIVE

SQL> shutdown immediate
SQL> startup
ORACLE instance started.

SQL> select database_role from v$database;
DATABASE_ROLE
----------------
PRIMARY

SQL> select switchover_status from v$database;
SWITCHOVER_STATUS
--------------------
NOT ALLOWED


ORA-32017: failure in updating SPFILE

ORA-32017: failure in updating SPFILE
ORA-16179: incremental changes to "log_archive_dest_2" not allowed with SPFILE

Modifying log_archive_dest_n returns this error when the value's syntax is invalid. The parameter itself is dynamic and can be updated using the ALTER SYSTEM statement.

Here is an example of the error raised when trying to modify the parameter log_archive_dest_2 to NULL.

SQL> alter system set log_archive_dest_2=NULL scope=both;
alter system set log_archive_dest_2=NULL
*
ERROR at line 1:
ORA-32017: failure in updating SPFILE
ORA-16179: incremental changes to "log_archive_dest_2" not allowed with SPFILE

To update this parameter to NULL or to a valid location use the following syntax:
SQL> alter system set log_archive_dest_2='LOCATION=NULL' scope=both;
or
SQL> alter system set log_archive_dest_2='LOCATION=/u02/oradata/arch' scope=both;

Tuesday, February 17, 2009

Creating an NFS mount for use with RMAN

I needed more disk space on my database server for exports and RMAN backups (and possibly flashback space). My colleague has space available on what he calls a Network File System (NFS) share, served from network-attached storage (NAS). I have been given a few terabytes for use with my database, and I will mount this same NAS share on my production and development database servers for exports and RMAN backups.

Once the share is available:

1. Edit /etc/vfstab and add the mount (a sample entry is sketched below).

2. Create the mount point directory on the server.
# cd /
# mkdir oracle_data

3. Mount everything in vfstab that is not yet mounted.
# mount -a

4. Give the oracle user ownership of the mount point.
# chown oracle:oinstall oracle_data
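
A minimal sketch of the vfstab entry from step 1 (all one line; the server name and paths are placeholders, and the RMAN-specific mount options are covered in the next post):

bigshare:/export/oracle - /oracle_data nfs - yes rw,bg,hard,intr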

Monday, February 16, 2009

ORA-27054: NFS file system where the file is created or resides is not mounted with correct options

RMAN error

ORA-27054: NFS file system where the file is created or resides is not mounted with correct options
Additional information: 2

I am using an NFS share for RMAN backups. After mounting the file system and testing a backup using RMAN, I received the ORA-27054 error because the mount was not established with the correct options. The following is the correct syntax for mounting NFS for RMAN to utilize. I am mounting servers running two versions of Solaris, 5.8 and 5.10, to the same NFS share, so both variants are shown below.

Oracle 10gR2 on UNIX:

For Sun Solaris 5.8 SPARC 64 or Solaris 5.9 use (no spaces in the option list):
mount -o rw,bg,intr,hard,timeo=600,wsize=32768,rsize=32768

Actual entry (all one line) in the vfstab file (where bigshare is the server):
bigshare:/export/oracle - /oracle_backups nfs - yes rw,bg,intr,hard,timeo=600,wsize=32768,rsize=32768

For Solaris 5.10 use:
mount -F nfs -o hard,bg,proto=tcp,vers=3,suid,rsize=32768,wsize=32768,intr,noac

Actual entry (all one line) in the vfstab file (where bigshare is the server):
bigshare:/export/oracle - /oracle_backups nfs - yes hard,bg,proto=tcp,vers=3,suid,rsize=32768,wsize=32768,intr,noac
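
A quick way to verify RMAN can write to the share is a small test backup to the mount point (a sketch; the format string is just an example):

RMAN> backup current controlfile format '/oracle_backups/cf_%U.bkp';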

Tuesday, February 3, 2009

Indexing - Using functions in the WHERE Clause

Using a function on an indexed column in the WHERE clause will force the optimizer to bypass the desired index (does not apply to function-based indexes).
Common functions used in the WHERE clause are TRUNC, SUBSTR, TO_DATE, TO_CHAR, and INSTR.
The one I am guilty of is using TRUNC on a date column, as in (assuming there is an index on insert_date):

WHERE TRUNC(insert_date) = TRUNC(sysdate)

instead use something like:

WHERE insert_date >= TRUNC(sysdate) AND insert_date < TRUNC(sysdate) + 1

By taking the function TRUNC off of the indexed column, insert_date, the index on insert_date can be used.
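
As noted in the parenthetical above, if you truly need TRUNC in the predicate, a function-based index on the expression lets the optimizer use an index anyway (a sketch; the index name is made up):

SQL> CREATE INDEX barcodes_trunc_upd_idx ON barcodes (TRUNC(last_updated));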


Example:
SQL> set timing on

On our first run the data has not been cached and it takes 5 seconds.
SQL> select count(last_updated) "Total" from barcodes where trunc(last_updated) > trunc(sysdate-60);

Total
-------------------
2114
Elapsed: 00:00:05.10

Same SQL, but with the data now in cache, it takes 2.5 seconds.
SQL> select count(last_updated) "Total" from barcodes where trunc(last_updated) > trunc(sysdate-60);

Total
-------------------
2114
Elapsed: 00:00:02.53

Rewrite the query to use the index on last_updated and it takes 0.0 seconds.
SQL> select count(last_updated) "Total" from barcodes where last_updated > trunc(sysdate)-60 and last_updated < trunc(sysdate)+1;

Total
-------------------
2114
Elapsed: 00:00:00.00

You can't always assume a query will use an index just because it is available.



Friday, January 16, 2009

ORA-00354:, ORA-16014:

How to recover from:

ORA-00354: corrupt redo log block header
ORA-00312: online log 1 thread 1: '/u02/oradata/DEV/redo01.log'
ORA-16038: log 1 sequence# 2530 cannot be archived

Attempting to open the database also fails:

SQL> alter database mount;
SQL> alter database open resetlogs;
SQL> alter database open;

ERROR at line 1:
ORA-16014: log 1 sequence# 2530 not archived, no available destinations
ORA-00312: online log 1 thread 1: '/u02/oradata/DEV/redo01.log'

I have not tested this with the database open; I crashed this instance and could not open it due to the corrupted redo log. In the mount state, issue the following commands for each of your corrupted redo logs.
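
To see which members belong to each redo log group before clearing them, the standard v$log and v$logfile views can be joined; a quick sketch:

SQL> select l.group#, l.status, l.sequence#, f.member
from v$log l, v$logfile f
where l.group# = f.group#;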

SQL> alter database clear unarchived logfile '/u02/oradata/DEV/redo01.log';
SQL> alter database clear unarchived logfile '/u03/oradata/DEV/redo02.log';
SQL> alter database clear unarchived logfile '/u04/oradata/DEV/redo03.log';
SQL> alter database open;


I restarted to be sure everything was good.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.

Thursday, January 15, 2009

_disable_logging = TRUE


ORA-00354: corrupt redo log block header
ORA-00353: log corruption near block 39540 change 3193784747 time 01/13/2009 22:08:29
ORA-00312: online log 1 thread 1: '/u02/oradata/DEV/redo01.log'
ORA-16038: log 1 sequence# 2530 cannot be archived


Beware of the hidden parameter _disable_logging

My goal was to speed up an import of data by disabling logging without turning off the archiver. Of course, in hindsight I know that this was a bad idea and that simply putting the database in noarchivelog mode would have been much simpler. But it's always fun to learn new things.


SQL> shutdown immediate
SQL> startup mount
SQL> alter database noarchivelog;
SQL> alter database open;

Once I set the hidden parameter _disable_logging=TRUE, I kicked off the import and pretty quickly started receiving alert log errors notifying me of corrupt redo log block headers. My advice - do not use this parameter!
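
If you need to confirm the parameter's value or back it out, hidden parameters can be read as SYS from the x$ tables (a sketch of the commonly used query; note that hidden parameter names must be double-quoted in ALTER SYSTEM):

SQL> select a.ksppinm name, b.ksppstvl value
from x$ksppi a, x$ksppcv b
where a.indx = b.indx and a.ksppinm = '_disable_logging';

SQL> alter system set "_disable_logging"=FALSE scope=both;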

To recover, see the post above: How to recover from ORA-00354: corrupt redo log block header.

