Thursday, September 01, 2016

Static routing

Hi,
Trying to configure dNFS with multipath, I ran into the need to route traffic from specific (10G) devices. This article describes how to do this at the operating-system level (CentOS/Red Hat/Oracle Linux).

There are two different ways to configure persistent static routes.

The first approach requires adding entries for each device (NIC) to the files
/etc/sysconfig/network-scripts/route-eth[X]
For eth0, the file would be
/etc/sysconfig/network-scripts/route-eth0

Example:
echo "192.168.10.30 via 192.168.10.10" >> /etc/sysconfig/network-scripts/route-eth0
echo "192.168.10.32 via 192.168.10.12" >> /etc/sysconfig/network-scripts/route-eth1

To add the routes immediately, run the script /etc/sysconfig/network-scripts/ifup-routes, which takes the device name as input. The script also runs at boot time, so the static routes are recreated after every reboot.

Example:
/etc/sysconfig/network-scripts/ifup-routes eth0
/etc/sysconfig/network-scripts/ifup-routes eth1

The second approach adds routes with the "route" command. Executing "route" adds the route immediately, but by itself the change does not survive a reboot.

To make the routes persistent, add them to the /etc/rc.local file, which runs once at boot time. Add the following lines to /etc/rc.local.

Example:
route add -net 192.168.10.30 netmask 255.255.255.255 dev eth0
route add -net 192.168.10.32 netmask 255.255.255.255 dev eth1
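The same two host routes can also be generated in a loop. A minimal sketch, assuming a hypothetical staging directory so it runs without root; on a real host you would write straight into /etc/sysconfig/network-scripts and re-run ifup-routes for each device:

```shell
#!/bin/sh
# Sketch: generate the route-eth[X] files for several NICs in one loop.
# OUTDIR is a hypothetical staging directory; on a real host use
# /etc/sysconfig/network-scripts instead.
OUTDIR="${OUTDIR:-/tmp/route-staging}"
mkdir -p "$OUTDIR"

# device:destination:gateway triples taken from the examples above
for spec in eth0:192.168.10.30:192.168.10.10 \
            eth1:192.168.10.32:192.168.10.12
do
    dev=${spec%%:*}
    rest=${spec#*:}
    dest=${rest%%:*}
    gw=${rest#*:}
    echo "$dest via $gw" > "$OUTDIR/route-$dev"
done

cat "$OUTDIR/route-eth0"    # prints: 192.168.10.30 via 192.168.10.10
```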

Yossi

Thursday, June 16, 2016

Drop 'Zombie' diskgroup

View all mounted asm_diskgroups in the ASM instance
SYS@+ASM>select NAME,STATE from v$asm_diskgroup
2 /

NAME STATE
------------------------------ -----------
DATA2 MOUNTED
DATA3 MOUNTED
DATADG MOUNTED
View all defined asm_diskgroups in the current ASM instance
SYS@+ASM>show parameter disk

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
asm_diskgroups string BBXDG, DATADG, BBXDG3, BBXDG2,
BBXL7, BBXL7REDO, BBXL7ARCHIV
E
, DATA2, BBXL7512, BBXL74096,
BBXSTORAGE, L7BBX4096, BBXL7S
B, BBXL7RAWDG, TEST, BBX7ONLIN
ELOG, BBX7CONTROLFILE, BBXL7RA
WASM
, DATA3, BBXL7ASM512, BBX_
SP03, BBX_FDB

asm_diskstring string /dev/raw/raw*
Set asm_diskgroups to list only the mounted diskgroups
SYS@+ASM>alter system set asm_diskgroups='DATA2','DATA3','DATADG';

System altered.
Verify the change
SYS@+ASM>show parameter disk

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
asm_diskgroups string DATA2, DATA3, DATADG
asm_diskstring string /dev/raw/raw*
View all defined asm_diskgroups in the spfile
SYS@+ASM>show spparameter disk

SID NAME TYPE VALUE
-------- ----------------------------- ----------- ----------------------------
+ASM asm_diskgroups string BBX7CONTROLFILE
+ASM asm_diskgroups string BBX7ONLINELOG
+ASM asm_diskgroups string BBXDG
+ASM asm_diskgroups string BBXDG2
+ASM asm_diskgroups string BBXDG3
+ASM asm_diskgroups string BBXL7
+ASM asm_diskgroups string BBXL74096
+ASM asm_diskgroups string BBXL7512
+ASM asm_diskgroups string BBXL7ARCHIVE
+ASM asm_diskgroups string BBXL7ASM512
+ASM asm_diskgroups string BBXL7RAWASM
+ASM asm_diskgroups string BBXL7RAWDG
+ASM asm_diskgroups string BBXL7REDO
+ASM asm_diskgroups string BBXL7SB
+ASM asm_diskgroups string BBXSTORAGE
+ASM asm_diskgroups string BBX_FDB
+ASM asm_diskgroups string BBX_SP03
* asm_diskgroups string DATA2
+ASM asm_diskgroups string DATA2
+ASM asm_diskgroups string DATA3
* asm_diskgroups string DATA3
+ASM asm_diskgroups string DATADG
* asm_diskgroups string DATADG
+ASM asm_diskgroups string L7BBX4096
+ASM asm_diskgroups string TEST
* asm_diskstring string /dev/raw/raw*
Change the spfile entries for the specific +ASM instance to list only the mounted diskgroups
SYS@+ASM>alter system set asm_diskgroups='DATA2','DATA3','DATADG' scope=spfile sid='+ASM';

System altered.
Verify the change
SYS@+ASM>show spparameter disk

SID NAME TYPE VALUE
-------- ----------------------------- ----------- ----------------------------
+ASM asm_diskgroups string DATA2
* asm_diskgroups string DATA2
* asm_diskgroups string DATA3
+ASM asm_diskgroups string DATA3
+ASM asm_diskgroups string DATADG
* asm_diskgroups string DATADG
* asm_diskstring string /dev/raw/raw*
View all defined asm_diskgroups in the Grid Infrastructure
[12:05:12 oracle@e11lora7:~/scripts +ASM ] $ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.BBX7CONTROLFILE.dg
ONLINE OFFLINE e11lora7
ora.BBXDG3.dg
ONLINE OFFLINE e11lora7
ora.BBXL7.dg
ONLINE OFFLINE e11lora7
ora.BBXL7512.dg
ONLINE OFFLINE e11lora7
ora.BBXL7ARCHIVE.dg
ONLINE OFFLINE e11lora7
ora.BBXL7ASM512.dg
ONLINE OFFLINE e11lora7
ora.BBXL7REDO.dg
ONLINE OFFLINE e11lora7
ora.BBXL7SB.dg
ONLINE OFFLINE e11lora7
ora.BBXSTORAGE.dg
ONLINE OFFLINE e11lora7
ora.BBX_FDB.dg
ONLINE OFFLINE e11lora7
ora.BBX_SP03.dg
ONLINE OFFLINE e11lora7
ora.DATA2.dg
ONLINE ONLINE e11lora7
ora.DATA3.dg
ONLINE ONLINE e11lora7
ora.DATADG.dg
ONLINE ONLINE e11lora7
ora.L7BBX4096.dg
ONLINE OFFLINE e11lora7
ora.LISTENER.lsnr
ONLINE ONLINE e11lora7
ora.asm
ONLINE ONLINE e11lora7 Started
ora.ons
OFFLINE OFFLINE e11lora7
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.bdb.db
1 ONLINE OFFLINE Instance Shutdown
ora.cssd
1 ONLINE ONLINE e11lora7
ora.diskmon
1 OFFLINE OFFLINE
ora.evmd
1 ONLINE ONLINE e11lora7
ora.fdb.db
1 ONLINE OFFLINE
ora.hdb.db
1 ONLINE OFFLINE Instance Shutdown
ora.rpadb.db
1 ONLINE OFFLINE
Trying to remove the unnecessary diskgroup
[12:06:39 oracle@e11lora7:~/scripts +ASM ] $ srvctl remove diskgroup -g BBX_FDB
PRCA-1002 : Failed to remove CRS resource ora.BBX_FDB.dg for ASM Disk Group BBX_FDB
PRCR-1028 : Failed to remove resource ora.BBX_FDB.dg
PRCR-1072 : Failed to unregister resource ora.BBX_FDB.dg
CRS-0222: Resource 'ora.BBX_FDB.dg' has dependency error.
Checking which database is configured to depend on the diskgroup
[12:08:20 oracle@e11lora7:~/scripts +ASM ] $ srvctl config database -d hdb
Database unique name: hdb
Database name: hdb
Oracle home: /oracle/app/product/11.2.0/dbhome
Oracle user: oracle
Spfile: +DATA3/hdb/spfilehdb.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Database instance: hdb
Disk Groups: DATA3,BBX_FDB
Services:
Removing the unnecessary diskgroup from the database configuration (by listing only the diskgroups to keep)
[12:12:01 oracle@e11lora7:~/scripts +ASM ] $ srvctl modify database -d hdb -a "DATA3"
Verify the change
[12:12:07 oracle@e11lora7:~/scripts +ASM ] $ srvctl config database -d hdb
Database unique name: hdb
Database name: hdb
Oracle home: /oracle/app/product/11.2.0/dbhome
Oracle user: oracle
Spfile: +DATA3/hdb/spfilehdb.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Database instance: hdb
Disk Groups: DATA3
Services:
Trying again to remove the unnecessary diskgroup
[12:12:18 oracle@e11lora7:~/scripts +ASM ] $ srvctl remove diskgroup -g BBX_FDB
PRCA-1002 : Failed to remove CRS resource ora.BBX_FDB.dg for ASM Disk Group BBX_FDB
PRCR-1028 : Failed to remove resource ora.BBX_FDB.dg
PRCR-1072 : Failed to unregister resource ora.BBX_FDB.dg
CRS-0222: Resource 'ora.BBX_FDB.dg' has dependency error.
Trying again to remove it, this time with crsctl; maybe it will give us a clue about the new dependency
[12:12:43 oracle@e11lora7:~/scripts +ASM ] $ crsctl delete resource 'ora.BBX_FDB.dg'
CRS-2730: Resource 'ora.fdb.db' depends on resource 'ora.BBX_FDB.dg'
CRS-4000: Command Delete failed, or completed with errors.
Checking whether this database is also configured to use the diskgroup
[12:13:34 oracle@e11lora7:~/scripts +ASM ] $ srvctl config database -d fdb
Database unique name: fdb
Database name: fdb
Oracle home: /oracle/app/product/11.2.0/dbhome
Oracle user: oracle
Spfile:
Domain: axxana.local
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Database instance: fdb
Disk Groups: DATA2,BBX_SP03,BBX_FDB
Services:
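The "Disk Groups:" check above can be scripted. A minimal sketch, assuming the srvctl output has been saved to a file (the helper name is mine; the sample line is copied from the fdb output above):

```shell
#!/bin/sh
# Sketch: given saved `srvctl config database -d <db>` output, report
# whether it references a particular diskgroup.
uses_diskgroup() {            # $1 = saved config file, $2 = diskgroup name
    grep '^Disk Groups:' "$1" | tr ',' '\n' \
        | sed 's/^Disk Groups: *//' | grep -qx "$2"
}

cfg=$(mktemp)
cat > "$cfg" <<'EOF'
Database unique name: fdb
Disk Groups: DATA2,BBX_SP03,BBX_FDB
EOF

if uses_diskgroup "$cfg" BBX_FDB; then
    echo "fdb depends on BBX_FDB"
fi
```

A real run would pipe the live command instead, e.g. `srvctl config database -d fdb > cfg.txt` followed by `uses_diskgroup cfg.txt BBX_FDB`.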
Removing the unnecessary diskgroup from the database configuration (by listing only the diskgroups to keep)
[12:13:52 oracle@e11lora7:~/scripts +ASM ] $ srvctl modify database -d fdb -a "DATA2"
Trying to remove unnecessary diskgroup
[12:14:07 oracle@e11lora7:~/scripts +ASM ] $ srvctl remove diskgroup -g BBX_FDB
Verify the change
[12:14:21 oracle@e11lora7:~/scripts +ASM ] $ crsctl stat res -t
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.BBX7CONTROLFILE.dg
ONLINE OFFLINE e11lora7
ora.BBXDG3.dg
ONLINE OFFLINE e11lora7
ora.BBXL7.dg
ONLINE OFFLINE e11lora7
ora.BBXL7512.dg
ONLINE OFFLINE e11lora7
ora.BBXL7ARCHIVE.dg
ONLINE OFFLINE e11lora7
ora.BBXL7ASM512.dg
ONLINE OFFLINE e11lora7
ora.BBXL7REDO.dg
ONLINE OFFLINE e11lora7
ora.BBXL7SB.dg
ONLINE OFFLINE e11lora7
ora.BBXSTORAGE.dg
ONLINE OFFLINE e11lora7
ora.BBX_SP03.dg
ONLINE OFFLINE e11lora7
ora.DATA2.dg
ONLINE ONLINE e11lora7
ora.DATA3.dg
ONLINE ONLINE e11lora7
ora.DATADG.dg
ONLINE ONLINE e11lora7
ora.L7BBX4096.dg
ONLINE OFFLINE e11lora7
ora.LISTENER.lsnr
ONLINE ONLINE e11lora7
ora.asm
ONLINE ONLINE e11lora7 Started
ora.ons
OFFLINE OFFLINE e11lora7
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.bdb.db
1 ONLINE OFFLINE Instance Shutdown
ora.cssd
1 ONLINE ONLINE e11lora7
ora.diskmon
1 OFFLINE OFFLINE
ora.evmd
1 ONLINE ONLINE e11lora7
ora.fdb.db
1 ONLINE OFFLINE
ora.hdb.db
1 ONLINE OFFLINE Instance Shutdown
ora.rpadb.db
1 ONLINE OFFLINE
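To see at a glance which .dg resources are actually ONLINE in a saved `crsctl stat res -t` listing, a small awk sketch (the helper name is mine, and it assumes the two-line layout shown above: the resource name line followed by its TARGET/STATE line):

```shell
#!/bin/sh
# Sketch: from saved `crsctl stat res -t` output, list the diskgroup
# (.dg) resources whose TARGET and STATE are both ONLINE.
online_diskgroups() {         # $1 = saved crsctl output file
    awk '/^ora\..*\.dg$/ { dg = $1; next }
         dg != "" { if ($1 == "ONLINE" && $2 == "ONLINE") {
                        sub(/^ora\./, "", dg); sub(/\.dg$/, "", dg); print dg
                    }
                    dg = "" }' "$1"
}

out=$(mktemp)
cat > "$out" <<'EOF'
ora.BBX_SP03.dg
               ONLINE  OFFLINE      e11lora7
ora.DATA2.dg
               ONLINE  ONLINE       e11lora7
ora.DATA3.dg
               ONLINE  ONLINE       e11lora7
EOF

online_diskgroups "$out"      # prints DATA2 and DATA3, one per line
```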

Yehh.... :)

Tuesday, May 10, 2016

Extract Data Guard Commands

Hi,
I found a great piece of code for extracting Data Guard broker commands (reverse-engineering an existing configuration).
I adjusted it for Oracle 12.1 with Far Sync, and it works fine.

Just remember to run the following commands on the Primary & Far Sync instances first:
1. SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_n='' scope=both sid='*';
(where n is 2 or above)
2. DGMGRL> disable configuration;
3. DGMGRL> remove configuration;

and then run the script:
displayconfig.sql
SET SERVEROUTPUT ON
SET LINESIZE 300
SET FEEDBACK OFF

DECLARE
   rid                      INTEGER;
   indoc                    VARCHAR2 (4000);
   outdoc                   VARCHAR2 (4000);
   p                        INTEGER;
   z                        XMLTYPE;
   y                        CLOB;
   v_xml                    XMLTYPE;
   tout                     VARCHAR2 (4000);
   db_type                  VARCHAR2 (10);
   db_headers_commands      CLOB;
   db_commands              CLOB;
   db_commands_RedoRoutes   CLOB;
BEGIN
   indoc := '';
   y := NULL;
   rid := dbms_drs.do_control (indoc);

   outdoc := NULL;
   p := 1;

   -- fetch the first page of the broker response
   WHILE (outdoc IS NULL)
   LOOP
      outdoc := dbms_drs.get_response (rid, p);
      y := y || TO_CLOB (outdoc);
   END LOOP;

   -- fetch the remaining pages until NO_DATA_FOUND
   BEGIN
      WHILE (outdoc IS NOT NULL)
      LOOP
         p := p + 1;
         outdoc := dbms_drs.get_response (rid, p);
         y := y || TO_CLOB (outdoc);
      END LOOP;
   EXCEPTION
      WHEN NO_DATA_FOUND
      THEN
         NULL;
   END;

   z := XMLType (y);

   dbms_drs.delete_request (rid);

   FOR l IN (SELECT name, role, id, connectdb
               FROM XMLTABLE ('/DRC/SITE/DATABASE'
                              PASSING z
                              COLUMNS name      PATH '/DATABASE/@name'
                                    , role      PATH '/DATABASE/@role'
                                    , id        PATH '/DATABASE/@id'
                                    , connectdb PATH '/DATABASE/@connect'))
   LOOP
      CASE l.role
         WHEN 'PRIMARY'
         THEN
            outdoc := 'create configuration dr as primary database is ''' || l.name || ''' connect identifier is ' || l.connectdb || ';';
            db_type := 'database';
            db_headers_commands := db_headers_commands || CHR (10) || outdoc;
         WHEN 'PHYSICAL'
         THEN
            outdoc := 'add database ''' || l.name || ''' as connect identifier is ' || l.connectdb || ';';
            db_type := 'database';
            db_headers_commands := db_headers_commands || CHR (10) || outdoc;
         WHEN 'FAR_SYNC_INSTANCE'
         THEN
            outdoc := 'add far_sync ''' || l.name || ''' as connect identifier is ' || l.connectdb || ';';
            db_type := 'far_sync';
            db_headers_commands := db_headers_commands || CHR (10) || outdoc;
      END CASE;

      indoc := '';
      y := NULL;
      rid := dbms_drs.do_control (indoc);

      outdoc := NULL;
      p := 1;

      WHILE (outdoc IS NULL)
      LOOP
         outdoc := dbms_drs.get_response (rid, p);
         y := y || TO_CLOB (outdoc);
      END LOOP;

      BEGIN
         WHILE (outdoc IS NOT NULL)
         LOOP
            p := p + 1;
            outdoc := dbms_drs.get_response (rid, p);
            y := y || TO_CLOB (outdoc);
         END LOOP;
      EXCEPTION
         WHEN NO_DATA_FOUND
         THEN
            NULL;
      END;

      z := XMLType (y);

      SELECT XMLQUERY ('/RESOURCE/PROPERTY_LIST/VALUE' PASSING z RETURNING CONTENT) INTO v_xml FROM DUAL;

      FOR q IN (SELECT name, VALUE, property_type AS r
                  FROM XMLTABLE ('/VALUE'
                                 PASSING v_xml
                                 COLUMNS name          PATH '/VALUE/@name'
                                       , VALUE         PATH '/VALUE/@value'
                                       , property_type PATH '/VALUE/@property_type')
                 WHERE VALUE IS NOT NULL
                   AND name IN ('AlternateLocation'
                              , 'ApplyLagThreshold'
                              , 'ApplyParallel'
                              , 'ArchiveLagTarget'
                              , 'Binding'
                              , 'DbFileNameConvert'
                              , 'DelayMins'
                              , 'DGConnectIdentifier'
                              , 'FastStartFailoverTarget'
                              , 'InstanceName'
                              , 'LogArchiveFormat'
                              , 'LogArchiveMaxProcesses'
                              , 'LogArchiveMinSucceedDest'
                              , 'LogArchiveTrace'
                              , 'LogFileNameConvert'
                              , 'LogShipping'
                              , 'LogXptMode'
                              , 'MaxConnections'
                              , 'MaxFailure'
                              , 'NetTimeout'
                              , 'ObserverConnectIdentifier'
                              , 'PreferredApplyInstance'
                              , 'RedoCompression'
                              , 'RedoRoutes'
                              , 'ReopenSecs'
                              , 'StandbyArchiveLocation'
                              , 'StandbyFileManagement'
                              , 'StaticConnectIdentifier'
                              , 'TransportDisconnectedThreshold'
                              , 'TransportLagThreshold'
                              , 'ApplyInstanceTimeout'))
      LOOP
         -- these properties do not apply to a far sync instance
         IF db_type = 'far_sync'
            AND q.name IN ('DelayMins', 'ApplyInstanceTimeout', 'ApplyLagThreshold', 'ApplyParallel', 'StandbyFileManagement', 'ArchiveLagTarget', 'DbFileNameConvert', 'StaticConnectIdentifier')
         THEN
            NULL;
         ELSE
            IF q.name = 'RedoRoutes'
            THEN
               -- RedoRoutes must be set after "enable configuration"
               db_commands_RedoRoutes := 'edit ' || db_type || ' ' || l.name || ' set property ' || q.name || ' = ''' || q.VALUE || ''';' || CHR (10) || db_commands_RedoRoutes;
            ELSE
               db_commands := db_commands || CHR (10) || 'edit ' || db_type || ' ' || l.name || ' set property ' || q.name || ' = ''' || q.VALUE || ''';';
            END IF;
         END IF;
      END LOOP;

      dbms_drs.delete_request (rid);
   END LOOP;

   DBMS_OUTPUT.put_line (db_headers_commands);
   DBMS_OUTPUT.put_line (db_commands);
   DBMS_OUTPUT.put_line ('enable configuration;');
   DBMS_OUTPUT.put_line (db_commands_RedoRoutes);
   DBMS_OUTPUT.put_line ('show configuration;');
END;
/
My original configuration
DGMGRL> show configuration

Configuration - dr

Protection Mode: MaxPerformance
Members:
pdb1 - Primary database
fdb1 - Far sync instance
sdb1 - Physical standby database
The script output:
create configuration dr as primary database is 'pdb1' connect identifier is pdb1;
add database 'sdb1' as connect identifier is sdb1;
add far_sync 'fdb1' as connect identifier is fdb1;

edit database pdb1 set property DGConnectIdentifier = 'pdb1';
edit database pdb1 set property LogXptMode = 'SYNC';
edit database pdb1 set property DelayMins = '0';
edit database pdb1 set property Binding = 'optional';
edit database pdb1 set property MaxFailure = '1';
edit database pdb1 set property MaxConnections = '1';
edit database pdb1 set property ReopenSecs = '15';
edit database pdb1 set property NetTimeout = '30';
edit database pdb1 set property RedoCompression = 'DISABLE';
edit database pdb1 set property LogShipping = 'ON';
edit database pdb1 set property ApplyInstanceTimeout = '0';
edit database pdb1 set property ApplyLagThreshold = '0';
edit database pdb1 set property TransportLagThreshold = '0';
edit database pdb1 set property TransportDisconnectedThreshold = '0';
edit database pdb1 set property ApplyParallel = 'AUTO';
edit database pdb1 set property StandbyFileManagement = 'AUTO';
edit database pdb1 set property ArchiveLagTarget = '0';
edit database pdb1 set property LogArchiveMaxProcesses = '4';
edit database pdb1 set property LogArchiveMinSucceedDest = '1';
edit database pdb1 set property DbFileNameConvert = 'pdb1, sdb1';
edit database pdb1 set property LogFileNameConvert = 'pdb1, sdb1';
edit database pdb1 set property FastStartFailoverTarget = 'sdb1';
edit database pdb1 set property InstanceName = 'pdb1';
edit database pdb1 set property StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=e20lora10)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=pdb1_DGMGRL)(INSTANCE_NAME=pdb1)(SERVER=DEDICATED)))';
edit database pdb1 set property StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST';
edit database pdb1 set property LogArchiveTrace = '0';
edit database pdb1 set property LogArchiveFormat = '%t_%s_%r.arc';
edit database sdb1 set property DGConnectIdentifier = 'sdb1';
edit database sdb1 set property LogXptMode = 'SYNC';
edit database sdb1 set property DelayMins = '0';
edit database sdb1 set property Binding = 'optional';
edit database sdb1 set property MaxFailure = '1';
edit database sdb1 set property MaxConnections = '1';
edit database sdb1 set property ReopenSecs = '15';
edit database sdb1 set property NetTimeout = '30';
edit database sdb1 set property RedoCompression = 'DISABLE';
edit database sdb1 set property LogShipping = 'ON';
edit database sdb1 set property ApplyInstanceTimeout = '0';
edit database sdb1 set property ApplyLagThreshold = '0';
edit database sdb1 set property TransportLagThreshold = '0';
edit database sdb1 set property TransportDisconnectedThreshold = '0';
edit database sdb1 set property ApplyParallel = 'AUTO';
edit database sdb1 set property StandbyFileManagement = 'AUTO';
edit database sdb1 set property ArchiveLagTarget = '0';
edit database sdb1 set property LogArchiveMaxProcesses = '4';
edit database sdb1 set property LogArchiveMinSucceedDest = '1';
edit database sdb1 set property DbFileNameConvert = 'pdb1, sdb1';
edit database sdb1 set property LogFileNameConvert = 'PDB1, sdb1, pdb1, sdb1';
edit database sdb1 set property FastStartFailoverTarget = 'pdb1';
edit database sdb1 set property InstanceName = 'sdb1';
edit database sdb1 set property StaticConnectIdentifier = '(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=e20rora10)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=sdb1_DGMGRL)(INSTANCE_NAME=sdb1)(SERVER=DEDICATED)))';
edit database sdb1 set property StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST';
edit database sdb1 set property LogArchiveTrace = '0';
edit database sdb1 set property LogArchiveFormat = '%t_%s_%r.arc';
edit far_sync fdb1 set property DGConnectIdentifier = 'fdb1';
edit far_sync fdb1 set property LogXptMode = 'ASYNC';
edit far_sync fdb1 set property Binding = 'optional';
edit far_sync fdb1 set property MaxFailure = '1';
edit far_sync fdb1 set property MaxConnections = '1';
edit far_sync fdb1 set property ReopenSecs = '15';
edit far_sync fdb1 set property NetTimeout = '30';
edit far_sync fdb1 set property RedoCompression = 'DISABLE';
edit far_sync fdb1 set property LogShipping = 'ON';
edit far_sync fdb1 set property TransportLagThreshold = '0';
edit far_sync fdb1 set property TransportDisconnectedThreshold = '0';
edit far_sync fdb1 set property LogArchiveMaxProcesses = '4';
edit far_sync fdb1 set property LogArchiveMinSucceedDest = '1';
edit far_sync fdb1 set property LogFileNameConvert = 'PDB1, fdb1, pdb1, fdb1';
edit far_sync fdb1 set property InstanceName = 'fdb1';
edit far_sync fdb1 set property StandbyArchiveLocation = 'USE_DB_RECOVERY_FILE_DEST';
edit far_sync fdb1 set property LogArchiveTrace = '0';
edit far_sync fdb1 set property LogArchiveFormat = '%t_%s_%r.arc';
enable configuration;
edit far_sync fdb1 set property RedoRoutes = '(pdb1 : sdb1 ASYNC)(sdb1 : pdb1 ASYNC)';
edit database sdb1 set property RedoRoutes = '(LOCAL : fdb1 SYNC ALT =( pdb1 SYNC FALLBACK))';
edit database pdb1 set property RedoRoutes = '(LOCAL : fdb1 SYNC ALT =( sdb1 SYNC FALLBACK))';

show configuration;

Enjoy

Monday, November 02, 2015

Testing FAILOVER when primary database is not available

If the primary database can still be MOUNTed, flush its remaining redo to the standby:
ALTER SYSTEM FLUSH REDO TO target_db_name;
At the standby, if Flashback Database is not already enabled:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=4320; # 3 days
ALTER DATABASE FLASHBACK ON;
Create a restore point BEFORE_FAILOVER. It will be used later to restore the database to the state it was in before the FAILOVER scenario.
set lines 300
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
CREATE RESTORE POINT before_failover GUARANTEE FLASHBACK DATABASE;
SELECT scn, guarantee_flashback_database, name FROM v$restore_point;

SCN GUARANTEE_FLASHBACK_DATABASE NAME
------- ---------------------------- ----------------
2619073 YES BEFORE_FAILOVER
Testing FAILOVER on physical standby database STBY.
SELECT name, db_unique_name, log_mode, protection_mode, database_role
FROM v$database;

NAME DB_UNIQUE_NAME LOG_MODE PROTECTION_MODE DATABASE_ROLE
---- -------------- ---------- ------------------- -------------
PRIM STBY ARCHIVELOG MAXIMUM PERFORMANCE PHYSICAL STANDBY

ALTER DATABASE FAILOVER TO STBY;

SELECT name, db_unique_name, log_mode, protection_mode, database_role
FROM v$database;

NAME DB_UNIQUE_NAME LOG_MODE PROTECTION_MODE DATABASE_ROLE
---- -------------- ---------- ------------------- -------------
PRIM STBY ARCHIVELOG MAXIMUM PERFORMANCE PRIMARY
STBY is now the primary database. The following query shows how to check when the database was converted from a physical standby into a primary database.
SELECT to_char(standby_became_primary_scn) scn
FROM V$DATABASE;

SCN
----------------------------------------
7284214
Because flashback logs are available, it is possible to turn the primary database STBY back into a physical standby. Flashback can be executed using one of two methods: an SCN number or a restore point.
Method with SCN number:
FLASHBACK DATABASE TO SCN standby_became_primary_scn;
Method with restore point:
FLASHBACK DATABASE TO RESTORE POINT restore_point_name;
Flash back the primary database STBY and convert it back into a physical standby
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO RESTORE POINT before_failover;
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
Start the old primary database STBY so it is back in the same state as before the FAILOVER scenario
SELECT name, db_unique_name, log_mode, protection_mode, database_role
FROM v$database;

NAME DB_UNIQUE_NAME LOG_MODE PROTECTION_MODE DATABASE_ROLE
---- -------------- ---------- ------------------- ----------------
PRIM STBY ARCHIVELOG MAXIMUM PERFORMANCE PHYSICAL STANDBY

STARTUP;

SELECT name, db_unique_name, log_mode, protection_mode, database_role
FROM v$database;

NAME DB_UNIQUE_NAME LOG_MODE PROTECTION_MODE DATABASE_ROLE
---- -------------- ---------- ------------------- -------------
STBY STBY ARCHIVELOG MAXIMUM PERFORMANCE PRIMARY

Friday, October 23, 2015

Rolling Forward a Physical Standby Database Using the RECOVER FROM SERVICE

Hi
Since Oracle 12c we can roll a standby forward by accessing only the standby database/site; there is no need to copy or transfer any files.
Short and easy; just copy and paste.
Enjoy :)

Standby Database Name: STBY
Primary Database Name: PRIM

On Standby
sqlplus / as sysdba
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SHUTDOWN IMMEDIATE;
STARTUP NOMOUNT;
ALTER DATABASE MOUNT STANDBY DATABASE;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

rman target /
RECOVER DATABASE FROM SERVICE PRIM USING COMPRESSED BACKUPSET NOREDO SECTION SIZE 120M;

sqlplus / as sysdba
SHUTDOWN IMMEDIATE;
STARTUP NOMOUNT;

rman target /
RESTORE STANDBY CONTROLFILE FROM SERVICE PRIM;

sqlplus / as sysdba
ALTER DATABASE MOUNT STANDBY DATABASE;

rman target /
CATALOG START WITH '+DATA/STBY/DATAFILE/';
SWITCH DATABASE TO COPY;

sqlplus / as sysdba
set pages 0 verify off feed off term off echo off
spool /tmp/clear.sql
select distinct 'ALTER DATABASE CLEAR LOGFILE GROUP '||GROUP#||';' as cmd from v$logfile;
spool off
@/tmp/clear.sql
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

For Oracle 11g there is a longer version, see Steps to perform for Rolling Forward a Physical Standby Database using RMAN Incremental Backup. (Doc ID 836986.1)

Yossi

Monday, December 22, 2014

Problems in: FETCH FIRST n PERCENT ROWS ONLY

Let's count how many rows there are in the emp table

SQL> SELECT COUNT (*) FROM emp;

COUNT(*)
----------
100

Cool... now let me see how much 5 percent of emp is

SQL> SELECT COUNT (*) FROM emp
2 FETCH FIRST 5 PERCENT ROWS ONLY;

COUNT(*)
----------
100

Mmmmm.... it can't also be 100... something is weird.
Actually it makes sense: the aggregate runs first and returns a single row, and FETCH FIRST then limits that one-row result, so the count is unaffected.
Let's try another way, limiting the rows before counting them

SQL> SELECT COUNT (*)
2 FROM ( SELECT empno
3 FROM emp
4 FETCH FIRST 5 PERCENT ROWS ONLY);

COUNT(*)
----------
5

Cool, now it is working :)
Now let's try in PL/SQL

SQL> DECLARE
2 l_Percent_to_fetch PLS_INTEGER;
3 l_cnt PLS_INTEGER;
4 BEGIN
5 SELECT COUNT (*)
6 INTO l_cnt
7 FROM ( SELECT empno
8 FROM emp
9 FETCH FIRST l_Percent_to_fetch PERCENT ROWS ONLY);
10 END;
11 /
DECLARE
*
ERROR at line 1:
ORA-03113: end-of-file on communication channel
Process ID: 4480
Session ID: 196 Serial number: 37163

What!!!! Can't I use variables???
Let's try it hardcoded:

SQL> DECLARE
2 l_cnt PLS_INTEGER;
3 BEGIN
4 SELECT COUNT (*)
5 INTO l_cnt
6 FROM ( SELECT empno
7 FROM emp
8 FETCH FIRST 5 PERCENT ROWS ONLY);
9 END;
10 /

PL/SQL procedure successfully completed.

Cool.. well, not so cool - I was forced to work around a BUG :( :(

Friday, July 26, 2013

Export in Pl/Sql via DBMS_DATAPUMP

CREATE OR REPLACE PACKAGE maintenance
AS
   PROCEDURE export_myschema;
   PROCEDURE stop_job (job_name VARCHAR2, schema_name VARCHAR2);
   PROCEDURE stop_all_jobs;
   PROCEDURE import_schema (file_name VARCHAR2);
END maintenance;
/
CREATE OR REPLACE PACKAGE BODY maintenance
AS
   PROCEDURE export_myschema
   AS
      handle        NUMBER;
      file_name     VARCHAR2 (200);
      log_name      VARCHAR2 (200);
      JOBNAME       VARCHAR2 (200);
      file_prefix   VARCHAR2 (30);
      dir_name      VARCHAR2 (4000);
      l_fexists BOOLEAN;
      l_file_length PLS_INTEGER;
      l_block_size BINARY_INTEGER;
   BEGIN
      dir_name := 'EXPORT_DIR';
      file_prefix := 'myschema_prod_data_';
      file_name :=
         file_prefix || MOD (TO_NUMBER (TO_CHAR (SYSDATE, 'D')), 2) -- toggle 0 or 1
         || '.dmp';
      log_name :=
            file_prefix
         || MOD (TO_NUMBER (TO_CHAR (SYSDATE, 'D')), 2)
         || '_'
         || TO_CHAR (SYSDATE, 'YYMMDDHH24MISS')
         || '.log';
      JOBNAME := 'EXPORT_JOB_' || TO_CHAR (SYSDATE, 'YYMMDDHH24MISS');
      UTL_FILE.fgetattr (dir_name,
                         file_name,
                         l_fexists,
                         l_file_length,
                         l_block_size);
      IF l_fexists
      THEN
       DBMS_OUTPUT.put_line ( 'file to backup:'|| file_name || ' already exists - deleting it...');
       UTL_FILE.fremove (dir_name, file_name);
      ELSE
       DBMS_OUTPUT.put_line ( 'file to backup:'|| file_name || ' does not exist - continuing...');
      END IF;
      handle :=
         DBMS_DATAPUMP.open (operation   => 'EXPORT',
                             job_mode    => 'SCHEMA',
                             job_name    => JOBNAME,
                             version     => 'LATEST');
      DBMS_DATAPUMP.ADD_FILE (
         handle      => handle,
         filename    => log_name,
         directory   => dir_name,
         filetype    => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE
      );
      DBMS_DATAPUMP.ADD_FILE (
         handle      => handle,
         filename    => file_name,
         directory   => dir_name,
         filetype    => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE
      );
      DBMS_DATAPUMP.METADATA_FILTER (handle   => handle,
                                     name     => 'SCHEMA_EXPR',
                                     VALUE    => 'IN (''MYSCHEMA'')');
      DBMS_DATAPUMP.METADATA_FILTER (
         handle        => handle,
         name          => 'NAME_EXPR',
         VALUE         => 'NOT IN (''EMP'',''DEPT'')',
         object_type   => 'TABLE'
      );
      DBMS_DATAPUMP.START_JOB (handle);
      DBMS_DATAPUMP.DETACH (handle);
   END export_myschema; 
   PROCEDURE stop_job (job_name VARCHAR2, schema_name VARCHAR2)
   AS
      handle   NUMBER;
   BEGIN
      handle := DBMS_DATAPUMP.ATTACH (job_name, schema_name);
      DBMS_DATAPUMP.STOP_JOB (handle, 1, 0);
   END stop_job;
   PROCEDURE stop_all_jobs
   IS
      handle   NUMBER;
   BEGIN
      FOR x IN (SELECT   * FROM DBA_datapump_jobs)
      LOOP
         BEGIN
            BEGIN
               EXECUTE IMMEDIATE   'drop table '
                                || x.owner_name
                                || '.'
                                || x.job_name;
               DBMS_OUTPUT.put_line (
                  'Table Job: ' || x.job_name || ' - Dropped'
               );
            EXCEPTION
               WHEN OTHERS
               THEN
                  DBMS_OUTPUT.put_line (
                     'Table Job: ' || x.job_name || ' - does not exist'
                  );
            END;
            handle := DBMS_DATAPUMP.ATTACH (x.job_name, x.owner_name);
            DBMS_DATAPUMP.STOP_JOB (handle, 1, 0);
            DBMS_OUTPUT.put_line ('Job: ' || x.job_name || ' - DONE');
         EXCEPTION
            WHEN OTHERS
            THEN
               DBMS_OUTPUT.put_line (
                  'Job: ' || x.job_name || ' couldn''t be removed'
               );
         END;
      END LOOP;
   END stop_all_jobs; 
   PROCEDURE import_schema (file_name VARCHAR2)
   AS
      handle      NUMBER;
      log_name    VARCHAR2 (200);
      JOBNAME     VARCHAR2 (200);
      dir_name    VARCHAR2 (4000);
   BEGIN
      dir_name := 'IMPORT_DIR';
  
      log_name := 'import_' || TO_CHAR (SYSDATE, 'YYMMDDHH24MISS') || '.log';
      JOBNAME := 'IMPORT_JOB_' || TO_CHAR (SYSDATE, 'YYMMDDHH24MISS');
      handle :=
         DBMS_DATAPUMP.open (operation   => 'IMPORT',
                             job_mode    => 'SCHEMA',
                             job_name    => JOBNAME,
                             version     => 'LATEST');
      DBMS_DATAPUMP.ADD_FILE (
         handle      => handle,
         filename    => log_name,
         directory   => dir_name,
         filetype    => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE
      );
      DBMS_DATAPUMP.ADD_FILE (
         handle      => handle,
         filename    => file_name,
         directory   => dir_name,
         filetype    => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE
      );
      DBMS_DATAPUMP.METADATA_FILTER (handle   => handle,
                                     name     => 'SCHEMA_EXPR',
                                     VALUE    => 'IN (''MYSCHEMA'')');
      DBMS_DATAPUMP.SET_PARAMETER (handle   => handle,
                                   name     => 'TABLE_EXISTS_ACTION',
                                   VALUE    => 'TRUNCATE');
      DBMS_DATAPUMP.SET_PARALLEL (handle => handle, degree => 4);
      DBMS_DATAPUMP.START_JOB (handle);
      DBMS_DATAPUMP.DETACH (handle);
   END import_schema; 
END maintenance;
/