I. Tools used: Oracle Solaris, SUNW.HAStoragePlus.
II. Configuration method and steps:
1. Register the SUNW.HAStoragePlus (HASP) resource type.
phys-schost-1# /usr/cluster/bin/clrt register SUNW.HAStoragePlus
2. Create a failover resource group.
phys-schost-1# /usr/cluster/bin/clrg create sol-kz-fz1-rg
3. Create an HAStoragePlus resource to monitor the disks used as the boot and suspend devices of the kernel zone.
root@phys-schost-1:~# clrs create -t SUNW.HAStoragePlus -g sol-kz-fz1-rg \
-p GlobalDevicePaths=dsk/d7,dsk/d8 sol-kz-fz1-hasp-rs
root@phys-schost-1:~# /usr/cluster/bin/clrg online -emM -n phys-schost-1 sol-kz-fz1-rg
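Before moving on to the zone configuration in step 4, it may be worth confirming that the DID devices used above are visible from both cluster nodes. A minimal sketch, reusing the d7 and d8 device names from the example (verify the exact cldevice syntax on your Oracle Solaris Cluster release):
phys-schost-1# /usr/cluster/bin/cldevice status d7 d8
phys-schost-1# /usr/cluster/bin/cldevice list -v d7 d8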
4. Create and configure the zone on phys-schost-1. You must make sure that the boot and suspend devices reside on shared disks. To configure this for a two-node cluster, run the following command on phys-schost-1 and then copy the zone configuration to phys-schost-2.
root@phys-schost-1:~# zonecfg -z sol-kz-fz1 'create -b; set brand=solaris-kz;
add capped-memory; set physical=2G; end;
add device; set storage=dev:did/dsk/d7; set bootpri=1; end;
add suspend; set storage=dev:did/dsk/d8; end;
add anet; set lower-link=auto; end;
set autoboot=false;
add attr; set name=osc-ha-zone; set type=boolean; set value=true; end;'
5. Verify that the zone is configured.
phys-schost-1# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
- sol-kz-fz1 configured - solaris-kz excl
6. Install the zone with zoneadm, then boot it.
root@phys-schost-1:~# zoneadm -z sol-kz-fz1 install
Progress being logged to /var/log/zones/zoneadm.20140829T212403Z.sol-kz-fz1.install
pkg cache: Using /var/pkg/publisher.
Install Log: /system/volatile/install.4811/install_log
AI Manifest: /tmp/zoneadm4203.ZLaaYi/devel-ai-manifest.xml
SC Profile: /usr/share/auto_install/sc_profiles/enable_sci.xml
Installation: Starting ...
Creating IPS image
Installing packages from:
solaris
origin:
ha-cluster
origin:
The following licenses have been accepted and not displayed.
Please review the licenses for the following packages post-install:
consolidation/osnet/osnet-incorporation
Package licenses may be viewed using the command:
pkg info --license pkg_fmri
DOWNLOAD PKGS FILES XFER (MB) SPEED
Completed 482/482 64261/64261 544.1/544.1 1.9M/s
PHASE ITEMS
Installing new actions 87569/87569
Updating package state database Done
Updating package cache 0/0
Updating image state Done
Creating fast lookup database Done
Installation: Succeeded
Done: Installation completed in 609.014 seconds.
7. Verify that the zone was installed successfully.
phys-schost-1# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
- sol-kz-fz1 installed - solaris-kz excl
8. In another window, log in to the zone console and boot the zone. Follow the prompts on the interactive system configuration screens to configure the zone.
phys-schost-1# zlogin -C sol-kz-fz1
phys-schost-1# zoneadm -z sol-kz-fz1 boot
9. Shut down the zone and switch the resource group to the other node in the resource group's node list.
phys-schost-1# zoneadm -z sol-kz-fz1 shutdown
phys-schost-1# zoneadm -z sol-kz-fz1 detach -F
phys-schost-1# /usr/cluster/bin/clrg switch -n phys-schost-2 sol-kz-fz1-rg
phys-schost-1# zoneadm list -cv
ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
- sol-kz-fz1 configured - solaris-kz excl
10. Copy the zone configuration to the second node, and use the configuration file to create the kernel zone there.
root@phys-schost-1:~# zonecfg -z sol-kz-fz1 export -f \
/var/cluster/run/sol-kz-fz1.cfg
root@phys-schost-1:~# scp /var/cluster/run/sol-kz-fz1.cfg \
phys-schost-2:/var/cluster/run/
root@phys-schost-1:~# rm /var/cluster/run/sol-kz-fz1.cfg
root@phys-schost-2:~# zonecfg -z sol-kz-fz1 -f \
/var/cluster/run/sol-kz-fz1.cfg
root@phys-schost-2:~# rm /var/cluster/run/sol-kz-fz1.cfg
11. Attach the zone and verify that it can boot on the second node. Log in from another session to make sure the zone boots normally.
root@phys-schost-2:~# zoneadm -z sol-kz-fz1 attach -x force-takeover
root@phys-schost-2:~# zoneadm -z sol-kz-fz1 boot
root@phys-schost-2:~# zlogin -C sol-kz-fz1
12. Shut down and detach the zone.
root@phys-schost-2:~# zoneadm -z sol-kz-fz1 shutdown
root@phys-schost-2:~# zoneadm -z sol-kz-fz1 detach -F
13. Install the failover zones agent if it is not already installed.
root@phys-schost-1# pkg install ha-cluster/data-service/ha-zones
root@phys-schost-2# pkg install ha-cluster/data-service/ha-zones
14. To create the resource from either node, edit a copy of the sczbt_config file and set the parameters as shown below.
root@phys-schost-2:~# clrt register SUNW.gds
root@phys-schost-2:~# cd /opt/SUNWsczone/sczbt/util
root@phys-schost-2:~# cp -p sczbt_config sczbt_config.sol-kz-fz1-rs
root@phys-schost-2:~# vi sczbt_config.sol-kz-fz1-rs
RS=sol-kz-fz1-rs
RG=sol-kz-fz1-rg
PARAMETERDIR=
SC_NETWORK=false
SC_LH=
FAILOVER=true
HAS_RS=sol-kz-fz1-hasp-rs
Zonename="sol-kz-fz1"
Zonebrand="solaris-kz"
Zonebootopt=""
Milestone="svc:/milestone/multi-user-server"
LXrunlevel="3"
SLrunlevel="3"
Mounts=""
Migrationtype="warm"
15. Register the zone boot resource using the configuration file edited in the previous step, then enable the resource.
root@phys-schost-2:~# ./sczbt_register -f ./sczbt_config.sol-kz-fz1-rs
sourcing ./sczbt_config.sol-kz-fz1-rs
Registration of resource sol-kz-fz1-rs succeeded.
root@phys-schost-2:~# /usr/cluster/bin/clrs enable sol-kz-fz1-rs
16. Check the status of the resource group and its resources.
root@phys-schost-2:~# /usr/cluster/bin/clrs status -g sol-kz-fz1-rg
=== Cluster Resources ===
Resource Name Node Name State Status Message
------------------- ------------- ----- -------------------
sol-kz-fz1-rs phys-schost-1 Online Online - Service is online.
phys-schost-2 Offline Offline
sol-kz-fz1-hasp-rs phys-schost-1 Online Online
phys-schost-2 Offline Offline
root@phys-schost-2:~#
17. Log in with the zlogin -C sol-kz-fz1 command to verify that the zone booted successfully, then switch the resource group to the other node to test failover.
root@phys-schost-2:~# /usr/cluster/bin/clrg switch -n phys-schost-1 sol-kz-fz1-rg
root@phys-schost-2:~# /usr/cluster/bin/clrs status -g sol-kz-fz1-rg
=== Cluster Resources ===
Resource Name Node Name State Status Message
------------------- ---------- ----- -------------------
sol-kz-fz1-rs phys-schost-1 Online Online
phys-schost-2 Offline Offline
sol-kz-fz1-hasp-rs phys-schost-1 Online Online
phys-schost-2 Offline Offline
root@phys-schost-2:~#
III. Notes: In a failover zone configuration, the zone's zonepath must reside on a highly available file system.
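For a failover file system, this is typically done by putting the file system under HAStoragePlus control. A minimal sketch, not part of the procedure above (the /zones/sol-kz-fz1 mount point and the resource name are hypothetical, and the file system must already be defined on every node, for example in /etc/vfstab with mount-at-boot set to no):
phys-schost-1# /usr/cluster/bin/clrs create -g sol-kz-fz1-rg -t SUNW.HAStoragePlus \
-p FilesystemMountPoints=/zones/sol-kz-fz1 sol-kz-fz1-fs-hasp-rs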
Oracle GoldenGate currently supports an active-passive bidirectional configuration. In other words, OGG can replicate every data change from the active primary database to the standby; while continuously synchronizing data, the standby stays ready to fail over for both planned and unplanned outages, which is what we call a Live Standby. Here we focus on the steps for configuring an Oracle GoldenGate Live Standby system and on the concrete switchover procedure.
SQL> conn clinic/clinic
Connected.
SQL> drop table tv;
Table dropped.
SQL> create table tv (t1 int primary key,t2 int,t3 varchar2(30));
Table created.
SQL> drop sequence seqt1;
Sequence dropped.
SQL> create sequence seqt1 start with 1 increment by 1;
Sequence created.
SQL> declare
rnd number(9,2);
begin
for i in 1..100000 loop
insert into tv values(seqt1.nextval,i*dbms_random.value,'MACLEAN IS TESTING');
commit;
end loop;
end;
/
/* The script above creates some test data under an application account in the primary database.
Next we can use various tools to initialize this data into the standby; if you need the data
migration to happen online during this step, refer to "Goldengate实现在线数据迁移".
*/
/* Note that in a Live Standby environment we usually also need to replicate sequences, so that the business can keep running after switching over to the standby */
/* After initializing the standby data, make sure it is fully consistent with the primary */
primary:
SQL> select sum(t2) from tv;
SUM(T2)
----------
2498624495
SQL> select last_number from user_sequences;
LAST_NUMBER
-----------
100001
standby:
SQL> select sum(t2) from tv;
SUM(T2)
----------
2498624495
SQL> select last_number from user_sequences;
LAST_NUMBER
-----------
100001
With the preparation above complete, we can move on to the actual GoldenGate Live Standby configuration, which consists of the following steps:
configure the extract, replicat, and data pump from the primary to the standby; this step is not much different from ordinary one-way replication
configure the extract, replicat, and data pump from the standby back to the primary
start the extract, replicat, and data pump from the primary to the standby
Next we walk through the whole configuration process:
1. Create the extract, data pump, and replicat from the primary to the standby
GGSCI (rh2.oracle.com) 10> dblogin userid maclean
Password:
Successfully logged into database.
GGSCI (rh2.oracle.com) 11> add trandata clinic.*
Logging of supplemental redo data enabled for table CLINIC.TV.
GGSCI (rh2.oracle.com) 4> add extract extstd1,tranlog,begin now
EXTRACT added.
GGSCI (rh2.oracle.com) 5> add exttrail /d01/ext/cl,megabytes 100,extract extstd1
EXTTRAIL added.
GGSCI (rh2.oracle.com) 7> view params extstd1
-- Identify the Extract group:
EXTRACT extstd1
-- Specify database login information as needed for the database:
userid maclean, password maclean
-- Specify the local trail that this Extract writes to:
EXTTRAIL /d01/ext/cl
-- Specify sequences to be captured:
SEQUENCE clinic.seqt1;
-- Specify tables to be captured:
TABLE clinic.*;
-- Exclude specific tables from capture if needed:
-- TABLEEXCLUDE
GGSCI (rh2.oracle.com) 17> add extract pumpstd1,exttrailsource /d01/ext/cl,begin now
EXTRACT added.
GGSCI (rh2.oracle.com) 98> add rmttrail /d01/rmt/cl,megabytes 100,extract pumpstd1
RMTTRAIL added.
GGSCI (rh2.oracle.com) 129> view params pumpstd1
-- Identify the data pump group:
EXTRACT pumpstd1
-- Specify database login information as needed for the database:
userid maclean, password maclean
RMTHOST rh3.oracle.com, MGRPORT 7809
-- Specify the remote trail on the standby system:
RMTTRAIL /d01/rmt/cl
-- Pass data through without mapping, filtering, conversion:
PASSTHRU
sequence clinic.seqt1;
Table clinic.*;
Configure the primary-to-standby replicat on the standby:
GGSCI (rh3.oracle.com) 4> add replicat repstd1,exttrail /d01/rmt/cl,begin now
REPLICAT added.
GGSCI (rh3.oracle.com) 49> view params repstd1
-- Identify the Replicat group:
REPLICAT repstd1
-- State that source and target definitions are identical:
ASSUMETARGETDEFS
-- Specify database login information as needed for the database:
userid maclean, password maclean
-- Specify tables for delivery:
MAP clinic.*, TARGET clinic.*;
-- Exclude specific tables from delivery if needed:
-- MAPEXCLUDE
2. Create the extract, data pump, and replicat from the standby back to the primary
GGSCI (rh3.oracle.com) 51> dblogin userid maclean
Password:
Successfully logged into database.
GGSCI (rh3.oracle.com) 52> add trandata clinic.*
Logging of supplemental redo data enabled for table CLINIC.TV.
/* Do not forget to enable supplemental logging for the relevant tables on the standby side as well */
GGSCI (rh3.oracle.com) 53> add extract extstd2,tranlog,begin now
EXTRACT added.
GGSCI (rh3.oracle.com) 54> add exttrail /d01/ext/cl,megabytes 100,extract extstd2
EXTTRAIL added.
GGSCI (rh3.oracle.com) 58> view params extstd2
-- Identify the Extract group:
EXTRACT extstd2
-- Specify database login information as needed for the database:
userid maclean, password maclean
-- Specify the local trail that this Extract writes to:
EXTTRAIL /d01/ext/cl
-- Specify sequences to be captured:
SEQUENCE clinic.seqt1;
-- Specify tables to be captured:
TABLE clinic.*;
-- Exclude specific tables from capture if needed:
-- TABLEEXCLUDE
GGSCI (rh3.oracle.com) 59> add extract pumpstd2,exttrailsource /d01/ext/cl,begin now
EXTRACT added.
GGSCI (rh3.oracle.com) 60> add rmttrail /d01/rmt/cl,megabytes 100,extract pumpstd2
RMTTRAIL added.
GGSCI (rh3.oracle.com) 63> view params pumpstd2
-- Identify the data pump group:
EXTRACT pumpstd2
-- Specify database login information as needed for the database:
userid maclean, password maclean
RMTHOST rh2.oracle.com, MGRPORT 7809
-- Specify the remote trail on the standby system:
RMTTRAIL /d01/rmt/cl
-- Pass data through without mapping, filtering, conversion:
PASSTHRU
sequence clinic.seqt1;
Table clinic.*;
Configure the standby-to-primary replicat on the primary:
GGSCI (rh2.oracle.com) 136> add replicat repstd2,exttrail /d01/rmt/cl,begin now,checkpointtable maclean.ck
REPLICAT added.
GGSCI (rh2.oracle.com) 138> view params repstd2
-- Identify the Replicat group:
REPLICAT repstd2
-- State that source and target definitions are identical:
ASSUMETARGETDEFS
-- Specify database login information as needed for the database:
userid maclean, password maclean
-- Specify tables for delivery:
MAP clinic.*, TARGET clinic.*;
-- Exclude specific tables from delivery if needed:
-- MAPEXCLUDE
3. With the OGG configuration above in place, start the primary-to-standby extract, data pump, and replicat:
GGSCI (rh2.oracle.com) 141> start extstd1
Sending START request to MANAGER ...
EXTRACT EXTSTD1 starting
GGSCI (rh2.oracle.com) 142> start pumpstd1
Sending START request to MANAGER ...
EXTRACT PUMPSTD1 starting
GGSCI (rh3.oracle.com) 70> start repstd1
Sending START request to MANAGER ...
REPLICAT REPSTD1 starting
/* If you did the configuration while the application was offline, you can bring the application online at this point */
Next we rehearse a planned switchover between the primary and the standby:
1. First stop all applications on the primary, just as you would for a Data Guard switchover; the switchover can only be clean when there are no active transactions. A quick check is sketched below.
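A minimal sketch of such a check on the primary (run as a privileged user; the exact criteria depend on your application):
SQL> select count(*) from v$transaction;
SQL> select count(*) from v$session where type='USER' and status='ACTIVE';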
2. On the primary, use the LAG command to check the extract's latency. If it returns a message such as "At EOF, no more records to process", all transactions have been captured.
GGSCI (rh2.oracle.com) 144> lag extstd1
Sending GETLAG request to EXTRACT EXTSTD1 ...
Last record lag: 0 seconds.
At EOF, no more records to process.
Once at EOF, stop the extract:
GGSCI (rh2.oracle.com) 146> stop extstd1
Sending STOP request to EXTRACT EXTSTD1 ...
Request processed.
3. Similarly, run the LAG command against the data pump. If it returns "At EOF, no more records to process", all captured data has been sent to the standby.
GGSCI (rh2.oracle.com) 147> lag pumpstd1
Sending GETLAG request to EXTRACT PUMPSTD1 ...
Last record lag: 3 seconds.
At EOF, no more records to process.
Once at EOF, stop the data pump:
GGSCI (rh2.oracle.com) 148> stop pumpstd1
Sending STOP request to EXTRACT PUMPSTD1 ...
Request processed.
4. Check how the replicat on the standby is doing. If it returns "At EOF, no more records to process.", all records have been applied.
GGSCI (rh3.oracle.com) 71> lag repstd1
Sending GETLAG request to REPLICAT REPSTD1 ...
Last record lag: 5 seconds.
At EOF, no more records to process.
Once at EOF, stop the replicat:
GGSCI (rh3.oracle.com) 72> stop repstd1
Sending STOP request to REPLICAT REPSTD1 ...
Request processed.
5. Next, on the standby, grant the application users the necessary insert, update, and delete privileges, and enable triggers and cascade delete constraints; the corresponding operations on the primary are to revoke the application privileges and to disable the triggers and cascade delete constraints. The point of this is to guarantee that, at any time, the database playing the standby role never accepts any manual or application-driven data changes other than those applied by OGG, so that the primary and standby stay consistent. The trigger part can be generated as shown below.
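A sketch of generating the trigger statements from the dictionary, assuming the application schema is the CLINIC schema used in the test above (run the generated statements on the new primary, and the matching DISABLE versions on whichever database takes the standby role):
SQL> select 'alter trigger '||owner||'.'||trigger_name||' enable;'
  from dba_triggers where owner = 'CLINIC';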
6. Alter the start time of the extract on the former standby to now, so that it does not capture the earlier redo logs:
GGSCI (rh3.oracle.com) 75> alter extract extstd2, begin now
EXTRACT altered.
GGSCI (rh3.oracle.com) 76> start extstd2
Sending START request to MANAGER ...
EXTRACT EXTSTD2 starting
If the standby-to-primary data pump and replicat were not started earlier, they can be started now:
GGSCI (rh3.oracle.com) 78> start pumpstd2
Sending START request to MANAGER ...
EXTRACT PUMPSTD2 starting
GGSCI (rh2.oracle.com) 161> start repstd2
Sending START request to MANAGER ...
REPLICAT REPSTD2 starting
7. At this point we can officially start the application on the former standby, which is now the primary.
Next we try switching back to the original primary:
1. The preliminary steps are similar to the previous switchover: first stop all applications on the current primary (the former standby), then use the LAG command to confirm the progress of the extract and replicat, and stop them once they are at EOF. Complete the maintenance work on the original primary, including granting privileges, enabling triggers, and so on.
2. Alter the start time of the extract on the original primary to now, so that it does not process the earlier redo logs:
GGSCI (rh2.oracle.com) 165> alter extract extstd1,begin now
EXTRACT altered.
3. At this point we can start the application on the original primary, which is now the primary again.
4. Start the originally configured primary-to-standby extract, data pump, and replicat:
GGSCI (rh2.oracle.com) 166> start extstd1
Sending START request to MANAGER ...
EXTRACT EXTSTD1 starting
GGSCI (rh2.oracle.com) 171> start pumpstd1
Sending START request to MANAGER ...
EXTRACT PUMPSTD1 starting
GGSCI (rh3.oracle.com) 86> start repstd1
Sending START request to MANAGER ...
REPLICAT REPSTD1 starting
This completes the planned switchover between the primary and the standby in an OGG Live Standby configuration. That's great!
On Windows, modify the registry: search for ORACLE_HOME and change its value, then verify the environment variable with the set command at the command prompt.
On Linux, change the path in .profile or .bash_profile and log in again; you can verify the environment variable with the set command at the command line. A sketch of the typical profile lines follows.
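For example, the relevant lines in ~/.bash_profile usually look something like the following (a sketch only; the installation path and SID are assumed values, not taken from this article):
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=orcl
export PATH=$ORACLE_HOME/bin:$PATH
After editing, log out and back in (or run: . ~/.bash_profile), then verify with echo $ORACLE_HOME or set | grep ORACLE_HOME.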
What you are describing is a primary/standby switchover.
1. Check the status of the primary database:
select switchover_status from v$database;
Collect the temporary tablespace information on the primary, because after the standby is activated its temp files may be missing and have to be created manually:
col file_name format a40
select file_name,tablespace_name,bytes/1024/1024 from dba_temp_files;
FILE_NAME TABLESPACE_NAME BYTES/1024/1024
---------------------------------------- -------------------- ---------------
/data/oradata/alihr/temp01.dbf TEMP 2048
2. Switch the primary to a standby:
alter database commit to switchover to physical standby;
Or:
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
shutdown immediate;
startup nomount;
alter database mount standby database;
3. Verify that the standby to be switched over has received the switchover notification:
SELECT SWITCHOVER_STATUS FROM V$DATABASE;
The status used to be SESSIONS ACTIVE; it should now change to TO PRIMARY.
4. Switch the physical standby to the primary role, and check that the online redo logs have been created:
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
shutdown immediate;
startup;
5. After the standby has been switched to primary, check whether temp files need to be added to its temporary tablespaces.
First check whether any temp files are missing:
col file_name format a60
select file_name,tablespace_name from dba_temp_files;
Compare the result with the temp files recorded earlier on the original primary; if any are missing, add them with a command such as:
alter tablespace temp add tempfile '/data/oradata/alihr/temp02.dbf' size 2048M reuse;
Normally, if the db_file_name_convert parameter is set correctly, 11g creates the temp files automatically; an example setting is sketched below.
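A hedged example of setting it on the standby (the directory pair is hypothetical; the parameter is static, so it requires scope=spfile and a restart to take effect). The first string is the primary's datafile path, the second is the standby's:
SQL> alter system set db_file_name_convert='/data/oradata/alihr/','/data/oradata/alihrdg/' scope=spfile;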
6. On the new standby host:
alter system set log_archive_dest_state_2='defer';
alter database recover managed standby database disconnect from session;
7. Check the FAL parameters on both the primary and the standby.
The fal_server service name is defined in tnsnames.ora on the standby host and points to the primary; fal_client is defined in tnsnames.ora on the primary host and points to the standby.
The FAL parameters only take effect on a standby, so on the standby fal_server points to the primary and fal_client points to the standby itself.
The FAL parameters on the primary do not take effect there, but setting them now saves us from having to modify them when the primary later becomes a standby: let its fal_server point to the current standby and its fal_client point to itself. A sketch follows.
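A minimal sketch of what this means in practice, assuming TNS aliases named alihr_pri (pointing to the primary) and alihr_stb (pointing to the standby) exist on both hosts; the alias names are hypothetical:
-- on the current standby:
SQL> alter system set fal_server='alihr_pri' scope=both;
SQL> alter system set fal_client='alihr_stb' scope=both;
-- on the current primary, in preparation for its future standby role:
SQL> alter system set fal_server='alihr_stb' scope=both;
SQL> alter system set fal_client='alihr_pri' scope=both;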
8. On the new primary:
alter system set log_archive_dest_state_2='enable';
alter system archive log current;
Then go to the standby and check that the logs are being shipped over correctly; a query for this is sketched below.
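A common sketch for this check on the standby (the sequence numbers will differ in your environment):
-- highest log sequence received by the standby
SQL> select max(sequence#) from v$archived_log;
-- highest log sequence already applied
SQL> select max(sequence#) from v$archived_log where applied='YES';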
9. Check and adjust the database backup scripts and archive-log deletion scripts in the crontab on both the primary and the standby hosts.
================= Role transitions under Oracle 11g maximum protection and maximum availability modes =================
If the primary went down unexpectedly, fail the standby over to the primary role directly:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
shutdown immediate;
connect / as sysdba;
startup mount;
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
shutdown immediate;
connect / as sysdba;
startup mount;
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
alter database open;
Note: in the steps above the database is manually set back to MAXIMIZE AVAILABILITY; otherwise, after it comes up it will be in maximum performance mode.
Check the database role:
select database_role from v$database;
After the original primary is started again, if the disks are not damaged and its data is still intact, it is enough to convert it back into a standby:
On the original primary:
startup mount;
recover automatic database;
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
shutdown immediate;
Note: be careful not to open the original primary, otherwise its SCN will change and it can no longer be converted into a standby database.
When converting the original primary into a standby, the following error may sometimes be reported:
SQL> alter database recover managed standby database finish;
alter database recover managed standby database finish
*
ERROR at line 1:
ORA-00283: recovery session canceled due to errors
ORA-19909: datafile 1 belongs to an orphan incarnation
ORA-01110: data file 1: '/opt/oracle/oradata/oratest/system01.dbf'
At this point, go into RMAN:
list incarnation of database;
reset database to incarnation 1;
recover database;
reset database to incarnation 2;
=========================================================
Failover for a physical standby
1. Check whether the standby uses standby redo logs; a query for this is sketched below.
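A sketch of that check, using the same v$standby_log view that appears in the notes further below:
SQL> select group#, thread#, bytes, status from v$standby_log;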
2. If standby redo logs exist, run the following commands:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
2.1 If there are no standby redo logs, skip the commands above and run:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH SKIP STANDBY LOGFILE;
3. Switch the standby to the primary role:
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
3.1 If step 3 above fails, then run:
ALTER DATABASE ACTIVATE STANDBY DATABASE;
4. Restart the database:
shutdown immediate
startup
Switchover procedure:
Switchover between the primary and a physical standby
1. Check the status of the primary database:
SELECT SWITCHOVER_STATUS FROM V$DATABASE;
2. Switch the primary to a standby:
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY;
3. On the original primary:
shutdown immediate;
startup nomount
alter database mount standby database;
4. Verify that the standby to be switched over has received the switchover notification:
SELECT SWITCHOVER_STATUS FROM V$DATABASE;
Add online redo logs if necessary.
5. Switch the physical standby to the primary role:
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY;
shutdown immediate;
startup;
6. On the new standby host (that is, the old primary):
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
7. On the new primary:
alter system archive log current;
======== activate standby database ========
After you run:
alter database activate standby database;
the original primary can only be converted back into a standby via flashback, which requires that flashback database was already enabled; this can be checked as sketched below.
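Whether flashback was enabled can be verified beforehand (a sketch):
SQL> select flashback_on from v$database;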
On the new primary:
select to_char(standby_became_primary_scn) from v$database;
On the old primary:
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO SCN standby_became_primary_scn;
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
select max(sequence#) from v$log_history where RESETLOGS_TIME=(select max(RESETLOGS_TIME) from v$log_history);
Resume managed recovery:
recover managed standby database disconnect using current logfile;
================= LGWR and archive destination settings =================
alter system set log_archive_dest_3 = 'location=/disk3/arch/bopscha reopen=2 MAX_FAILURE=3';
alter system set log_archive_dest_state_3 = alternate;
alter system set log_archive_dest_1 = 'location=/disk2/arch/bopscha alternate=log_archive_dest_3 reopen=60 MAX_FAILURE=5' ;
*.log_archive_dest_2='SERVICE=DTMRT LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DTMRT';
alter database add standby logfile group 4 ('/oracle/u02/ORA10GDG/STANDBYRD01.LOG') size 200M;
select group#,thread#,sequence#,archived,status from v$standby_log;
alter database set standby database to maximize {availability | performance | protection};
select protection_mode from v$database;
For ORA-19527, set log_file_name_convert, for example:
LOG_FILE_NAME_CONVERT='/u02/oracle/oradata/oract/','/u02/oracle/oradata/oract/'
alter system set log_archive_dest_2='SERVICE=bopsteststb lgwr sync affirm';
select frequency, duration from v$redo_dest_resp_histogram where dest_id=2 and frequency>1;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
alter database convert to snapshot standby;
alter database convert to physical standby;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
alter database commit to switchover to physical standby;
alter database activate standby database finish apply;
alter database recover managed standby database finish;
Steps to convert the old primary into a standby:
On the new primary:
select to_char(standby_became_primary_scn) from v$database;
On the old primary:
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
FLASHBACK DATABASE TO SCN standby_became_primary_scn;
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH FORCE;