A quick deinstall script for 11.2.0.3 RAC on Linux

On Oracle 10.1, 10.2 and 11.1, Oracle officially documented how to clean up a RAC environment by hand (for cases where the environment is broken, or a RAC installation failed and has to be cleaned up before reinstalling; those releases shipped deinstall scripts too, but they never removed everything cleanly, so back then this kind of cleanup was mostly done manually).

Starting with 11.2, Oracle no longer recommends removing a RAC environment by hand; instead it provides reconfiguration scripts and a dedicated deinstall package. Personally, I still prefer manual removal (still based on Oracle's own documentation).

I previously wrote one for the AIX platform: a quick deinstall script for 11.2 RAC on AIX.

Today I needed one for Linux, so I wrote it and gave it a real test; it works well. Test environment:
OEL 6.5 + Oracle 11.2.0.3 RAC

Clean up the RAC environment by hand and restore the machine to a bare system, ready for a fresh install:

rm -rf /etc/oracle/
rm -f /etc/init.d/init.cssd
rm -f /etc/init.d/init.crs
rm -f /etc/init.d/init.crsd
rm -f /etc/init.d/init.evmd
rm -f /etc/rc2.d/K96init.crs
rm -f /etc/rc2.d/S96init.crs
rm -f /etc/rc3.d/K96init.crs
rm -f /etc/rc3.d/S96init.crs
rm -f /etc/rc5.d/K96init.crs
rm -f /etc/rc5.d/S96init.crs
rm -Rf /etc/oracle/scls_scr
rm -f /etc/inittab.crs
rm -f /etc/ohasd

rm -f /etc/oraInst.loc
rm -f /etc/oratab
rm -rf /tmp/.oracle
rm -rf /tmp/ora*
rm -rf /var/tmp/.oracle
rm -rf /tmp/CVU*
rm -rf /tmp/Ora*
rm -rf /home/grid/.oracle

rm -rf /u01/*

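# keep init.ohasd as a backup instead of deleting it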
mv /etc/init.d/init.ohasd /etc/init.d/init.ohasd.bak

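# verify that no clusterware daemons are still running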
ps -ef | grep crs
ps -ef | grep evm
ps -ef | grep css

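# zero the header of the OCR/voting-disk LUN (device name is site-specific)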
dd if=/dev/zero of=/dev/q9ocrlun01 bs=1M count=256
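
For repeated lab use, the same steps can be wrapped into a single script. A minimal sketch (the script name, the argument convention, and the confirmation prompt are my own additions; the commands match the list above):

#!/bin/bash
# rac_cleanup.sh -- DESTRUCTIVE; run as root on every node, lab use only.
# The OCR/voting LUN device is taken as an argument because it is site-specific.
OCR_LUN=${1:?usage: $0 /dev/<ocr-lun>}

read -p "This wipes the 11.2 RAC install under /u01 and zeroes ${OCR_LUN}. Continue? [y/N] " ans
[ "$ans" = "y" ] || exit 1

rm -rf /etc/oracle
rm -f  /etc/init.d/init.cssd /etc/init.d/init.crs /etc/init.d/init.crsd /etc/init.d/init.evmd
rm -f  /etc/rc[235].d/[KS]96init.crs
rm -f  /etc/inittab.crs /etc/ohasd /etc/oraInst.loc /etc/oratab
rm -rf /tmp/.oracle /tmp/ora* /tmp/Ora* /tmp/CVU* /var/tmp/.oracle /home/grid/.oracle
rm -rf /u01/*
mv /etc/init.d/init.ohasd /etc/init.d/init.ohasd.bak 2>/dev/null

# confirm no clusterware daemons survived
ps -ef | egrep 'crs|evm|css' | grep -v grep

# zero the header of the OCR/voting-disk LUN
dd if=/dev/zero of="$OCR_LUN" bs=1M count=256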


11.2 RAC: CRS cannot start after the directory permissions (u01) were changed - 2: rebuilding CRS with root.sh

So below I try method 2, a slightly more rigorous approach than method 1: rerun root.sh on node 1 and see whether that repairs its permission problem.
First, remove the CRS configuration with rootcrs.pl -deconfig:

[root@lunardb1 ohasd]# $GRID_HOME/crs/install/rootcrs.pl -deconfig
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Network exists: 1/10.8.8.0/255.255.252.0/eth4, type static
VIP exists: /lunardb1-vip/10.8.8.31/10.8.8.0/255.255.252.0/eth4, hosting node lunardb1
VIP exists: /lunardb2-vip/10.8.8.33/10.8.8.0/255.255.252.0/eth4, hosting node lunardb2
GSD exists
ONS exists: Local port 6100, remote port 6200, EM port 2016
PRCR-1065 : Failed to stop resource ora.lunardb1.vip
CRS-2529: Unable to act on 'ora.lunardb1.vip' because that would require stopping or relocating 'ora.LISTENER.lsnr', but the force option was not specified
PRCR-1014 : Failed to stop resource ora.net1.network
PRCR-1065 : Failed to stop resource ora.net1.network
CRS-2529: Unable to act on 'ora.net1.network' because that would require stopping or relocating 'ora.lunardb1.vip', but the force option was not specified

PRKO-2380 : VIP lunardb1 is still running on node: lunardb1
CRS-2673: Attempting to stop 'ora.registry.acfs' on 'lunardb1'
CRS-2677: Stop of 'ora.registry.acfs' on 'lunardb1' succeeded
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'lunardb1'
CRS-2673: Attempting to stop 'ora.crsd' on 'lunardb1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'lunardb1'
CRS-2673: Attempting to stop 'ora.LISTENER_DG.lsnr' on 'lunardb1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'lunardb1'
CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'lunardb1'
CRS-2673: Attempting to stop 'ora.ARCH.dg' on 'lunardb1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'lunardb1'
CRS-2673: Attempting to stop 'ora.DATA1.dg' on 'lunardb1'
CRS-2673: Attempting to stop 'ora.REDODG.dg' on 'lunardb1'
CRS-2677: Stop of 'ora.ARCH.dg' on 'lunardb1' succeeded
CRS-2677: Stop of 'ora.LISTENER_DG.lsnr' on 'lunardb1' succeeded
CRS-2673: Attempting to stop 'ora.lunardb1-dg-vip.vip' on 'lunardb1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'lunardb1' succeeded
CRS-2673: Attempting to stop 'ora.lunardb1.vip' on 'lunardb1'
CRS-2677: Stop of 'ora.DATA.dg' on 'lunardb1' succeeded
CRS-2677: Stop of 'ora.DATA1.dg' on 'lunardb1' succeeded
CRS-2677: Stop of 'ora.REDODG.dg' on 'lunardb1' succeeded
CRS-2677: Stop of 'ora.lunardb1-dg-vip.vip' on 'lunardb1' succeeded
CRS-2672: Attempting to start 'ora.lunardb1-dg-vip.vip' on 'lunardb2'
CRS-2677: Stop of 'ora.lunardb1.vip' on 'lunardb1' succeeded
CRS-2672: Attempting to start 'ora.lunardb1.vip' on 'lunardb2'
CRS-2676: Start of 'ora.lunardb1-dg-vip.vip' on 'lunardb2' succeeded
CRS-2676: Start of 'ora.lunardb1.vip' on 'lunardb2' succeeded
CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'lunardb1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'lunardb1'
CRS-2677: Stop of 'ora.asm' on 'lunardb1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'lunardb1'
CRS-2673: Attempting to stop 'ora.net2.network' on 'lunardb1'
CRS-2677: Stop of 'ora.net1.network' on 'lunardb1' succeeded
CRS-2677: Stop of 'ora.net2.network' on 'lunardb1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'lunardb1' has completed
CRS-2677: Stop of 'ora.crsd' on 'lunardb1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'lunardb1'
CRS-2673: Attempting to stop 'ora.evmd' on 'lunardb1'
CRS-2673: Attempting to stop 'ora.asm' on 'lunardb1'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'lunardb1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'lunardb1'
CRS-2677: Stop of 'ora.evmd' on 'lunardb1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'lunardb1' succeeded
CRS-2677: Stop of 'ora.asm' on 'lunardb1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'lunardb1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'lunardb1' succeeded
CRS-2677: Stop of 'ora.drivers.acfs' on 'lunardb1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'lunardb1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'lunardb1'
CRS-2677: Stop of 'ora.cssd' on 'lunardb1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'lunardb1'
CRS-2677: Stop of 'ora.gipcd' on 'lunardb1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'lunardb1'
CRS-2677: Stop of 'ora.gpnpd' on 'lunardb1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'lunardb1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
You have new mail in /var/spool/mail/root
[root@lunardb1 ohasd]# 
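
The PRCR-1065/CRS-2529 messages at the top appear because dependent resources (VIP, listener, network) were not asked to stop; the deconfig still completed anyway. If it ever aborts on such dependencies, the documented variant adds the force flag:

# run as root; also stops/relocates dependent resources
$GRID_HOME/crs/install/rootcrs.pl -deconfig -force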

Reconfigure CRS by rerunning root.sh:

[root@lunardb1 ohasd]# $GRID_HOME/root.sh
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
User ignored Prerequisites during installation
OLR initialization - successful
Adding Clusterware entries to inittab
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node lunardb2, number 2, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
PRKO-2190 : VIP exists for node lunardb1, VIP name lunardb1-vip
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
You have new mail in /var/spool/mail/root
[root@lunardb1 ohasd]# 

After the configuration finishes, you can see that the database on node 1 cannot start normally:

[root@lunardb1 ohasd]# crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.DATA.dg
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.DATA1.dg
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.LISTENER_DG.lsnr
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.OCR_VOTE.dg
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.REDODG.dg
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.asm
               ONLINE  ONLINE       lunardb1                   Started             
               ONLINE  ONLINE       lunardb2                   Started             
ora.gsd
               OFFLINE OFFLINE      lunardb1                                       
               OFFLINE OFFLINE      lunardb2                                       
ora.net1.network
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.net2.network
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.ons
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.registry.acfs
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       lunardb2                                       
ora.cvu
      1        ONLINE  ONLINE       lunardb2                                       
ora.oc4j
      1        ONLINE  ONLINE       lunardb2                                       
ora.lunardb.db
      1        ONLINE  OFFLINE                               Instance Shutdown   
      2        ONLINE  ONLINE       lunardb2                   Open,Readonly       
ora.lunardb1-dg-vip.vip
      1        ONLINE  ONLINE       lunardb1                                       
ora.lunardb1.vip
      1        ONLINE  ONLINE       lunardb1                                       
ora.lunardb2-dg-vip.vip
      1        ONLINE  ONLINE       lunardb2                                       
ora.lunardb2.vip
      1        ONLINE  ONLINE       lunardb2                                       
ora.scan1.vip
      1        ONLINE  ONLINE       lunardb2                                       
You have new mail in /var/spool/mail/root
[root@lunardb1 ohasd]# 

The cause is obvious, and similar to what was described in the post on manually fixing the u01 directory permissions:

[root@lunardb1 ohasd]# su - oracle
[oracle@lunardb1 ~]$ ss

SQL*Plus: Release 11.2.0.3.0 Production on Sat Oct 4 20:23:05 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

ERROR:
ORA-12547: TNS:lost contact


Enter user-name: 

Fix the permissions of the oracle binary under $GRID_HOME/bin:

[oracle@lunardb1 ~]$
[root@lunardb1 ohasd]# cd $GRID_HOME
[root@lunardb1 grid]# cd bin
[root@lunardb1 bin]# ll oracle
-rwxr-x--x 1 grid oinstall 204113496 Jun  7  2013 oracle
[root@lunardb1 bin]# chmod 6751 oracle
[root@lunardb1 bin]# ll oracle
-rwsr-s--x 1 grid oinstall 204113496 Jun  7  2013 oracle
[root@lunardb1 bin]# 
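
A quick sanity check after the change, on both the grid home and the database home (a sketch; it assumes both environment variables are set):

for f in $GRID_HOME/bin/oracle $ORACLE_HOME/bin/oracle; do
  ls -l "$f"    # expect -rwsr-s--x, i.e. mode 6751
done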

Try to start the database again:

[oracle@lunardb1 ~]$ ss

SQL*Plus: Release 11.2.0.3.0 Production on Sat Oct 4 20:26:55 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

20:26:55 @>startup
ORACLE instance started.

Total System Global Area 1.6034E+11 bytes
Fixed Size                  2236968 bytes
Variable Size            3.0602E+10 bytes
Database Buffers         1.2939E+11 bytes
Redo Buffers              352468992 bytes
Database mounted.
Database opened.
20:27:40 @>

Now look back at which key directories root.sh changed to root ownership:

[root@lunardb1 grid]# ll |grep root
drwxr-xr-x  2 root oinstall 12288 Oct  4 20:15 bin
drwxr-x---  4 root oinstall  4096 Jun  7  2013 crf
drwxr-xr-x 17 root oinstall  4096 Jun  7  2013 crs
drwxr-xr-x  3 root oinstall  4096 Jun  7  2013 ctss
drwxr-x---  3 root oinstall  4096 Jun  7  2013 gns
drwxr-xr-x  3 root oinstall 12288 Jun  7  2013 lib
drwxr-xr-x  3 root oinstall  4096 Jun  7  2013 ologgerd
drwxr-xr-x  3 root oinstall  4096 Jun  7  2013 osysmond
-rwxr-x---  1 grid oinstall   467 Jun  7  2013 root.sh
-rwxr-xr-x  1 grid oinstall   480 Jun  7  2013 rootupgrade.sh
[root@lunardb1 grid]# 

These directories belong to the base services of 11.2 RAC. Starting with 11.2, the default crsctl output in GI no longer lists these lower-stack resources; they are only visible with the -init option:

[root@lunardb1 grid]# crsctl status res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       lunardb1                   Started             
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       lunardb1                                       
ora.crf
      1        ONLINE  ONLINE       lunardb1                                       
ora.crsd
      1        ONLINE  ONLINE       lunardb1                                       
ora.cssd
      1        ONLINE  ONLINE       lunardb1                                       
ora.cssdmonitor
      1        ONLINE  ONLINE       lunardb1                                       
ora.ctssd
      1        ONLINE  ONLINE       lunardb1                   OBSERVER            
ora.diskmon
      1        OFFLINE OFFLINE                                                   
ora.drivers.acfs
      1        ONLINE  ONLINE       lunardb1                                       
ora.evmd
      1        ONLINE  ONLINE       lunardb1                                       
ora.gipcd
      1        ONLINE  ONLINE       lunardb1                                       
ora.gpnpd
      1        ONLINE  ONLINE       lunardb1                                       
ora.mdnsd
      1        ONLINE  ONLINE       lunardb1                                       
You have new mail in /var/spool/mail/root
[root@lunardb1 grid]# 

From this exercise it feels like root.sh is somewhat more rigorous than the first, fully manual method, and yet it still failed to restore the permissions of the oracle binary, so whether other details are left broken is hard to say.
In short, the method Oracle recommends is still to remove and re-add the node, letting Oracle rebuild every file on that node from scratch, to prevent any future CRS aborts or unexpected node crashes.


11.2 RAC: CRS cannot start after the directory permissions (u01) were changed - repair attempt with rootcrs.pl -init

First, reproduce the broken-node scenario:

[root@lunardb01 grid]# chown -R oracle:oinstall /u01
[root@lunardb01 grid]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@lunardb01 grid]# ps -ef|grep d.bin
root     27170     1  6 19:27 ?        00:00:01 /u01/app/11.2.0/grid/bin/ohasd.bin reboot
grid     27400     1  3 19:27 ?        00:00:00 /u01/app/11.2.0/grid/bin/oraagent.bin
root     27609 19818  0 19:27 pts/1    00:00:00 grep d.bin
[root@lunardb01 grid]# ps -ef|grep d.bin
root     27170     1  5 19:27 ?        00:00:01 /u01/app/11.2.0/grid/bin/ohasd.bin reboot
grid     27400     1  2 19:27 ?        00:00:00 /u01/app/11.2.0/grid/bin/oraagent.bin
root     27621 19818  0 19:27 pts/1    00:00:00 grep d.bin
[root@lunardb01 grid]# ps -ef|grep d.bin
root     27170     1  1 19:27 ?        00:00:01 /u01/app/11.2.0/grid/bin/ohasd.bin reboot
grid     27400     1  0 19:27 ?        00:00:00 /u01/app/11.2.0/grid/bin/oraagent.bin
root     28150 19818  0 19:28 pts/1    00:00:00 grep d.bin
[root@lunardb01 grid]# 
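
As an aside, before deliberately breaking the permissions like this, the current ownership and modes can be snapshotted so the damage is reversible without any of the repair methods below (a sketch using getfacl/setfacl):

# before the chown: record owner, group and mode of everything under /u01
getfacl -R -p /u01 > /tmp/u01.acl
# ... after breaking things, this puts the recorded values back:
setfacl --restore=/tmp/u01.acl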

As you can see, CRS no longer comes up, and the background logs report errors:

-----ohasd errors:
2014-10-04 19:27:27.643: [   CRSPE][1148361024] {0:0:2} RI [ora.mdnsd 1 1] new internal state: [STARTING] old value: [STABLE]
2014-10-04 19:27:27.643: [   CRSPE][1148361024] {0:0:2} Sending message to agfw: id = 223
2014-10-04 19:27:27.644: [   CRSPE][1148361024] {0:0:2} CRS-2672: Attempting to start 'ora.mdnsd' on 'lunardb01'

2014-10-04 19:27:27.644: [    AGFW][1137854784] {0:0:2} Agfw Proxy Server received the message: RESOURCE_START[ora.mdnsd 1 1] ID 4098:223
2014-10-04 19:27:27.644: [    AGFW][1137854784] {0:0:2} Creating the resource: ora.mdnsd 1 1
2014-10-04 19:27:27.644: [    AGFW][1137854784] {0:0:2} Initializing the resource ora.mdnsd 1 1 for type ora.mdns.type
2014-10-04 19:27:27.644: [    AGFW][1137854784] {0:0:2} SR: acl = owner:grid:rw-,pgrp:oinstall:rw-,other::r--,user:grid:rwx
2014-10-04 19:27:27.645: [   CRSPE][1148361024] {0:0:2} ICE has queued an operation. Details: Operation [START of [ora.gpnpd 1 1] on [lunardb01] : local=0, unplanned=00x2aaab00c68f0] cannot run cause it needs W lock for: WO for Placement Path RI:[ora.mdnsd 1 1] server [lunardb01] target states [ONLINE ], locked by op [START of [ora.mdnsd 1 1] on [lunardb01] : local=0, unplanned=00x2aaab00b72e0]. Owner: CRS-2683: It is locked by 'SYSTEM' for command 'Resource Autostart : lunardb01'
-----crsd errors:
2014-10-04 19:26:23.937: [ CRSCOMM][1158867264][FFAIL] Ipc: Couldnt clscreceive message, no message: 11
2014-10-04 19:26:23.938: [ CRSCOMM][1158867264] Ipc: Client disconnected.
2014-10-04 19:26:23.938: [ CRSCOMM][1158867264][FFAIL] IpcL: Listener got clsc error 11 for memNum. 1
2014-10-04 19:26:23.938: [ CRSCOMM][1158867264] IpcL: connection to member 1 has been removed
2014-10-04 19:26:23.938: [CLSFRAME][1158867264] Removing IPC Member:{Relative|Node:0|Process:1|Type:3}
2014-10-04 19:26:23.938: [CLSFRAME][1158867264] Disconnected from AGENT process: {Relative|Node:0|Process:1|Type:3}
2014-10-04 19:26:23.938: [    AGFW][1165171008] {1:33686:190} Agfw Proxy Server received process disconnected notification, count=1
2014-10-04 19:26:23.939: [    AGFW][1165171008] {1:33686:190} /u01/app/11.2.0/grid/bin/oraagent_grid disconnected.
2014-10-04 19:26:23.939: [    AGFW][1165171008] {1:33686:190} Agent /u01/app/11.2.0/grid/bin/oraagent_grid[5646] stopped!
2014-10-04 19:26:23.939: [ CRSCOMM][1165171008] {1:33686:190} IpcL: removeConnection: Member 1 does not exist.
-----alert log errors:
2014-10-04 19:27:23.293
[ohasd(27170)]CRS-2112:The OLR service started on node lunardb01.
2014-10-04 19:27:23.314
[ohasd(27170)]CRS-1301:Oracle High Availability Service started on node lunardb01.
2014-10-04 19:27:23.314
[ohasd(27170)]CRS-8017:location: /etc/oracle/lastgasp has 2 reboot advisory log files, 0 were announced and 0 errors occurred
2014-10-04 19:27:24.351
[/u01/app/11.2.0/grid/bin/orarootagent.bin(27307)]CRS-5016:Process "/u01/app/11.2.0/grid/bin/acfsload" spawned by agent "/u01/app/11.2.0/grid/bin/orarootagent.bin" for action "check" failed: details at "(:CLSN00010:)" in "/u01/app/11.2.0/grid/log/lunardb01/agent/ohasd/orarootagent_root/orarootagent_root.log"
2014-10-04 19:27:27.171
[/u01/app/11.2.0/grid/bin/orarootagent.bin(27307)]CRS-2302:Cannot get GPnP profile. Error CLSGPNP_NO_DAEMON (GPNPD daemon is not running).
2014-10-04 19:29:27.802
[/u01/app/11.2.0/grid/bin/oraagent.bin(27400)]CRS-5818:Aborted command 'start' for resource 'ora.mdnsd'. Details at (:CRSAGF00113:) {0:0:2} in /u01/app/11.2.0/grid/log/lunardb01/agent/ohasd/oraagent_grid//oraagent_grid.log.
2014-10-04 19:29:31.812
[ohasd(27170)]CRS-2757:Command 'Start' timed out waiting for response from the resource 'ora.mdnsd'. Details at (:CRSPE00111:) {0:0:2} in /u01/app/11.2.0/grid/log/lunardb01/ohasd/ohasd.log.
2014-10-04 19:31:34.907
[/u01/app/11.2.0/grid/bin/oraagent.bin(29240)]CRS-5818:Aborted command 'start' for resource 'ora.mdnsd'. Details at (:CRSAGF00113:) {0:0:2} in /u01/app/11.2.0/grid/log/lunardb01/agent/ohasd/oraagent_grid//oraagent_grid.log.
2014-10-04 19:31:38.918
[ohasd(27170)]CRS-2757:Command 'Start' timed out waiting for response from the resource 'ora.mdnsd'. Details at (:CRSPE00111:) {0:0:2} in /u01/app/11.2.0/grid/log/lunardb01/ohasd/ohasd.log.
2014-10-04 19:33:41.993
[/u01/app/11.2.0/grid/bin/oraagent.bin(30882)]CRS-5818:Aborted command 'start' for resource 'ora.gpnpd'. Details at (:CRSAGF00113:) {0:0:2} in /u01/app/11.2.0/grid/log/lunardb01/agent/ohasd/oraagent_grid//oraagent_grid.log.
2014-10-04 19:33:46.004
[ohasd(27170)]CRS-2757:Command 'Start' timed out waiting for response from the resource 'ora.gpnpd'. Details at (:CRSPE00111:) {0:0:2} in /u01/app/11.2.0/grid/log/lunardb01/ohasd/ohasd.log.

You can see it is stuck starting the ora.mdnsd resource:

[root@lunardb01 grid]# crsctl status res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  OFFLINE                               Instance Shutdown   
ora.cluster_interconnect.haip
      1        ONLINE  OFFLINE                                                   
ora.crf
      1        ONLINE  OFFLINE                                                   
ora.crsd
      1        ONLINE  OFFLINE                                                   
ora.cssd
      1        ONLINE  OFFLINE                                                   
ora.cssdmonitor
      1        ONLINE  OFFLINE                                                   
ora.ctssd
      1        ONLINE  OFFLINE                                                   
ora.diskmon
      1        ONLINE  OFFLINE                                                   
ora.drivers.acfs
      1        ONLINE  OFFLINE                                                   
ora.evmd
      1        ONLINE  OFFLINE                                                   
ora.gipcd
      1        ONLINE  OFFLINE                                                   
ora.gpnpd
      1        ONLINE  OFFLINE                                                   
ora.mdnsd
      1        ONLINE  OFFLINE                               STARTING            
[root@lunardb01 grid]# 

Try to repair it with the -init option of rootcrs.pl; as it turns out, this does not work:

[root@lunardb01 lunardb01]# $GRID_HOME/crs/install/rootcrs.pl -init
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
[root@lunardb01 lunardb01]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@lunardb01 lunardb01]# 
[root@lunardb01 ohasd]# ps -ef|grep d.bin
root     12642     1  0 19:48 ?        00:00:01 /u01/app/11.2.0/grid/bin/ohasd.bin reboot
grid     14804     1  0 19:51 ?        00:00:00 /u01/app/11.2.0/grid/bin/oraagent.bin
root     15481 19818  0 19:52 pts/1    00:00:00 grep d.bin
[root@lunardb01 ohasd]# ps -ef|grep d.bin
root     12642     1  0 19:48 ?        00:00:01 /u01/app/11.2.0/grid/bin/ohasd.bin reboot
grid     14804     1  0 19:51 ?        00:00:00 /u01/app/11.2.0/grid/bin/oraagent.bin
root     15663 19818  0 19:52 pts/1    00:00:00 grep d.bin
[root@lunardb01 ohasd]# 

The errors in the background logs are essentially the same as those above.
Clearly, when faced with a chown -R against /u01, rootcrs.pl -init is of little help for repairing directory permissions.


11.2 RAC: CRS cannot start after the directory permissions (u01) were changed - a summary of manual permission repair

As described in the posts on crsconfig_fileperms (the 11.2 RAC file that defines the required permissions of every file in the grid environment) and crsconfig_dirs (the corresponding file for directories), in theory you could write your own shell script from those two files to reset every permission the grid environment needs, and that looks feasible; a sketch of the idea follows the quoted note below.
And as demonstrated in part 1 (manually fixing the wrong permissions), if you really do fix things by hand just to get CRS up, it does not need to be nearly that elaborate: a handful of commands is enough. But all three manual approaches above are unsupported by Oracle. People have logged SRs about exactly this kind of problem, and the officially recommended method is to remove nodes and add nodes:

The permissions can be reverted back to original values with rootcrs.pl or roothas.pl. There is an option -init:
Reset the permissions of all files and directories under the Oracle CRS/HA home.

For GRID:
rootcrs.pl -init

For Standalone Grid:
roothas.pl -init

If that does not work then permissions can be altered manually with information found from crsconfig_fileperms and crsconfig_dirs files.


Please note that changing the permissions manually is last resort and shouldn't be done unless recommended by Oracle support or development.
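
Following that note, here is a minimal sketch of such a last-resort script, driven by the two files (unsupported, my own; it assumes the formats documented in their headers and silently skips paths that do not exist):

#!/bin/bash
# run as root; GRID_HOME must point at the grid home
UTL=$GRID_HOME/crs/utl

# files: OSLIST FILENAME OWNER GROUP PERMS
awk '!/^#/ && NF==5 {print $2, $3":"$4, $5}' $UTL/crsconfig_fileperms |
while read f own perm; do
  [ -e "$f" ] && chown "$own" "$f" && chmod "$perm" "$f"
done

# directories: OSLIST DIRNAME OWNER GROUP CLOSED-PERMS [OPEN-PERMS]
# entries with no explicit mode (umask defaults) are filtered out by NF>=5
awk '!/^#/ && NF>=5 {print $2, $3":"$4, $5}' $UTL/crsconfig_dirs |
while read d own perm; do
  [ -d "$d" ] && chown "$own" "$d" && chmod "$perm" "$d"
done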

This explanation is easy to understand: compared with 10.2, 11.2 RAC is practically a redesign, a far more complex beast with many more bundled features, so after a manual fix it is an open question what risks remain and how stable the stack will be...
So tomorrow I will reproduce the failure and then try repairing it by removing the node and adding it back. :)


crsconfig_fileperms: the 11.2 RAC file defining the permissions of every file the grid environment needs

In 11.2, under $GRID_HOME/crs/utl there is a file named crsconfig_fileperms that records the permission definition of each file under the grid home, for example:

[root@lunardb01 utl]# pwd
/u01/app/11.2.0/grid/crs/utl
[root@lunardb01 utl]# ls
appvipcfg     crfsetenv            crswrap.sh         evmlogger.conf  logging.properties  oclumon.pl  onsconfig       rootconfigadd      scrctl
clsrwrap      crsconfig_dirs       cvures             gsdctl          lsnodes             ohasd       onsctl          rootdeinstall.sh   setasmgidwrap
cluutil       crsconfig_fileperms  diagcollection.sh  gsd.sh          ndfnceca            ohasd.sles  preupdate.sh    rootdeletenode.sh  srdtool
cluvfy        crsconfig_files      evm.auth           init.ohasd      oc4jctl             ologdbg     qosctl          rootdelete.sh      srvctl
cmdllroot.sh  crswrapexece.pl      evmdaemon.conf     localconfig     oclumon             ologdbg.pl  rootaddnode.sh  rootinstalladd     usrvip
[root@lunardb01 utl]# cat crsconfig_fileperms
# Copyright (c) 2009, 2011, Oracle and/or its affiliates. All rights reserved. 
# The values in each line use the following format:
#
# OSLIST FILENAME OWNER GROUP PERMS
#
# Note:
# 1) OSLIST is a comma-separated list of platforms on which the file
#    permissions need to be set.  'all' indicates that the directory needs
#    to be created on every platform.  OSLIST MUST NOT contain whitespace.
# 2) Permissions need to be specified AS OCTAL NUMBERS.  If permissions
#    are not specified, default (umask) values will be used.
# 3) The fields within each line of this file must be delimited by a single space
#
unix /u01/app/11.2.0/grid/log/lunardb01/alertlunardb01.log grid oinstall 0664
unix /u01/app/11.2.0/grid/bin/usrvip root oinstall 0755
unix /u01/app/11.2.0/grid/bin/appvipcfg root oinstall 0755
unix /u01/app/11.2.0/grid/crs/install/preupdate.sh grid oinstall 0755
unix /u01/app/11.2.0/grid/crs/install/s_crsconfig_defs grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/cluutil grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/ocrcheck root oinstall 0755
unix /u01/app/11.2.0/grid/bin/ocrcheck.bin root oinstall 0755
unix /u01/app/11.2.0/grid/bin/ocrconfig root oinstall 0755
unix /u01/app/11.2.0/grid/bin/ocrconfig.bin root oinstall 0755
unix /u01/app/11.2.0/grid/bin/ocrdump root oinstall 0755
unix /u01/app/11.2.0/grid/bin/ocrdump.bin root oinstall 0755
unix /u01/app/11.2.0/grid/bin/ocrpatch root oinstall 0755
unix /u01/app/11.2.0/grid/bin/appagent grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/clssproxy grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/cssvfupgd root oinstall 0755
unix /u01/app/11.2.0/grid/bin/cssvfupgd.bin root oinstall 0755
unix /u01/app/11.2.0/grid/bin/racgwrap grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/cemutls grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/cemutlo grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/crs_getperm grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/crs_profile grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/crs_register grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/crs_relocate grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/crs_setperm grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/crs_start grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/crs_stat grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/crs_stop grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/crs_unregister grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/gipcd grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/mdnsd grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/gpnpd grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/gpnptool grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/oranetmonitor grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/rdtool grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/octssd root oinstall 0741
unix /u01/app/11.2.0/grid/bin/octssd.bin root oinstall 0741
unix /u01/app/11.2.0/grid/bin/ohasd root oinstall 0741
unix /u01/app/11.2.0/grid/bin/ohasd.bin root oinstall 0741
unix /u01/app/11.2.0/grid/bin/crsd root oinstall 0741
unix /u01/app/11.2.0/grid/bin/crsd.bin root oinstall 0741
unix /u01/app/11.2.0/grid/bin/evmd grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/evminfo grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/evmlogger grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/evmmkbin grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/evmmklib grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/evmpost grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/evmshow grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/evmsort grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/evmwatch grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/lsnodes grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/oifcfg grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/olsnodes grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/oraagent grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/orarootagent root oinstall 0741
unix /u01/app/11.2.0/grid/bin/orarootagent.bin root oinstall 0741
unix /u01/app/11.2.0/grid/bin/scriptagent grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/lsdb grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/emcrsp grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/onsctl grid oinstall 0755
unix /u01/app/11.2.0/grid/crs/install/onsconfig grid oinstall 0554
unix /u01/app/11.2.0/grid/bin/gnsd root oinstall 0741
unix /u01/app/11.2.0/grid/bin/gnsd.bin root oinstall 0741
unix /u01/app/11.2.0/grid/bin/gsd.sh grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/gsdctl grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/scrctl grid oinstall 0750
unix /u01/app/11.2.0/grid/bin/vipca grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/oc4jctl grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/cvures grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/odnsd grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/qosctl grid oinstall 0755
unix /u01/app/11.2.0/grid/crs/install/cmdllroot.sh grid oinstall 0755
unix /u01/app/11.2.0/grid/crs/utl/rootdelete.sh root root 0755
unix /u01/app/11.2.0/grid/crs/utl/rootdeletenode.sh root root 0755
unix /u01/app/11.2.0/grid/crs/utl/rootdeinstall.sh root root 0755
unix /u01/app/11.2.0/grid/crs/utl/rootaddnode.sh root root 0755
unix /u01/app/11.2.0/grid/lib/libskgxpcompat.so grid oinstall 0644

unix /u01/app/11.2.0/grid/bin/srvctl root oinstall 0755
unix /u01/app/11.2.0/grid/bin/clsecho root oinstall 0755
unix /u01/app/11.2.0/grid/bin/clsecho.bin root oinstall 0755
unix /u01/app/11.2.0/grid/bin/clscfg root oinstall 0755
unix /u01/app/11.2.0/grid/bin/clscfg.bin root oinstall 0755
unix /u01/app/11.2.0/grid/bin/clsfmt root oinstall 0755
unix /u01/app/11.2.0/grid/bin/clsfmt.bin root oinstall 0755
unix /u01/app/11.2.0/grid/bin/clsid grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/cluvfy grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/crsctl root oinstall 0755
unix /u01/app/11.2.0/grid/bin/crsctl.bin root oinstall 0755
unix /u01/app/11.2.0/grid/bin/ndfnceca grid oinstall 0750
unix /u01/app/11.2.0/grid/bin/oclskd root oinstall 0755
unix /u01/app/11.2.0/grid/bin/oclskd.bin root oinstall 0751
unix /u01/app/11.2.0/grid/bin/oclsomon grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/oclsvmon grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/ocssd grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/cssdagent root oinstall 0741
unix /u01/app/11.2.0/grid/bin/cssdagent.bin root oinstall 0741
unix /u01/app/11.2.0/grid/bin/cssdmonitor root oinstall 0741
unix /u01/app/11.2.0/grid/bin/cssdmonitor.bin root oinstall 0741
unix /u01/app/11.2.0/grid/bin/diskmon root oinstall 0741
unix /u01/app/11.2.0/grid/bin/diskmon.bin root oinstall 0741
unix /u01/app/11.2.0/grid/bin/diagcollection.sh root oinstall 0755
unix /u01/app/11.2.0/grid/bin/oradnssd grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/oradnssd.bin grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/setasmgidwrap grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/oclumon root oinstall 0750
unix /u01/app/11.2.0/grid/bin/oclumon.bin root oinstall 0750
unix /u01/app/11.2.0/grid/bin/oclumon.pl grid oinstall 0750
unix /u01/app/11.2.0/grid/bin/crswrapexece.pl root oinstall 0744
unix /u01/app/11.2.0/grid/bin/crfsetenv root oinstall 0750
unix /u01/app/11.2.0/grid/bin/osysmond root oinstall 0750
unix /u01/app/11.2.0/grid/bin/osysmond.bin root oinstall 0750
unix /u01/app/11.2.0/grid/bin/ologgerd root oinstall 0750
unix /u01/app/11.2.0/grid/bin/ologdbg grid oinstall 0750
unix /u01/app/11.2.0/grid/bin/ologdbg.pl grid oinstall 0750
unix /etc/oracle/setasmgid root oinstall 4710

# Jars and shared libraries used by the executables invoked by the root script

unix /u01/app/11.2.0/grid/jlib/srvm.jar root oinstall 0644
unix /u01/app/11.2.0/grid/jlib/srvmasm.jar root oinstall 0644
unix /u01/app/11.2.0/grid/jlib/srvctl.jar root oinstall 0644
unix /u01/app/11.2.0/grid/jlib/srvmhas.jar root oinstall 0644
unix /u01/app/11.2.0/grid/jlib/gns.jar root oinstall 0644
unix /u01/app/11.2.0/grid/jlib/ons.jar root oinstall 0644
unix /u01/app/11.2.0/grid/jlib/netcfg.jar root oinstall 0644
unix /u01/app/11.2.0/grid/jlib/i18n.jar root oinstall 0644
unix /u01/app/11.2.0/grid/jlib/supercluster.jar root oinstall 0644
unix /u01/app/11.2.0/grid/jlib/supercluster-common.jar root oinstall 0644
unix /u01/app/11.2.0/grid/jlib/antlr-complete.jar root oinstall 0644
unix /u01/app/11.2.0/grid/jlib/antlr-3.3-complete.jar root oinstall 0644

unix /u01/app/11.2.0/grid/lib/libhasgen11.so root oinstall 0644
unix /u01/app/11.2.0/grid/lib/libocr11.so root oinstall 0644
unix /u01/app/11.2.0/grid/lib/libocrb11.so root oinstall 0644
unix /u01/app/11.2.0/grid/lib/libocrutl11.so root oinstall 0644
unix /u01/app/11.2.0/grid/lib/libclntsh.so.11.1 root oinstall 0644
unix /u01/app/11.2.0/grid/lib/libclntshcore.so.11.1 root oinstall 0644
unix /u01/app/11.2.0/grid/lib/libskgxn2.so root oinstall 0644
unix /u01/app/11.2.0/grid/lib/libskgxp11.so root oinstall 0644
unix /u01/app/11.2.0/grid/lib/libasmclntsh11.so root oinstall 0644
unix /u01/app/11.2.0/grid/lib/libcell11.so root oinstall 0644
unix /u01/app/11.2.0/grid/lib/libnnz11.so root oinstall 0644
unix /u01/app/11.2.0/grid/lib/libclsra11.so root oinstall 0644
unix /u01/app/11.2.0/grid/lib/libgns11.so root oinstall 0644
unix /u01/app/11.2.0/grid/lib/libeons.so root oinstall 0644
unix /u01/app/11.2.0/grid/lib/libonsx.so root oinstall 0644
unix /u01/app/11.2.0/grid/lib/libeonsserver.so root oinstall 0644

unix /u01/app/11.2.0/grid/lib/libsrvm11.so root oinstall 0644
unix /u01/app/11.2.0/grid/lib/libsrvmhas11.so root oinstall 0644
unix /u01/app/11.2.0/grid/lib/libsrvmocr11.so root oinstall 0644
unix /u01/app/11.2.0/grid/lib/libuini11.so root oinstall 0644

unix /u01/app/11.2.0/grid/lib/libgnsjni11.so root oinstall 0644
unix /u01/app/11.2.0/grid/lib/librdjni11.so root oinstall 0644
unix /u01/app/11.2.0/grid/lib/libgnsjni11.so root oinstall 0644
unix /u01/app/11.2.0/grid/lib/libclsce11.so root oinstall 0644
unix /u01/app/11.2.0/grid/lib/libcrf11.so root oinstall 0644

unix /u01/app/11.2.0/grid/bin/diagcollection.pl root oinstall 0755

# crs configuration scripts invoked from rootcrs.pl
unix /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm root oinstall 0755
unix /u01/app/11.2.0/grid/crs/install/s_crsconfig_lib.pm root oinstall 0755
unix /u01/app/11.2.0/grid/crs/install/crsdelete.pm root oinstall 0755
unix /u01/app/11.2.0/grid/crs/install/crspatch.pm root oinstall 0755
unix /u01/app/11.2.0/grid/crs/install/oracss.pm root oinstall 0755
unix /u01/app/11.2.0/grid/crs/install/oraacfs.pm root oinstall 0755
unix /u01/app/11.2.0/grid/crs/install/hasdconfig.pl root oinstall 0755
unix /u01/app/11.2.0/grid/crs/install/rootcrs.pl root oinstall 0755
unix /u01/app/11.2.0/grid/crs/install/roothas.pl root oinstall 0755
unix /u01/app/11.2.0/grid/crs/install/preupdate.sh root oinstall 0755
unix /u01/app/11.2.0/grid/crs/install/rootofs.sh root oinstall 0755


# XXX: required only for dev env, where inittab ($IT) is not present already
unix /etc/inittab root root 0644

# USM FILES
# Only files which will be installed with executable permissions need
# to be listed.
unix /u01/app/11.2.0/grid/bin/acfsdriverstate root oinstall 0755
unix /u01/app/11.2.0/grid/bin/acfsload root oinstall 0755
unix /u01/app/11.2.0/grid/bin/acfsregistrymount root oinstall 0755
unix /u01/app/11.2.0/grid/bin/acfsroot root oinstall 0755
unix /u01/app/11.2.0/grid/bin/acfssinglefsmount root oinstall 0755
unix /u01/app/11.2.0/grid/bin/acfsrepl_apply root oinstall 0755
unix /u01/app/11.2.0/grid/bin/acfsrepl_apply.bin root oinstall 0755
unix /u01/app/11.2.0/grid/bin/acfsreplcrs grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/acfsreplcrs.pl grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/acfsrepl_initializer root oinstall 0755
unix /u01/app/11.2.0/grid/bin/acfsrepl_monitor grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/acfsrepl_preapply grid oinstall 0755
unix /u01/app/11.2.0/grid/bin/acfsrepl_transport grid oinstall 0755
unix /u01/app/11.2.0/grid/lib/acfsdriverstate.pl root oinstall 0644
unix /u01/app/11.2.0/grid/lib/acfsload.pl root oinstall 0644
unix /u01/app/11.2.0/grid/lib/acfsregistrymount.pl root oinstall 0644
unix /u01/app/11.2.0/grid/lib/acfsroot.pl root oinstall 0644
unix /u01/app/11.2.0/grid/lib/acfssinglefsmount.pl root oinstall 0644
unix /u01/app/11.2.0/grid/lib/acfstoolsdriver.sh root oinstall 0755
unix /u01/app/11.2.0/grid/lib/libusmacfs11.so grid oinstall 0644

#EVM config files
unix /u01/app/11.2.0/grid/evm/admin/conf/evm.auth root oinstall 0644
unix /u01/app/11.2.0/grid/evm/admin/conf/evmdaemon.conf root oinstall 0644
unix /u01/app/11.2.0/grid/evm/admin/conf/evmlogger.conf root oinstall 0644
unix /u01/app/11.2.0/grid/cdata/lunardb01.olr root oinstall 0600
unix /etc/oracle/olr.loc root oinstall 0644
unix /etc/oracle/ocr.loc root oinstall 0644
[root@lunardb01 utl]# 
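
Since every entry follows the OSLIST FILENAME OWNER GROUP PERMS layout documented in the header, the file is easy to consume from a script. A minimal audit sketch (my own, unsupported) that reports files whose current owner or mode differ from the definitions:

# run as root from $GRID_HOME/crs/utl; prints MISSING/MISMATCH lines only
awk '!/^#/ && NF==5 {print $2, $3, $4, $5}' crsconfig_fileperms |
while read f owner group perm; do
  [ -e "$f" ] || { echo "MISSING  $f"; continue; }
  cur=$(stat -c '%U:%G %a' "$f")
  # stat prints the octal mode without leading zeros, so strip them for comparison
  want="$owner:$group $(echo "$perm" | sed 's/^0\{1,\}//')"
  [ "$cur" = "$want" ] || echo "MISMATCH $f: have [$cur], want [$want]"
done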

crsconfig_dirs: the 11.2 RAC file defining the permissions of every directory the grid environment needs

Under the same $GRID_HOME/crs/utl directory in 11.2 there is a file named crsconfig_dirs that records the permission definition of each directory under the grid home, for example:

[root@lunardb01 utl]# cat crsconfig_dirs
# Copyright (c) 2009, 2011, Oracle and/or its affiliates. All rights reserved. 
# The values in each line use the following format:
#
# OSLIST DIRNAME OWNER GROUP CLOSED-PERMS OPEN-PERMS
#
# Note:
# 1) OSLIST is a comma-separated list of platforms on which the directory
#    needs to be created.  'all' indicates that the directory needs to be
#    created on every platform.  OSLIST MUST NOT contain whitespace.
# 2) Permissions need to be specified AS OCTAL NUMBERS.  If permissions are
#    not specified, default (umask) values will be used.
#
# TBD: OPEN-PERMS need to be added for each dir

all /u01/app/11.2.0/grid/cdata grid oinstall 0775
all /u01/app/11.2.0/grid/cdata/lunardb-cluster grid oinstall 0775
all /u01/app/11.2.0/grid/cfgtoollogs grid oinstall 0775
all /u01/app/11.2.0/grid/cfgtoollogs/crsconfig grid oinstall 0775
all /u01/app/11.2.0/grid/log grid oinstall 0775
all /u01/app/11.2.0/grid/log/lunardb01 root oinstall 01755
all /u01/app/11.2.0/grid/log/lunardb01/crsd root oinstall 0750
all /u01/app/11.2.0/grid/log/lunardb01/ctssd root oinstall 0750
all /u01/app/11.2.0/grid/log/lunardb01/evmd grid oinstall 0750
all /u01/app/11.2.0/grid/log/lunardb01/cssd grid oinstall 0750
all /u01/app/11.2.0/grid/log/lunardb01/mdnsd grid oinstall 0750
all /u01/app/11.2.0/grid/log/lunardb01/gpnpd grid oinstall 0750
all /u01/app/11.2.0/grid/log/lunardb01/gnsd root oinstall 0750
all /u01/app/11.2.0/grid/log/lunardb01/srvm grid oinstall 0750
all /u01/app/11.2.0/grid/log/lunardb01/gipcd grid oinstall 0750
all /u01/app/11.2.0/grid/log/lunardb01/diskmon grid oinstall 0750
all /u01/app/11.2.0/grid/log/lunardb01/cvu grid oinstall 0750
all /u01/app/11.2.0/grid/log/lunardb01/cvu/cvulog grid oinstall 0750
all /u01/app/11.2.0/grid/log/lunardb01/cvu/cvutrc grid oinstall 0750
all /u01/app/11.2.0/grid/log/lunardb01/acfssec root oinstall 0755
all /u01/app/11.2.0/grid/log/lunardb01/acfsrepl grid oinstall 0750
all /u01/app/11.2.0/grid/log/lunardb01/acfslog grid oinstall 0750
all /u01/app/11.2.0/grid/cdata/localhost grid oinstall 0755
all /u01/app/11.2.0/grid/cdata/lunardb01 grid oinstall 0755
all /u01/app/11.2.0/grid/cv grid oinstall 0775
all /u01/app/11.2.0/grid/cv/log grid oinstall 0775
all /u01/app/11.2.0/grid/cv/init grid oinstall 0775
all /u01/app/11.2.0/grid/cv/report grid oinstall 0775
all /u01/app/11.2.0/grid/cv/report/html grid oinstall 0775
all /u01/app/11.2.0/grid/cv/report/text grid oinstall 0775
all /u01/app/11.2.0/grid/cv/report/xml grid oinstall 0775

# These dirs must be owned by crsuser in SIHA, and $SUPERUSER in cluster env.
# 'HAS_USER' is set appropriately in roothas.pl and rootcrs.pl for this
# purpose
all /u01/app/11.2.0/grid/log/lunardb01/ohasd root oinstall 0750
all /u01/app/11.2.0/grid/lib root oinstall 0755
all /u01/app/11.2.0/grid/bin root oinstall 0755

all /u01/app/11.2.0/grid/log/lunardb01/agent root oinstall 01775
all /u01/app/11.2.0/grid/log/lunardb01/agent/crsd root oinstall 01777
all /u01/app/11.2.0/grid/log/lunardb01/agent/ohasd root oinstall 01775
all /u01/app/11.2.0/grid/log/lunardb01/client grid oinstall 0751
all /u01/app/11.2.0/grid/log/lunardb01/racg grid oinstall 01775
all /u01/app/11.2.0/grid/log/lunardb01/racg/racgmain grid oinstall 01777
all /u01/app/11.2.0/grid/log/lunardb01/racg/racgeut grid oinstall 01777
all /u01/app/11.2.0/grid/log/lunardb01/racg/racgevtf grid oinstall 01777
all /u01/app/11.2.0/grid/log/lunardb01/admin grid oinstall 0750
all /u01/app/11.2.0/grid/log/diag/clients grid asmadmin 01770
all /u01/app/11.2.0/grid/evm grid oinstall 0750
all /u01/app/11.2.0/grid/evm/init grid oinstall 0750
all /u01/app/11.2.0/grid/auth/evm/lunardb01 root oinstall 01777
all /u01/app/11.2.0/grid/evm/log grid oinstall 01770
all /u01/app/11.2.0/grid/eons/init grid oinstall 0750
all /u01/app/11.2.0/grid/auth/ohasd/lunardb01 root oinstall 01777
all /u01/app/11.2.0/grid/mdns grid oinstall 0750
all /u01/app/11.2.0/grid/mdns/init grid oinstall 0750
all /u01/app/11.2.0/grid/gipc grid oinstall 0750
all /u01/app/11.2.0/grid/gipc/init grid oinstall 0750
all /u01/app/11.2.0/grid/gns root oinstall 0750
all /u01/app/11.2.0/grid/gns/init root oinstall 0750
all /u01/app/11.2.0/grid/gpnp grid oinstall 0750
all /u01/app/11.2.0/grid/gpnp/init grid oinstall 0750
all /u01/app/11.2.0/grid/ohasd grid oinstall 0750
all /u01/app/11.2.0/grid/ohasd/init grid oinstall 0750
all /u01/app/11.2.0/grid/gpnp grid oinstall 0750
all /u01/app/11.2.0/grid/gpnp/profiles grid oinstall 0750
all /u01/app/11.2.0/grid/gpnp/profiles/peer grid oinstall 0750
all /u01/app/11.2.0/grid/gpnp/wallets grid oinstall 01750
all /u01/app/11.2.0/grid/gpnp/wallets/root grid oinstall 01700
all /u01/app/11.2.0/grid/gpnp/wallets/prdr grid oinstall 01750
all /u01/app/11.2.0/grid/gpnp/wallets/peer grid oinstall 01700
all /u01/app/11.2.0/grid/gpnp/wallets/pa grid oinstall 01700
all /u01/app/11.2.0/grid/mdns grid oinstall 0750
all /u01/app/11.2.0/grid/gpnp grid oinstall 0750
all /u01/app/11.2.0/grid/gpnp/lunardb01/profiles grid oinstall 0750
all /u01/app/11.2.0/grid/gpnp/lunardb01/profiles/peer grid oinstall 0750
all /u01/app/11.2.0/grid/gpnp/lunardb01/wallets grid oinstall 01750
all /u01/app/11.2.0/grid/gpnp/lunardb01/wallets/root grid oinstall 01700
all /u01/app/11.2.0/grid/gpnp/lunardb01/wallets/prdr grid oinstall 01750
all /u01/app/11.2.0/grid/gpnp/lunardb01/wallets/peer grid oinstall 01700
all /u01/app/11.2.0/grid/gpnp/lunardb01/wallets/pa grid oinstall 01700
all /u01/app/11.2.0/grid/css grid oinstall 0711
all /u01/app/11.2.0/grid/css/init grid oinstall 0711
all /u01/app/11.2.0/grid/css/log grid oinstall 0711
all /u01/app/11.2.0/grid/auth/css/lunardb01 root oinstall 01777
all /u01/app/11.2.0/grid/crs root oinstall 0755
all /u01/app/11.2.0/grid/crs/init root oinstall 0755
all /u01/app/11.2.0/grid/crs/profile root oinstall 0755
all /u01/app/11.2.0/grid/crs/script root oinstall 0755
all /u01/app/11.2.0/grid/crs/template root oinstall 0755
all /u01/app/11.2.0/grid/auth/crs/lunardb01 root oinstall 01777
all /u01/app/11.2.0/grid/crs/log grid oinstall 01750
all /u01/app/11.2.0/grid/crs/trace grid oinstall 01750
all /u01/app/11.2.0/grid/crs/public grid oinstall 01777
all /u01/app/11.2.0/grid/ctss root oinstall 0755
all /u01/app/11.2.0/grid/ctss/init root oinstall 0755
all /u01/app/11.2.0/grid/racg/usrco grid oinstall
all /u01/app/11.2.0/grid/racg/dump grid oinstall 0775
all /u01/app/11.2.0/grid/srvm/admin grid oinstall 0775
all /u01/app/11.2.0/grid/srvm/log grid oinstall 0775
all /u01/app/11.2.0/grid/evm/admin/conf grid oinstall 0750
all /u01/app/11.2.0/grid/evm/admin/logger grid oinstall 0750
all /u01/app/11.2.0/grid/crf root oinstall 0750
all /u01/app/11.2.0/grid/crf/admin root oinstall 0750
all /u01/app/11.2.0/grid/crf/admin/run grid oinstall 0750
all /u01/app/11.2.0/grid/crf/admin/run/crfmond root oinstall 0700
all /u01/app/11.2.0/grid/crf/admin/run/crflogd root oinstall 0700
all /u01/app/11.2.0/grid/crf/db root oinstall 0750
all /u01/app/11.2.0/grid/crf/db/lunardb01 root oinstall 0750
all /u01/app/11.2.0/grid/osysmond root oinstall 0755
all /u01/app/11.2.0/grid/osysmond/init root oinstall 0755
all /u01/app/11.2.0/grid/ologgerd root oinstall 0755
all /u01/app/11.2.0/grid/ologgerd/init root oinstall 0755
all /u01/app/11.2.0/grid/log/lunardb01/crfmond root oinstall 0750
all /u01/app/11.2.0/grid/log/lunardb01/crflogd root oinstall 0750

unix /etc/oracle/oprocd root oinstall 0775
unix /etc/oracle/oprocd/check root oinstall 0770
unix /etc/oracle/oprocd/stop root oinstall 0770
unix /etc/oracle/oprocd/fatal root oinstall 0770
unix /etc/oracle/scls_scr root oinstall 0755
unix /etc/oracle/scls_scr/lunardb01 root oinstall 0755
unix /var/tmp/.oracle root oinstall 01777
unix /tmp/.oracle root oinstall 01777
unix /u01/app/11.2.0/grid/log/lunardb01/acfsreplroot root oinstall 0750
# create $ID, if it doesn't exist (applicable only in dev env)
unix /etc/init.d root root 0755
unix /u01/app/11.2.0/grid root oinstall 0755

# Last Gasp files directory - change "unix" to "all"
# once Windows makes a directory decision.
unix /etc/oracle/lastgasp root oinstall 0770
[root@lunardb01 utl]#  
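
The OSLIST DIRNAME OWNER GROUP CLOSED-PERMS layout makes this file scriptable as well. A small sketch (my own, unsupported) that recreates any missing directory with its documented owner and closed mode; entries without an explicit mode, which the header says fall back to umask, are skipped:

# run as root from $GRID_HOME/crs/utl
awk '!/^#/ && NF>=5 {print $2, $3":"$4, $5}' crsconfig_dirs |
while read d own perm; do
  if [ ! -d "$d" ]; then
    mkdir -p "$d" && chown "$own" "$d" && chmod "$perm" "$d"
  fi
done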

11.2 RAC: CRS cannot start after the directory permissions (u01) were changed - 1: manually fixing the wrong permissions

In 11.2 RAC, if the ownership of the grid installation directory is changed (something like chown -R xxx /u01, with /u01 being the usual location), CRS ends up unable to start: during startup the mdnsd process is the first one to get stuck, and the CRS alert log shows messages like these:

2014-09-26 20:56:25.895
[ohasd(16366)]CRS-2757:Command 'Start' timed out waiting for response from the resource 'ora.mdnsd'. Details at (:CRSPE00111:) {0:0:199} in /u01/app/11.2.0/grid/log/lunardb1/ohasd/ohasd.log.
2014-09-26 20:58:28.984
[/u01/app/11.2.0/grid/bin/oraagent.bin(15422)]CRS-5818:Aborted command 'start' for resource 'ora.mdnsd'. Details at (:CRSAGF00113:) {0:0:228} in /u01/app/11.2.0/grid/log/lunardb1/agent/ohasd/oraagent_grid//oraagent_grid.log.
2014-09-26 20:58:32.994
[ohasd(16366)]CRS-2757:Command 'Start' timed out waiting for response from the resource 'ora.mdnsd'. Details at (:CRSPE00111:) {0:0:228} in /u01/app/11.2.0/grid/log/lunardb1/ohasd/ohasd.log.
2014-09-26 23:36:05.848
[/u01/app/11.2.0/grid/bin/orarootagent.bin(8064)]CRS-5016:Process "/u01/app/11.2.0/grid/bin/acfsload" spawned by agent "/u01/app/11.2.0/grid/bin/orarootagent.bin" for action "check" failed: details at "(:CLSN00010:)" in "/u01/app/11.2.0/grid/log/lunardb1/agent/ohasd/orarootagent_root/orarootagent_root.log"

Below we try three methods to repair the problem.
Method 1: directly fix the permissions of /u01 and the other affected files and directories.
Note: this method is strictly a last-resort way to get the database or ASM up in an emergency. On a production system, the officially recommended approach is to remove and re-add the node (described in detail under method 3 later).

First change /u01 to grid:oinstall, then change /u01/app/oracle to oracle:oinstall:

chown -R grid:oinstall /u01
chown -R oracle:oinstall /u01/app/oracle

After those two chown commands, CRS can seemingly start, but the important agent processes in the background are still erroring:

2014-10-04 19:56:16.828
[/u01/app/11.2.0/grid/bin/oraagent.bin(19898)]CRS-5016:Process "/u01/app/11.2.0/grid/bin/lsnrctl" spawned by agent "/u01/app/11.2.0/grid/bin/oraagent.bin" for action "check" failed: details at "(:CLSN00010:)" in "/u01/app/11.2.0/grid/log/q9db01/agent/crsd/oraagent_grid/oraagent_grid.log"
2014-10-04 19:56:16.832
[/u01/app/11.2.0/grid/bin/oraagent.bin(19898)]CRS-5016:Process "/u01/app/11.2.0/grid/bin/lsnrctl" spawned by agent "/u01/app/11.2.0/grid/bin/oraagent.bin" for action "check" failed: details at "(:CLSN00010:)" in "/u01/app/11.2.0/grid/log/q9db01/agent/crsd/oraagent_grid/oraagent_grid.log"
2014-10-04 19:56:16.848
[/u01/app/11.2.0/grid/bin/oraagent.bin(19898)]CRS-5016:Process "/u01/app/11.2.0/grid/opmn/bin/onsctli" spawned by agent "/u01/app/11.2.0/grid/bin/oraagent.bin" for action "check" failed: details at "(:CLSN00010:)" in "/u01/app/11.2.0/grid/log/q9db01/agent/crsd/oraagent_grid/oraagent_grid.log"
2014-10-04 19:56:23.733
[/u01/app/11.2.0/grid/bin/oraagent.bin(20116)]CRS-5010:Update of configuration file "/u01/app/oracle/product/11.2.0/db_1/srvm/admin/oratab.bak.q9db01" failed: details at "(:CLSN00013:)" in "/u01/app/11.2.0/grid/log/q9db01/agent/crsd/oraagent_oracle//oraagent_oracle.log"

The ownership of some files critical to ohasd and crsd also has to be fixed at the same time:

[grid@lunardb1 ~]$ env|grep ORA
ORACLE_SID=+ASM1
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/11.2.0/grid
[grid@lunardb1 ~]$ exit
logout
[root@lunardb1 app]# export GRID_HOME=/u01/app/11.2.0/grid
[root@lunardb1 app]# cd $GRID_HOME/log/`hostname`/crsd ; 
-bash: cd: /u01/app/11.2.0/grid/log/lunardb1.800best.com/crsd: No such file or directory
[root@lunardb1 app]# cd /u01/app/11.2.0/grid/log/lunardb1/crsd
[root@lunardb1 crsd]# chown root:root *
[root@lunardb1 crsd]# cd ../ohasd
[root@lunardb1 ohasd]# chown root:root *
[root@lunardb1 ohasd]# cd ..
[root@lunardb1 lunardb1]# ll
total 2312
drwxr-xr-x 2 grid oinstall    4096 Jun  7  2013 acfs
drwxr-x--- 2 grid oinstall    4096 Jun  7  2013 acfslog
drwxr-x--- 2 grid oinstall    4096 Jun  7  2013 acfsrepl
drwxr-x--- 2 grid oinstall    4096 Jun  7  2013 acfsreplroot
drwxr-xr-x 2 grid oinstall    4096 Jun  7  2013 acfssec
drwxr-x--- 2 grid oinstall    4096 Jun  7  2013 admin
drwxrwxr-t 4 grid oinstall    4096 Jun  7  2013 agent
-rw-rw-r-- 1 grid oinstall 2266872 Oct  4 17:48 alertlunardb1.log
drwxr-x--x 2 grid oinstall    4096 Jun 17 14:24 client
drwxr-x--- 2 grid oinstall    4096 Aug  6 15:40 crflogd
drwxr-x--- 2 grid oinstall    4096 Jun  7  2013 crfmond
drwxr-x--- 2 grid oinstall    4096 Sep  2 01:02 crsd
drwxr-x--- 2 grid oinstall    4096 Sep 27 01:54 cssd
drwxr-x--- 2 grid oinstall    4096 Sep 26 10:02 ctssd
drwxr-x--- 4 grid oinstall    4096 Jun  7  2013 cvu
drwxr-x--- 2 grid oinstall    4096 Jun  7  2013 diskmon
drwxr-x--- 2 grid oinstall    4096 Jun  7  2013 evmd
drwxr-x--- 2 grid oinstall    4096 Oct  4 17:47 gipcd
drwxr-x--- 2 grid oinstall    4096 Jun  7  2013 gnsd
drwxr-x--- 2 grid oinstall    4096 Oct  4 17:47 gpnpd
drwxr-x--- 2 grid oinstall    4096 Sep 27 01:30 mdnsd
drwxr-x--- 2 grid oinstall    4096 Sep 27 01:49 ohasd
drwxrwxr-t 5 grid oinstall    4096 Jun  7  2013 racg
drwxr-x--- 2 grid oinstall    4096 Jun  7  2013 srvm
[root@lunardb1 lunardb1]# cd /crsd
-bash: cd: /crsd: No such file or directory
[root@lunardb1 lunardb1]# cd agent/crsd/orarootagent_root 
[root@lunardb1 orarootagent_root]# chown root:root *
[root@lunardb1 orarootagent_root]# 
[root@lunardb1 orarootagent_root]# cd /u01/app/11.2.0/grid/log/lunardb1/agent/ohasd/orarootagent_root
[root@lunardb1 orarootagent_root]# ll
total 104528
-rw-r--r-- 1 grid oinstall 10564752 Sep 26 14:18 orarootagent_root.l01
-rw-r--r-- 1 grid oinstall 10565738 Sep 24 04:23 orarootagent_root.l02
-rw-r--r-- 1 grid oinstall 10563920 Sep 21 18:28 orarootagent_root.l03
-rw-r--r-- 1 grid oinstall 10565310 Sep 19 08:40 orarootagent_root.l04
-rw-r--r-- 1 grid oinstall 10565749 Sep 16 22:41 orarootagent_root.l05
-rw-r--r-- 1 grid oinstall 10563754 Sep 14 12:41 orarootagent_root.l06
-rw-r--r-- 1 grid oinstall 10563226 Sep 12 02:49 orarootagent_root.l07
-rw-r--r-- 1 grid oinstall 10561202 Sep  9 17:03 orarootagent_root.l08
-rw-r--r-- 1 grid oinstall 10543893 Sep  7 07:09 orarootagent_root.l09
-rw-r--r-- 1 grid oinstall 10566373 Sep  4 21:51 orarootagent_root.l10
-rw-r--r-- 1 grid oinstall  1213705 Oct  4 17:57 orarootagent_root.log
-rw-r--r-- 1 grid oinstall        0 Jun  7  2013 orarootagent_rootOUT.log
-rw-r--r-- 1 grid oinstall        5 Oct  4 17:48 orarootagent_root.pid
[root@lunardb1 orarootagent_root]# chown root:root *
[root@lunardb1 orarootagent_root]# 

[root@lunardb1 orarootagent_root]# cd $GRID_HOME
[root@lunardb1 grid]# cd bin
[root@lunardb1 bin]# ll oradism
-rwxr-x--- 1 grid oinstall 71758 Sep 17  2011 oradism
[root@lunardb1 bin]# 
[root@lunardb1 bin]# chown root:oinstall oradism
[root@lunardb1 bin]# chmod 4750 oradism
[root@lunardb1 bin]# ll oradism
-rwsr-x--- 1 root oinstall 71758 Sep 17  2011 oradism
[root@lunardb1 bin]# 
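
oradism and (later) oracle are not the only setuid/setgid pieces under the grid home. A useful cross-check is to list every setuid/setgid file and diff the listing against the same output from the healthy node (a sketch; GNU find assumed):

# run on both nodes, then diff the two output files
find $GRID_HOME -type f -perm /6000 -exec ls -l {} \; | sort -k9 > /tmp/suid_$(hostname).txt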

At this point CRS can be started.
However, you can see that on the node with the broken directory permissions, the database did not come up:

[root@lunardb1 app]# ps -ef|grep d.bin
root      3722     1  3 17:47 ?        00:00:02 /u01/app/11.2.0/grid/bin/ohasd.bin reboot
grid      3938     1  0 17:47 ?        00:00:00 /u01/app/11.2.0/grid/bin/oraagent.bin
grid      3950     1  0 17:47 ?        00:00:00 /u01/app/11.2.0/grid/bin/mdnsd.bin
grid      4003     1  0 17:47 ?        00:00:00 /u01/app/11.2.0/grid/bin/gpnpd.bin
grid      4024     1  1 17:47 ?        00:00:00 /u01/app/11.2.0/grid/bin/gipcd.bin
root      4071     1  0 17:47 ?        00:00:00 /u01/app/11.2.0/grid/bin/cssdmonitor
root      4086     1  0 17:47 ?        00:00:00 /u01/app/11.2.0/grid/bin/cssdagent
grid      4117     1  2 17:47 ?        00:00:01 /u01/app/11.2.0/grid/bin/ocssd.bin 
root      4508     1  1 17:48 ?        00:00:00 /u01/app/11.2.0/grid/bin/orarootagent.bin
root      4531     1  0 17:48 ?        00:00:00 /u01/app/11.2.0/grid/bin/octssd.bin reboot
grid      4571     1  0 17:48 ?        00:00:00 /u01/app/11.2.0/grid/bin/evmd.bin
root      5322     1  5 17:48 ?        00:00:01 /u01/app/11.2.0/grid/bin/crsd.bin reboot
grid      5600  4571  0 17:48 ?        00:00:00 /u01/app/11.2.0/grid/bin/evmlogger.bin -o /u01/app/11.2.0/grid/evm/log/evmlogger.info -l /u01/app/11.2.0/grid/evm/log/evmlogger.log
grid      5646     1  6 17:48 ?        00:00:00 /u01/app/11.2.0/grid/bin/oraagent.bin
root      5650     1  3 17:48 ?        00:00:00 /u01/app/11.2.0/grid/bin/orarootagent.bin
grid      5847     1  2 17:48 ?        00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER -inherit
grid      5864     1  0 17:49 ?        00:00:00 /u01/app/11.2.0/grid/bin/tnslsnr LISTENER_DG -inherit
root      5869   429  0 17:49 pts/1    00:00:00 grep d.bin
[root@lunardb1 app]# crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.DATA.dg
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.DATA1.dg
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.LISTENER_DG.lsnr
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.OCR_VOTE.dg
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.REDODG.dg
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.asm
               ONLINE  ONLINE       lunardb1                   Started             
               ONLINE  ONLINE       lunardb2                   Started             
ora.gsd
               OFFLINE OFFLINE      lunardb1                                       
               OFFLINE OFFLINE      lunardb2                                       
ora.net1.network
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.net2.network
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.ons
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.registry.acfs
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       lunardb2                                       
ora.cvu
      1        ONLINE  ONLINE       lunardb2                                       
ora.oc4j
      1        ONLINE  ONLINE       lunardb2                                       
ora.lunardb.db
      1        ONLINE  OFFLINE                                           
      2        ONLINE  ONLINE       lunardb2                   Open,Readonly       
ora.lunardb1-dg-vip.vip
      1        ONLINE  ONLINE       lunardb1                                       
ora.lunardb1.vip
      1        ONLINE  ONLINE       lunardb1                                       
ora.lunardb2-dg-vip.vip
      1        ONLINE  ONLINE       lunardb2                                       
ora.lunardb2.vip
      1        ONLINE  ONLINE       lunardb2                                       
ora.scan1.vip
      1        ONLINE  ONLINE       lunardb2                                       
[root@lunardb1 app]#

Starting the database manually fails with the following errors (ss is my alias for launching sqlplus):

[oracle@lunardb1 ~]$ ss

SQL*Plus: Release 11.2.0.3.0 Production on Sat Oct 4 17:52:09 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

17:52:09 @>startup mount
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/lunardb/spfilelunardb.ora'
ORA-17503: ksfdopn:2 Failed to open file +DATA/lunardb/spfilelunardb.ora
ORA-12547: TNS:lost contact
17:52:15 @>exit
Disconnected
[oracle@lunardb1 ~]$ 

This error usually means the permissions on the oracle binary are wrong. A retry fails the same way:

[oracle@lunardb1 trace]$ ss

SQL*Plus: Release 11.2.0.3.0 Production on Sat Oct 4 18:01:53 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

18:01:53 @>startup   
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '+DATA/lunardb/spfilelunardb.ora'
ORA-17503: ksfdopn:2 Failed to open file +DATA/lunardb/spfilelunardb.ora
ORA-12547: TNS:lost contact
18:01:58 @>

Normally, both $GRID_HOME/bin/oracle and $ORACLE_HOME/bin/oracle should have permissions 6751, i.e. "-rwsr-s--x".
Compare with node 2 (the healthy node):

[grid@lunardb2 ~]$ cd $ORACLE_HOME
[grid@lunardb2 grid]$ cd bin
[grid@lunardb2 bin]$ ll oracle
-rwsr-s--x 1 grid oinstall 204113496 Jun  7  2013 oracle
[grid@lunardb2 bin]$ 

Now look at node 1 (the problem node):

[root@lunardb1 bin]# ll oracle
-rwxr-x--x 1 grid oinstall 204113496 Jun  7  2013 oracle
[root@lunardb1 bin]# 

Manually fix the permissions on $GRID_HOME/bin/oracle:

[root@lunardb1 bin]# chmod 6751 oracle
[root@lunardb1 bin]# ll oracle
-rwsr-s--x 1 grid oinstall 204113496 Jun  7  2013 oracle
[root@lunardb1 bin]# 

While we're at it, check the permissions on $ORACLE_HOME/bin/oracle too:

[root@lunardb1 bin]# su - oracle
[oracle@lunardb1 ~]$ cd $ORACLE_HOME
[oracle@lunardb1 db_1]$ cd bin
[oracle@lunardb1 bin]$ ll oracle
-rwxr-s--x 1 oracle asmadmin 221332085 Jun  7  2013 oracle
[oracle@lunardb1 bin]$ 
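
Note that node 1's DB-home oracle binary above shows -rwxr-s--x (setgid but no setuid bit) against the expected 6751. A quick root-run audit of both homes can catch this; a minimal sketch, assuming the GRID_HOME from this environment and a conventional DB home path (adjust to your layout):

#!/bin/bash
# Audit the special permission bits on both oracle binaries.
GRID_HOME=/u01/app/11.2.0/grid
ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1    # assumed DB home path

for bin in "$GRID_HOME/bin/oracle" "$ORACLE_HOME/bin/oracle"; do
    perms=$(stat -c '%a' "$bin")    # prints e.g. 6751 when setuid+setgid are set
    echo "$bin -> $perms"
    [ "$perms" != "6751" ] && echo "  WARNING: expected 6751 (-rwsr-s--x)"
done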

Now restart the database:

[oracle@lunardb1 ~]$ ss

SQL*Plus: Release 11.2.0.3.0 Production on Sat Oct 4 18:06:27 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

18:06:27 @>startup
ORACLE instance started.

Total System Global Area 1.6034E+11 bytes
Fixed Size                  2236968 bytes
Variable Size            3.0602E+10 bytes
Database Buffers         1.2939E+11 bytes
Redo Buffers              352468992 bytes
Database mounted.
Database opened.
18:07:19 @>exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, Data Mining
and Real Application Testing options
[oracle@lunardb1 ~]$ exit
logout
You have new mail in /var/spool/mail/root
[root@lunardb1 bin]# crsctl status res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ARCH.dg
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.DATA.dg
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.DATA1.dg
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.LISTENER.lsnr
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.LISTENER_DG.lsnr
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.OCR_VOTE.dg
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.REDODG.dg
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.asm
               ONLINE  ONLINE       lunardb1                   Started             
               ONLINE  ONLINE       lunardb2                   Started             
ora.gsd
               OFFLINE OFFLINE      lunardb1                                       
               OFFLINE OFFLINE      lunardb2                                       
ora.net1.network
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.net2.network
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.ons
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
ora.registry.acfs
               ONLINE  ONLINE       lunardb1                                       
               ONLINE  ONLINE       lunardb2                                       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       lunardb2                                       
ora.cvu
      1        ONLINE  ONLINE       lunardb2                                       
ora.oc4j
      1        ONLINE  ONLINE       lunardb2                                       
ora.lunardb.db
      1        ONLINE  ONLINE       lunardb1                   Open,Readonly       
      2        ONLINE  ONLINE       lunardb2                   Open,Readonly       
ora.lunardb1-dg-vip.vip
      1        ONLINE  ONLINE       lunardb1                                       
ora.lunardb1.vip
      1        ONLINE  ONLINE       lunardb1                                       
ora.lunardb2-dg-vip.vip
      1        ONLINE  ONLINE       lunardb2                                       
ora.lunardb2.vip
      1        ONLINE  ONLINE       lunardb2                                       
ora.scan1.vip
      1        ONLINE  ONLINE       lunardb2                                       
[root@lunardb1 bin]# 

At this point the database appears to start, and in many emergency scenarios this state is already enough to attempt an export or a backup of the database.
But a CRS stack and database in this state carry serious hidden risks: the instance may crash unexpectedly, or other inexplicable corruption may appear.
So once permissions have been damaged, either repair them with rootcrs.pl -init (though in this situation that repair is usually futile, as a later test will show),
or accept that Oracle does not support any manual permission fixes. On this point the official guidance is explicit:

The permissions can be reverted back to original values with rootcrs.pl or roothas.pl. There is an option -init that resets the permissions of all files and directories under the Oracle CRS/HA home.

For GRID:
rootcrs.pl -init

For Standalone Grid:
roothas.pl -init

If that does not work, then permissions can be altered manually with information found in the crsconfig_fileperms and crsconfig_dirs files.

Please note that changing the permissions manually is last resort and shouldn't be done unless re…
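
A minimal sketch of that supported reset, run as root on the affected node (GRID_HOME as in this environment; the crsconfig_fileperms and crsconfig_dirs files live under the same directory and drive what -init restores):

# Run as root.
GRID_HOME=/u01/app/11.2.0/grid
cd $GRID_HOME/crs/install
perl rootcrs.pl -init      # for Oracle Restart, use: perl roothas.pl -init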
Posted in RAC | Tagged | Leave a comment

Findings from upgrading to 11.2.0.4 - 2 - Other findings

Findings from upgrading to 11.2.0.4 - 1 - A rough walkthrough of catupgrd.sql
Findings from upgrading to 11.2.0.4 - 3 - The main contents of catalog.sql

1. If the connected user is not SYS, the statement raises ORA-01722: invalid number:

SQL> conn / as sysdba
Connected.
SQL> show user
USER is "SYS"
SQL> SELECT TO_NUMBER('MUST_BE_AS_SYSDBA') FROM DUAL
  2  WHERE USER != 'SYS';

no rows selected

SQL> conn lunar/lunar
Connected.
SQL> show user
USER is "LUNAR"
SQL> SELECT TO_NUMBER('MUST_BE_AS_SYSDBA') FROM DUAL
  2  WHERE USER != 'SYS';
SELECT TO_NUMBER('MUST_BE_AS_SYSDBA') FROM DUAL
                 *
ERROR at line 1:
ORA-01722: invalid number


SQL> 

By the same logic, to check whether the current user is LUNAR, you can use:

SQL> conn lunar/lunar
Connected.
SQL> show user
USER is "LUNAR"
SQL> SELECT TO_NUMBER('MUST_BE_AS_LUNAR') FROM DUAL
  2  WHERE USER != 'LUNAR';

no rows selected

SQL> 
SQL>

Likewise, to check that the current database version is 11.2.0.4:

SQL> SELECT TO_NUMBER('MUST_BE_11_2_0_4') FROM v$instance
  2  WHERE substr(version,1,8) != '11.2.0.4';

no rows selected

SQL>
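
This trick is also how the upgrade scripts abort early: combined with WHENEVER SQLERROR, the deliberate ORA-01722 stops the whole run. A minimal sketch wrapped in a shell heredoc (the script body is illustrative):

#!/bin/bash
# Abort unless connected as SYS: TO_NUMBER() on a non-numeric literal raises
# ORA-01722 only when the WHERE clause matches, and WHENEVER SQLERROR makes
# SQL*Plus exit at that point instead of running the rest of the script.
sqlplus -S / as sysdba <<'EOF'
WHENEVER SQLERROR EXIT SQL.SQLCODE
SELECT TO_NUMBER('MUST_BE_AS_SYSDBA') FROM DUAL WHERE USER != 'SYS';
-- ...the maintenance steps below only ever run as SYS...
EOF
echo "sqlplus exit code: $?"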
2. Use SQL*Plus error logging (an error-logging feature introduced in 11g) to record script errors:
CREATE TABLE sys.registry$error(username   VARCHAR(256),
                                timestamp  TIMESTAMP,
                                script     VARCHAR(1024),
                                identifier VARCHAR(256),
                                message    CLOB,
                                statement  CLOB);
                                         
DELETE FROM sys.registry$error;

set errorlogging on table sys.registry$error identifier 'RDBMS';

commit;

Then view the SQL*Plus error log with the following commands:

col timestamp format a15
col username format a15
col script format a10
col identifier format a15
col statement format a20
col message format a20
 select * from REGISTRY$ERROR;
 
SQL> CREATE TABLE LUNAR.registry$error(username   VARCHAR(256),
  2                                  timestamp  TIMESTAMP,
  3                                  script     VARCHAR(1024),
  4                                  identifier VARCHAR(256),
  5                                  message    CLOB,
  6                                  statement  CLOB);

Table created.

SQL> DELETE FROM LUNAR.registry$error;

0 rows deleted.

SQL> set errorlogging on table LUNAR.registry$error identifier 'RDBMS';
SQL> COMMIT;

Commit complete.

SQL> conn lunar/lunar
Connected.
SQL> select * from REGISTRY$ERROR;

no rows selected

SQL> insert into REGISTRY$ERROR as select * from dba_users;
insert into REGISTRY$ERROR as select * from dba_users
                           *
ERROR at line 1:
ORA-00926: missing VALUES keyword


SQL> select count(*) from REGISTRY$ERROR;

  COUNT(*)
----------
         0

SQL>

Notice that the failing statement was not recorded this time. Looking more closely, the reason is that the set errorlogging on table command must be issued as the current user, the one that will hit the errors, for example:

SQL> set errorlogging on table LUNAR.registry$error ;
SQL> insert into REGISTRY$ERROR as select * from dba_users;
insert into REGISTRY$ERROR as select * from dba_users
                           *
ERROR at line 1:
ORA-00926: missing VALUES keyword


SQL> set linesize 167
SQL> set pages 999
SQL> col timestamp format a15
SQL> col username format a15
SQL> col script format a10
SQL> col identifier format a15
SQL> col statement format a20
SQL> col message format a20
SQL>  select * from REGISTRY$ERROR;

USERNAME        TIMESTAMP       SCRIPT     IDENTIFIER      MESSAGE              STATEMENT
--------------- --------------- ---------- --------------- -------------------- --------------------
LUNAR           03-AUG-14 05.45                            ORA-00926: missing V insert into REGISTRY
                .13.000000 PM                              ALUES keyword        $ERROR as select * f
                                                                                rom dba_users

LUNAR           03-AUG-14 05.50                            SP2-0042: unknown co g
                .25.000000 PM                              mmand "g" - rest of
                                                           line ignored.


SQL>

See: with error logging in place, the failure details are captured at a glance.
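
If you don't want to manage your own table, a bare SET ERRORLOGGING ON writes to a table named SPERRORLOG in the current schema, which SQL*Plus creates on first use. A minimal sketch:

sqlplus -S lunar/lunar <<'EOF'
-- No TABLE clause: errors go to SPERRORLOG in the current schema.
SET ERRORLOGGING ON
select * from no_such_table;
SET ERRORLOGGING OFF
col username format a15
col message format a40
select username, message from sperrorlog;
EOF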
3. Auto-bulkification and event 10933

Bug 6275368: PL/SQL FOR UPDATE cursor may be positioned on wrong row
  Component: RDBMS
  Fixed Ver(s): 10.2.0.5, 11.1.0.7, 11.2
  Symptom(s):
    - If a FOR LOOP iterates over a cursor declared in a different package, auto-bulkification occurs. This
      may be inappropriate if the cursor's SQL statement (which would appear in the package body) contains
      a FOR UPDATE clause, as the "CURRENT OF" may then be incorrect.
  Available Workaround(s):
    Manually turn off auto-bulkification by setting event 10933, level 16384,
    and recompiling affected library units.
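
A hedged sketch of that workaround as it would run in practice (lunar_pkg is a placeholder for the affected package; the event applies to units compiled while it is set):

sqlplus -S / as sysdba <<'EOF'
-- Turn off auto-bulkification for code compiled in this session,
-- then recompile the affected library unit.
alter session set events '10933 trace name context forever, level 16384';
alter package lunar_pkg compile body;
EOF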

4. catupgrd.sql calls catupstr.sql, which during its run calls, in order:
catupses.sql
i0902000.sql - which reorganizes props$, dependency$ and mon_mods$, and then calls i1001000.sql; i1001000.sql in turn calls i1002000.sql.
There is an interesting operation in i1002000.sql:

Rem clear 0x00200000 (read-only table flag) in trigflag during upgrade
update tab$ set trigflag = trigflag - 2097152
where bitand(trigflag, 2097152) <> 0;
commit;
-- 0x00200000 is 2097152 in decimal
-- BITAND, as the name suggests, is a bitwise AND:

SQL> select bitand(1,0) from dual;

BITAND(1,0)
-----------
          0

1 row selected.

SQL> select bitand(0,1) from dual ;

BITAND(0,1)
-----------
          0

1 row selected.

SQL> select bitand(1,1) from dual ;

BITAND(1,1)
-----------
          1

1 row selected.

SQL> 
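
The update above is the standard clear-a-bit idiom: subtract the bit value only from rows where BITAND shows it is set. Here it is against a scratch table rather than tab$ (never update the dictionary directly outside an Oracle-supplied script); names and values are illustrative:

sqlplus -S lunar/lunar <<'EOF'
create table flag_demo (id number, flags number);
insert into flag_demo values (1, 201326592 + 2097152);   -- 0x00200000 set
insert into flag_demo values (2, 201326592);             -- 0x00200000 clear

-- Clear 0x00200000 only where it is set, exactly as i1002000.sql does:
update flag_demo
   set flags = flags - 2097152
 where bitand(flags, 2097152) <> 0;
commit;

select id, flags, bitand(flags, 2097152) bit_state from flag_demo;
drop table flag_demo purge;
EOF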

SQL> select bitand(trigflag, 2097152) ,trigflag,count(*) from tab$ group by bitand(trigflag, 2097152) ,trigflag;

BITAND(TRIGFLAG,2097152)   TRIGFLAG   COUNT(*)
------------------------ ---------- ----------
                       0  201326592         18
                       0          0       1957

2 rows selected.

SQL> 
SQL> alter session set nls_date_format='yyyy-mm-dd hh24:mi:ss';

Session altered.

select object_id,SUBOBJECT_NAME,object_name,CREATED,LAST_DDL_TIME,STATUS from dba_objects
where object_id in(select obj# from tab$ where TRIGFLAG=201326592) order by object_id ;

SQL> select object_id,SUBOBJECT_NAME,object_name,CREATED,LAST_DDL_TIME,STATUS from …
Posted in Installation and Deinstall | Tagged | Leave a comment

Adding two 1.5 TB LUNs to a diskgroup already holding 17 TB: 3 hours in total

The diskgroup is built on 28 × 800 GB SAS SSDs.

Last night I added two 1.5 TB LUNs to a diskgroup that already held 17 TB. From start to the completion of the ASM rebalance, it took 3 hours in total (on a quarter-rack Exadata I once tested, loading pure LOB data also ran at roughly 1 TB per hour, which felt impressive back then). Very fast; worth recording:

[grid@lunardb1 ~]$ asmcmd
ASMCMD> lsdg
State    Type    Rebal  Sector  Block       AU  Total_MB  Free_MB  Req_mir_free_MB  Usable_file_MB  Offline_disks  Voting_files  Name
MOUNTED  EXTERN  N         512   4096  1048576   3072390  2803853                0         2803853              0             N  ARCHDG/
MOUNTED  EXTERN  Y         512   4096  1048576  23040000  4766148                0         4766148              0             N  DATA/
MOUNTED  EXTERN  N         512   4096  1048576      3072     2676                0            2676              0             Y  OCR_VOTE/
MOUNTED  EXTERN  N         512   4096  1048576    614400   614183                0          614183              0             N  REDODG/
ASMCMD> exit

Thu Sep 25 18:42:33 2014
SQL> ALTER DISKGROUP DATA ADD  DISK '/dev/lunardatalun14' SIZE 1536000M ,
'/dev/lunardatalun15' SIZE 1536000M /* ASMCA */ 
NOTE: GroupBlock outside rolling migration privileged region
NOTE: Assigning number (2,13) to disk (/dev/lunardatalun14)
NOTE: Assigning number (2,14) to disk (/dev/lunardatalun15)
NOTE: requesting all-instance membership refresh for group=2
NOTE: initializing header on grp 2 disk DATA_0013
NOTE: initializing header on grp 2 disk DATA_0014
NOTE: requesting all-instance disk validation for group=2
Thu Sep 25 18:42:37 2014
NOTE: skipping rediscovery for group 2/0x5ffb2459 (DATA) on local instance.
NOTE: requesting all-instance disk validation for group=2
NOTE: skipping rediscovery for group 2/0x5ffb2459 (DATA) on local instance.
NOTE: initiating PST update: grp = 2
Thu Sep 25 18:42:41 2014
GMON updating group 2 at 17 for pid 27, osid 97661
NOTE: PST update grp = 2 completed successfully 
NOTE: membership refresh pending for group 2/0x5ffb2459 (DATA)
GMON querying group 2 at 18 for pid 18, osid 85811
NOTE: cache opening disk 13 of grp 2: DATA_0013 path:/dev/lunardatalun14
NOTE: cache opening disk 14 of grp 2: DATA_0014 path:/dev/lunardatalun15
NOTE: Attempting voting file refresh on diskgroup DATA
GMON querying group 2 at 19 for pid 18, osid 85811
SUCCESS: refreshed membership for 2/0x5ffb2459 (DATA)
Thu Sep 25 18:42:45 2014
SUCCESS: ALTER DISKGROUP DATA ADD  DISK '/dev/lunardatalun14' SIZE 1536000M ,
'/dev/lunardatalun15' SIZE 1536000M /* ASMCA */
NOTE: starting rebalance of group 2/0x5ffb2459 (DATA) at power 1
Starting background process ARB0
Thu Sep 25 18:42:45 2014
ARB0 started with pid=40, OS id=97872 
NOTE: assigning ARB0 to group 2/0x5ffb2459 (DATA) with 1 parallel I/O
Thu Sep 25 18:42:48 2014
NOTE: Attempting voting file refresh on diskgroup DATA
Thu Sep 25 19:15:00 2014
SQL> alter diskgroup data rebalance power 11 
NOTE: GroupBlock outside rolling migration privileged region
Thu Sep 25 19:15:00 2014
NOTE: stopping process ARB0
NOTE: rebalance interrupted for group 2/0x5ffb2459 (DATA)
NOTE: requesting all-instance membership refresh for group=2
NOTE: membership refresh pending for group 2/0x5ffb2459 (DATA)
Thu Sep 25 19:15:07 2014
GMON querying group 2 at 20 for pid 18, osid 85811
SUCCESS: refreshed membership for 2/0x5ffb2459 (DATA)
SUCCESS: alter diskgroup data rebalance power 11
NOTE: starting rebalance of group 2/0x5ffb2459 (DATA) at power 11
Starting background process ARB0
Thu Sep 25 19:15:07 2014
ARB0 started with pid=28, OS id=111465 
NOTE: assigning ARB0 to group 2/0x5ffb2459 (DATA) with 11 parallel I/Os
Thu Sep 25 19:15:15 2014
NOTE: Attempting voting file refresh on diskgroup DATA
Thu Sep 25 20:30:47 2014
NOTE: GroupBlock outside rolling migration privileged region
NOTE: requesting all-instance membership refresh for group=2
Thu Sep 25 20:30:50 2014
NOTE: membership refresh pending for group 2/0x5ffb2459 (DATA)
Thu Sep 25 20:30:53 2014
GMON querying group 2 at 21 for pid 18, osid 85811
SUCCESS: refreshed membership for 2/0x5ffb2459 (DATA)
NOTE: Attempting voting file refresh on diskgroup DATA
Thu Sep 25 21:45:28 2014
NOTE: stopping process ARB0
SUCCESS: rebalance completed for group 2/0x5ffb2459 (DATA)
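
To watch a rebalance like this while it runs, query v$asm_operation in the ASM instance; EST_MINUTES gives a rough ETA, and the power can be raised mid-flight exactly as done above. A minimal sketch, run as the grid user:

export ORACLE_SID=+ASM1
sqlplus -S / as sysasm <<'EOF'
-- One row per active operation; the row disappears when the rebalance is done.
select group_number, operation, state, power, sofar, est_work, est_minutes
  from v$asm_operation;

-- Raising the power restarts ARB0 at the new setting, as the alert log shows:
-- alter diskgroup data rebalance power 11;
EOF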

Posted in ASM | Tagged | 1 comment

How does AIO differ between a database on ASM and a database on a filesystem?

Yesterday an important customer application was cut over to a new environment. Observing it today, I found some abnormal wait events:


[screenshot: the abnormal wait events observed after the cutover]


The OS CPU load shows a recurring spike, and the ASH data for each spike matches the AWR report exactly.
That leaves two main suspects:
1. The application SQL and object attributes (table and index statistics, degree of parallelism, and so on)
2. The system's AIO settings


For the first item, the relevant SQL and supporting information have already been handed to the developers.
For the second: the old system was 11.2 RAC on ASM, while the new one is a single instance on a filesystem.


So I compared AIO behavior in the two environments; the conclusions:
1. On Linux, the AIO settings of an ASM database and a filesystem database differ as follows:
(1) ASM I/O is unaffected by the FILESYSTEMIO_OPTIONS parameter (ASM bypasses the filesystem buffer cache); it depends only on DISK_ASYNCH_IO.
(2) Filesystem I/O depends on both FILESYSTEMIO_OPTIONS and DISK_ASYNCH_IO.

2. FILESYSTEMIO_OPTIONS=NONE: Bug 6733627 - Unaccounted Wait Time on "Direct Path" operations with FILESYSTEM_IO_OPTIONS=NONE (Doc ID 6733627.8)

3. About 'db file async I/O submit':
'db file async I/O submit' when FILESYSTEMIO_OPTIONS=NONE (Doc ID 1274737.1) - describes in detail the relationship between 'db file async I/O submit' and FILESYSTEMIO_OPTIONS=NONE.
When a filesystem database is set to FILESYSTEMIO_OPTIONS=NONE, the background wait event 'db file async I/O submit' shows up where 'db file parallel write' normally would.
Setting FILESYSTEMIO_OPTIONS=SETALL enables AIO; AWR then shows 'db file parallel write' again instead of 'db file async I/O submit'.

4. FILESYSTEMIO_OPTIONS=DIRECTIO: Wrong FILESYSTEMIO_OPTIONS Settings Can Cause a Corrupted Block to be Returned at the First Read (Doc ID 1918825.1)

5. In general, on Linux the recommended setting is FILESYSTEMIO_OPTIONS=SETALL.
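
A sketch of applying that recommendation; FILESYSTEMIO_OPTIONS is a static parameter, so it can only go into the spfile and takes effect after an instance restart:

sqlplus -S / as sysdba <<'EOF'
-- Static parameter: scope=spfile only; restart the instance(s) afterwards.
alter system set filesystemio_options=setall scope=spfile sid='*';
EOF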


The detailed test procedure follows.

First, the descriptions from the official documentation:

FILESYSTEMIO_OPTIONS
--------------------
Parameter type    String
Syntax            FILESYSTEMIO_OPTIONS = { none | setall | directIO | asynch }
Default value     Varies by database version and operating system.
Modifiable        No
Basic             No

DISK_ASYNCH_IO
--------------
Parameter type    Boolean
Default value     true
Range of values   true | false
Modifiable        No
Basic             No

This is the filesystem database, with filesystemio_options=none and disk_asynch_io=true (the default). strace shows AIO is not in use:

15:21:06 SYS@ Lunar> show parameter filesystemio_options

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
filesystemio_options                 string      none
15:21:16 SYS@ Lunar> show parameter DISK_ASYNCH_IO

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
disk_asynch_io                       boolean     TRUE
15:21:30 SYS@ Lunar> 

At the OS level, AIO is indeed not in use:

[oracle@Lunar tmp]$ cat /proc/slabinfo | grep kio
kioctx               140    384    320   12    1 : tunables   54   27    8 : slabdata     31     32     22
kiocb                  0      0    256   15    1 : tunables  120   60    8 : slabdata      0      0      0
[oracle@Lunar tmp]$ 

Tracing a DBWR background process with strace confirms it: the filesystem database is not using AIO:

[oracle@Lunar tmp]$ ps -ef|grep dbw
oracle   16540 30146  0 14:50 pts/3    00:00:00 grep dbw
oracle   20618     1  0 Sep24 ?        00:02:39 ora_dbw0_Lunar
oracle   20620     1  0 Sep24 ?        00:02:55 ora_dbw1_Lunar
oracle   20622     1  0 Sep24 ?        00:02:47 ora_dbw2_Lunar
oracle   20624     1  0 Sep24 ?        00:02:29 ora_dbw3_Lunar
oracle   20626     1  0 Sep24 ?        00:02:48 ora_dbw4_Lunar
oracle   20628     1  0 Sep24 ?        00:02:41 ora_dbw5_Lunar
oracle   20630     1  0 Sep24 ?        00:02:44 ora_dbw6_Lunar
oracle   20632     1  0 Sep24 ?        00:02:55 ora_dbw7_Lunar
oracle   20634     1  0 Sep24 ?        00:02:06 ora_dbw8_Lunar
oracle   20636     1  0 Sep24 ?        00:01:46 ora_dbw9_Lunar
oracle   20638     1  0 Sep24 ?        00:01:56 ora_dbwa_Lunar
oracle   20640     1  0 Sep24 ?        00:01:58 ora_dbwb_Lunar
oracle   20642     1  0 Sep24 ?        00:01:52 ora_dbwc_Lunar
oracle   20644     1  0 Sep24 ?        00:01:57 ora_dbwd_Lunar
oracle   20646     1  0 Sep24 ?        00:01:50 ora_dbwe_Lunar
oracle   20648     1  0 Sep24 ?        00:01:50 ora_dbwf_Lunar
[oracle@Lunar tmp]$ 

[oracle@Lunar ~]$ tail -f /tmp/20620.log
20620      0.000030 pwrite(264, "\6\242\0\0\330a\350\20\241\363;\201\241\5\1\6\331\242\0\0\2\0\f\0M.\1\0s\363;\201"..., 8192, 21680029696) = 8192
20620      0.000088 times({tms_utime=7086, tms_stime=10442, tms_cutime=0, tms_cstime=0}) = 454220443
20620      0.000032 pwrite(281, "\6\242\0\0\311O+\25\2475=\201\241\5\1\0061\223\0\0\2\0\34\0S.\1\0\326-=\201"..., 8192, 23252770816) = 8192
20620      0.000090 times({tms_utime=7086, tms_stime=10442, tms_cutime=0, tms_cstime=0}) = 454220443
20620      0.000031 pwrite(282, "\6\242\0\0\206\201&\27\270\364;\201\241\5\1\6{\300\0\0\2\0\6\0M.\1\0\243\364;\201"..., 8192, 20672724992) = 8192
20620      0.000115 times({tms_utime=7086, tms_stime=10442, tms_cutime=0, tms_cstime=0}) = 454220443
20620      0.000038 pwrite(286, "\6\242\0\0v#\5\24\363J<\201\241\5\1\6J\315\0\0\2\0\26\0M.\1\0\tP\306\200"..., 24576, 2758721536) = 24576
20620      0.000116 times({tms_utime=7086, tms_stime=10442, tms_cutime=0, tms_cstime=0}) = 454220443
20620      0.000026 times({tms_utime=7086, tms_stime=10442, tms_cutime=0, tms_cstime=0}) = 454220443
20620      0.000022 semtimedop(557058, 0x7fffdef50660, 1, {2, 990000000}

There is no io_submit call in the strace output, because filesystemio_options = none.
With filesystemio_options = setall, io_submit calls would appear.
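
That check is easy to script; a minimal sketch that samples one DBWR for ten seconds and counts io_submit calls (process names follow this environment):

#!/bin/bash
# Count io_submit calls from the first DBWR over a 10-second window.
# Expect 0 with filesystemio_options=none, and a growing count with setall.
PID=$(pgrep -f ora_dbw0 | head -1)
timeout 10 strace -fr -o /tmp/dbw0.strace -p "$PID"
grep -c io_submit /tmp/dbw0.strace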

Now the same parameters on the ASM database:

15:24:25 SYS@ Lunardb1> show parameter FILESYSTEMIO_OPTIONS

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
filesystemio_options                 string      none
15:24:29 SYS@ Lunardb1> show parameter DISK_ASYNCH_IO

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
disk_asynch_io                       boolean     TRUE
15:24:39 SYS@ Lunardb1> 

At the OS level, even though FILESYSTEMIO_OPTIONS=NONE, the ASM database still uses AIO because DISK_ASYNCH_IO=TRUE (the default):

[root@Lunardb1 ~]# cat /proc/slabinfo | grep kio
kioctx               238    340    384   10    1 : tunables   54   27    8 : slabdata     34     34      0
kiocb               3656   4845    256   15    1 : tunables  120   60    8 : slabdata    323    323    180
[root@Lunardb1 ~]# 

[oracle@Lunardb1 ~]$ ps -ef|grep dbw
oracle    82860  82820  0 15:25 pts/1    00:00:00 grep dbw
grid      85795      1  0 Sep25 ?        00:00:36 asm_dbw0_+ASM1
grid      86406      1  0 Sep25 ?        00:01:32 /u01/app/11.2.0/grid/jdk/jre//bin/java -server -Xcheck:jni -Xms128M -Xmx384M -Djava.awt.headless=true -Ddisable.checkForUpdate=true -Dstdstream.filesize=100 -Dstdstream.filenumber=10 -DTRACING.ENABLED=false -Doracle.wlm.dbwlmlogger.logging.level=INFO -Dport.rmi=23792 -jar /u01/app/11.2.0/grid/oc4j/j2ee/home/oc4j.jar -config /u01/app/11.2.0/grid/oc4j/j2ee/home/OC4J_DBWLM_config/server.xml -out /u01/app/11.2.0/grid/oc4j/j2ee/home/log/oc4j.out -err /u01/app/11.2.0/grid/oc4j/j2ee/home/log/oc4j.err
oracle    87014      1  0 Sep25 ?        00:10:02 ora_dbw0_Lunardb1
oracle    87018      1  0 Sep25 ?        00:11:22 ora_dbw1_Lunardb1
oracle    87022      1  0 Sep25 ?        00:09:28 ora_dbw2_Lunardb1
oracle    87026      1  0 Sep25 ?        00:08:50 ora_dbw3_Lunardb1
oracle    87030      1  0 Sep25 ?        00:09:43 ora_dbw4_Lunardb1
oracle    87041      1  0 Sep25 ?        00:09:47 ora_dbw5_Lunardb1
oracle    87048      1  0 Sep25 ?        00:08:52 ora_dbw6_Lunardb1
oracle    87052      1  0 Sep25 ?        00:08:59 ora_dbw7_Lunardb1
oracle    87056      1  0 Sep25 ?        00:08:26 ora_dbw8_Lunardb1
oracle    87060      1  0 Sep25 ?        00:12:26 ora_dbw9_Lunardb1
oracle    87064      1  0 Sep25 ?        00:09:35 ora_dbwa_Lunardb1
oracle    87068      1  0 Sep25 ?        00:09:25 ora_dbwb_Lunardb1
oracle    87072      1  0 Sep25 ?        00:08:35 ora_dbwc_Lunardb1
oracle    87076      1  0 Sep25 ?        00:09:22 ora_dbwd_Lunardb1
oracle    87080      1  0 Sep25 ?        00:08:16 ora_dbwe_Lunardb1
[oracle@Lunardb1 ~]$

As shown below for the ASM database's dbw0 process: even with FILESYSTEMIO_OPTIONS set to NONE, DBWR still uses AIO as long as disk_asynch_io is true (the default):

[root@Lunardb1 ~]# cat /tmp/87014.log|grep io_submit|wc -l
82
[root@Lunardb1 ~]# 

[oracle@Lunardb1 ~]$ tail -f  /tmp/87014.log
87014      0.000056 times({tms_utime=45414, tms_stime=15117, tms_cutime=0, tms_cstime=0}) = 439051459
87014      0.000089 times({tms_utime=45414, tms_stime=15117, tms_cutime=0, tms_cstime=0}) = 439051459
87014      0.000054 times({tms_utime=45414, tms_stime=15117, tms_cutime=0, tms_cstime=0}) = 439051459
87014      0.001498 io_submit(140281973911552, 96, {{0x7f95f0a56d18, 0, 1, 0, 260}, {0x7f95f0e5cf50, 0, 1, 0, 262}, {0x7f95f0d97080, 0, 1, 0, 264}, {0x7f95f0a5a7b0, 0, 1, 0, 258}, {0x7f95f09703a0, 0, 1, 0, 256}, {0x7f95f0a5f2b0, 0, 1, 0, 266}, {0x7f95f0a42e58, 0, 1, 0, 259}, {0x7f95f0982d48, 0, 1, 0, 260}, {0x7f95f0e6a240, 0, 1, 0, 266}, {0x7f95f09734d8, 0, 1, 0, 267}, {0x7f95f071b2a8, 0, 1, 0, 269}, {0x7f95f0a5d438, 0, 1, 0, 259}, {0x7f95f0719430, 0, 1, 0, 256}, {0x7f95f0747420, 0, 1, 0, 269}, {0x7f95f0effa40, 0, 1, 0, 266}, {0x7f95f0d86550, 0, 1, 0, 256}, {0x7f95f071cec8, 0, 1, 0, 256}, {0x7f95f0a43560, 0, 1, 0, 256}, {0x7f95f0e780e8, 0, 1, 0, 265}, {0x7f95f074b5c0, 0, 1, 0, 262}, {0x7f95f095a668, 0, 1, 0, 260}, {0x7f95f0957e90, 0, 1, 0, 266}, {0x7f95f094adf8, 0, 1, 0, 258}, {0x7f95f0a8a070, 0, 1, 0, 258}, {0x7f95f097bf20, 0, 1, 0, 258}, {0x7f95f0e5c398, 0, 1, 0, 265}, {0x7f95f0aa2c88, 0, 1, 0, 263}, {0x7f95f0743730, 0, 1, 0, 269}, {0x7f95f0740850, 0, 1, 0, 260}, {0x7f95f0939968, 0, 1, 0, 264}, {0x7f95f0e5c5f0, 0, 1, 0, 264}, {0x7f95f0a4ec30, 0, 1, 0, 264}, {0x7f95f0d7ae80, 0, 1, 0, 256}, {0x7f95f0959ab0, 0, 1, 0, 256}, {0x7f95f0f01fc0, 0, 1, 0, 260}, {0x7f95f095c4e0, 0, 1, 0, 271}, {0x7f95f0aa3cf0, 0, 1, 0, 264}, {0x7f95f0d9c288, 0, 1, 0, 264}, {0x7f95f07494f0, 0, 1, 0, 258}, {0x7f95f0edef98, 0, 1, 0, 268}, {0x7f95f0a2f8f8, 0, 1, 0, 256}, {0x7f95f0709e18, 0, 1, 0, 269}, {0x7f95f0d7caa0, 0, 1, 0, 261}, {0x7f95f0e830b0, 0, 1, 0, 256}, {0x7f95f0a967a8, 0, 1, 0, 270}, {0x7f95f0ede890, 0, 1, 0, 262}, {0x7f95f0e7ab18, 0, 1, 0, 259}, {0x7f95f0a202e0, 0, 1, 0, 266}, {0x7f95f09778d0, 0, 1, 0, 266}, {0x7f95f0d781f8, 0, 1, 0, 266}, {0x7f95f074c3d0, 0, 1, 0, 269}, {0x7f95f0740f58, 0, 1, 0, 269}, {0x7f95f0dc0570, 0, 1, 0, 265}, {0x7f95f097a558, 0, 1, 0, 265}, {0x7f95f07204b0, 0, 1, 0, 265}, {0x7f95f0973be0, 0, 1, 0, 270}, {0x7f95f0a53988, 0, 1, 0, 270}, {0x7f95f0e934d8, 0, 1, 0, 270}, {0x7f95f0745350, 0, 1, 0, 266}, {0x7f95f0a56ac0, 0, 1, 0, 263}, {0x7f95f0964118, 0, 1, 0, 263}, {0x7f95f0727c38, 0, 1, 0, 263}, {0x7f95f096ace8, 0, 1, 0, 260}, {0x7f95f0d7f980, 0, 1, 0, 260}, {0x7f95f0f05f08, 0, 1, 0, 258}, {0x7f95f0d88f80, 0, 1, 0, 261}, {0x7f95f0a262f8, 0, 1, 0, 265}, {0x7f95f0a209e8, 0, 1, 0, 263}, {0x7f95f0ea2898, 0, 1, 0, 266}, {0x7f95f0720000, 0, 1, 0, 264}, {0x7f95f0d9c4e0, 0, 1, 0, 264}, {0x7f95f0e84cd0, 0, 1, 0, 262}, {0x7f95f0f10a20, 0, 1, 0, 268}, {0x7f95f095e808, 0, 1, 0, 260}, {0x7f95f0d809e8, 0, 1, 0, 258}, {0x7f95f0e69430, 0, 1, 0, 261}, {0x7f95f0efaf40, 0, 1, 0, 261}, {0x7f95f0941f00, 0, 1, 0, 261}, {0x7f95f0ea0c78, 0, 1, 0, 267}, {0x7f95f0a983c8, 0, 1, 0, 269}, {0x7f95f0a2be60, 0, 1, 0, 258}, {0x7f95f0a615d8, 0, 1, 0, 259}, {0x7f95f0ef68f0, 0, 1, 0, 259}, {0x7f95f0742470, 0, 1, 0, 268}, {0x7f95f0e64228, 0, 1, 0, 260}, {0x7f95f0dc07c8, 0, 1, 0, 260}, {0x7f95f0a48e70, 0, 1, 0, 260}, {0x7f95f0daace8, 0, 1, 0, 259}, {0x7f95f09498e0, 0, 1, 0, 259}, {0x7f95f0715740, 0, 1, 0, 267}, {0x7f95f0f071c8, 0, 1, 0, 271}, {0x7f95f0aac030, 0, 1, 0, 265}, {0x7f95f0e62ab8, 0, 1, 0, 261}, {0x7f95f093ba38, 0, 1, 0, 260}, {0x7f95f0723cf0, 0, 1, 0, 260}, {0x7f95f096de20, 0, 1, 0, 258}}) = 96
87014      0.007424 io_getevents(140281973911552, 7, 128, {{0x7f95f0a56d18, 0x7f95f0a56d18, 8192, 0}, {0x7f95f0e5cf50, 0x7f95f0e5cf50, 8192, 0}, {0x7f95f0d97080, 0x7f95f0d97080, 8192, 0}, {0x7f95f0a5a7b0, 0x7f95f0a5a7b0, 8192, 0}, {0x7f95f09703a0, 0x7f95f09703a0, 8192, 0}, {0x7f95f0a5f2b0, 0x7f95f0a5f2b0, 8192, 0}, {0x7f95f0a42e58, 0x7f95f0a42e58, 8192, 0}, {0x7f95f0982d48, 0x7f95f0982d48, 8192, 0}, {0x7f95f0e6a240, 0x7f95f0e6a240, 8192, 0}, {0x7f95f09734d8, 0x7f95f09734d8, 8192, 0}, {0x7f95f071b2a8, 0x7f95f071b2a8, 8192, 0}, {0x7f95f0a5d438, 0x7f95f0a5d438, 8192, 0}, {0x7f95f0719430, 0x7f95f0719430, 8192, 0}, {0x7f95f0747420, 0x7f95f0747420, 8192, 0}, {0x7f95f0effa40, 0x7f95f0effa40, 8192, 0}, {0x7f95f0d86550, 0x7f95f0d86550, 8192, 0}, {0x7f95f071cec8, 0x7f95f071cec8, 8192, 0}, {0x7f95f0a43560, 0x7f95f0a43560, 8192, 0}, {0x7f95f0e780e8, 0x7f95f0e780e8, 8192, 0}, {0x7f95f074b5c0, 0x7f95f074b5c0, 8192, 0}, {0x7f95f095a668, 0x7f95f095a668, 8192, 0}, {0x7f95f0957e90, 0x7f95f0957e90, 8192, 0}, {0x7f95f094adf8, 0x7f95f094adf8, 8192, 0}, {0x7f95f0a8a070, 0x7f95f0a8a070, 8192, 0}, {0x7f95f097bf20, 0x7f95f097bf20, 8192, 0}, {0x7f95f0e5c398, 0x7f95f0e5c398, 8192, 0}, {0x7f95f0aa2c88, 0x7f95f0aa2c88, 8192, 0}, {0x7f95f0743730, 0x7f95f0743730, 8192, 0}, {0x7f95f0740850, 0x7f95f0740850, 8192, 0}, {0x7f95f0939968, 0x7f95f0939968, 8192, 0}, {0x7f95f0e5c5f0, 0x7f95f0e5c5f0, 8192, 0}, {0x7f95f0a4ec30, 0x7f95f0a4ec30, 8192, 0}, {0x7f95f0d7ae80, 0x7f95f0d7ae80, 8192, 0}, {0x7f95f0959ab0, 0x7f95f0959ab0, 8192, 0}, {0x7f95f0f01fc0, 0x7f95f0f01fc0, 8192, 0}, {0x7f95f095c4e0, 0x7f95f095c4e0, 8192, 0}, {0x7f95f0aa3cf0, 0x7f95f0aa3cf0, 8192, 0}, {0x7f95f0d9c288, 0x7f95f0d9c288, 8192, 0}, {0x7f95f07494f0, 0x7f95f07494f0, 8192, 0}, {0x7f95f0a2f8f8, 0x7f95f0a2f8f8, 8192, 0}, {0x7f95f0edef98, 0x7f95f0edef98, 8192, 0}, {0x7f95f0709e18, 0x7f95f0709e18, 8192, 0}, {0x7f95f0d7caa0, 0x7f95f0d7caa0, 8192, 0}, {0x7f95f0e830b0, 0x7f95f0e830b0, 8192, 0}, {0x7f95f0a967a8, 0x7f95f0a967a8, 8192, 0}, {0x7f95f0ede890, 0x7f95f0ede890, 8192, 0}, {0x7f95f0e7ab18, 0x7f95f0e7ab18, 8192, 0}, {0x7f95f0d781f8, 0x7f95f0d781f8, 8192, 0}, {0x7f95f09778d0, 0x7f95f09778d0, 32768, 0}, {0x7f95f0a202e0, 0x7f95f0a202e0, 65536, 0}, {0x7f95f074c3d0, 0x7f95f074c3d0, 8192, 0}, {0x7f95f0740f58, 0x7f95f0740f58, 8192, 0}, {0x7f95f0dc0570, 0x7f95f0dc0570, 16384, 0}, {0x7f95f097a558, 0x7f95f097a558, 24576, 0}, {0x7f95f07204b0, 0x7f95f07204b0, 8192, 0}, {0x7f95f0973be0, 0x7f95f0973be0, 8192, 0}, {0x7f95f0a53988, 0x7f95f0a53988, 8192, 0}, {0x7f95f0e934d8, 0x7f95f0e934d8, 8192, 0}, {0x7f95f0745350, 0x7f95f0745350, 8192, 0}, {0x7f95f0a56ac0, 0x7f95f0a56ac0, 8192, 0}, {0x7f95f0964118, 0x7f95f0964118, 8192, 0}, {0x7f95f0727c38, 0x7f95f0727c38, 8192, 0}, {0x7f95f096ace8, 0x7f95f096ace8, 8192, 0}, {0x7f95f0d7f980, 0x7f95f0d7f980, 8192, 0}, {0x7f95f0f05f08, 0x7f95f0f05f08, 8192, 0}, {0x7f95f0d88f80, 0x7f95f0d88f80, 8192, 0}, {0x7f95f0a262f8, 0x7f95f0a262f8, 8192, 0}, {0x7f95f0a209e8, 0x7f95f0a209e8, 8192, 0}, {0x7f95f0ea2898, 0x7f95f0ea2898, 8192, 0}, {0x7f95f0720000, 0x7f95f0720000, 8192, 0}, {0x7f95f0d9c4e0, 0x7f95f0d9c4e0, 8192, 0}, {0x7f95f0e84cd0, 0x7f95f0e84cd0, 8192, 0}, {0x7f95f0f10a20, 0x7f95f0f10a20, 8192, 0}, {0x7f95f095e808, 0x7f95f095e808, 8192, 0}, {0x7f95f0d809e8, 0x7f95f0d809e8, 8192, 0}, {0x7f95f0e69430, 0x7f95f0e69430, 8192, 0}, {0x7f95f0efaf40, 0x7f95f0efaf40, 8192, 0}, {0x7f95f0ea0c78, 0x7f95f0ea0c78, 8192, 0}, {0x7f95f0941f00, 0x7f95f0941f00, 8192, 0}, {0x7f95f0a983c8, 0x7f95f0a983c8, 8192, 0}, {0x7f95f0a2be60, 0x7f95f0a2be60, 8192, 0}, 
{0x7f95f0a615d8, 0x7f95f0a615d8, 8192, 0}, {0x7f95f0ef68f0, 0x7f95f0ef68f0, 8192, 0}, {0x7f95f0742470, 0x7f95f0742470, 8192, 0}, {0x7f95f0e64228, 0x7f95f0e64228, 8192, 0}, {0x7f95f0dc07c8, 0x7f95f0dc07c8, 24576, 0}, {0x7f95f0a48e70, 0x7f95f0a48e70, 8192, 0}, {0x7f95f0daace8, 0x7f95f0daace8, 8192, 0}, {0x7f95f09498e0, 0x7f95f09498e0, 8192, 0}, {0x7f95f0715740, 0x7f95f0715740, 16384, 0}, {0x7f95f0f071c8, 0x7f95f0f071c8, 8192, 0}, {0x7f95f0aac030, 0x7f95f0aac030, 8192, 0}, {0x7f95f0e62ab8, 0x7f95f0e62ab8, 8192, 0}, {0x7f95f093ba38, 0x7f95f093ba38, 8192, 0}, {0x7f95f0723cf0, 0x7f95f0723cf0, 8192, 0}, {0x7f95f096de20, 0x7f95f096de20, 8192, 0}}, {600, 0}) = 96
87014      0.000321 times({tms_utime=45415, tms_stime=15117, tms_cutime=0, tms_cstime=0}) = 439051459
87014      0.000459 times({tms_utime=45415, tms_stime=15117, tms_cutime=0, tms_cstime=0}) = 439051460
87014      0.000060 times({tms_utime=45415, tms_stime=15117, tms_cutime=0, tms_cstime=0}) = 439051460
87014      0.000036 times({tms_utime=45415, tms_stime=15117, tms_cutime=0, tms_cstime=0}) = 439051460
87014      0.000029 semtimedop(26935363, {{25, -1, 0}}, 1, {2, 990000000}) = -1 EAGAIN (Resource temporarily unavailable)
87014      2.990020 getrusage(RUSAGE_SELF, {ru_utime={454, 151000}, ru_stime={151, 178000}, ...}) = 0
87014      0.000104 getrusage(RUSAGE_SELF, {ru_utime={454, 151000}, ru_stime={151, 178000}, ...}) = 0
87014      0.000076 times({tms_utime=45415, tms_stime=15117, tms_cutime=0, tms_cstime=0}) = 439051759
87014      0.000092 times({tms_utime=45415, tms_stime=15117, tms_cutime=0, tms_cstime=0}) = 439051759
87014      0.000057 times({tms_utime=45415, tms_stime=15117, tms_cutime=0, tms_cstime=0}) = 439051759
87014      0.001685 io_submit(140281973911552, 91, {{0x7f95f096de20, 0, 1, 0, 266}, {0x7f95f0723cf0, 0, 1, 0, 262}, {0x7f95f093ba38, 0, 1, 0, 262}, {0x7f95f0e62ab8, 0, 1, 0, 267}, {0x7f95f0aac030, 0, 1, 0, 256}, {0x7f95f0f071c8, 0, 1, 0, 263}, {0x7f95f0715740, 0, 1, 0, 266}, {0x7f95f09498e0, 0, 1, 0, 267}, {0x7f95f0daace8, 0, 1, 0, 269}, {0x7f95f0a48e70, 0, 1, 0, 258}, {0x7f95f0dc07c8, 0, 1, 0, 269}, {0x7f95f0e64228, 0, 1, 0, 268}, {0x7f95f0742470, 0, 1, 0, 256}, {0x7f95f0ef68f0, 0, 1, 0, 271}, {0x7f95f0a615d8, 0, 1, 0, 271}, {0x7f95f0a2be60, 0, 1, 0, 267}, {0x7f95f0a983c8, 0, 1, 0, 267}, {0x7f95f0941f00, 0, 1, 0, 268}, {0x7f95f0ea0c78, 0, 1, 0, 266}, {0x7f95f0efaf40, 0, 1, 0, 261}, {0x7f95f0e69430, 0, 1, 0, 264}, {0x7f95f0d809e8, 0, 1, 0, 267}, {0x7f95f095e808, 0, 1, 0, 270}, {0x7f95f0f10a20, 0, 1, 0, 258}, {0x7f95f0e84cd0, 0, 1, 0, 269}, {0x7f95f0d9c4e0, 0, 1, 0, 260}, {0x7f95f0720000, 0, 1, 0, 270}, {0x7f95f0ea2898, 0, 1, 0, 256}, {0x7f95f0a209e8, 0, 1, 0, 258}, {0x7f95f0a262f8, 0, 1, 0, 268}, {0x7f95f0d88f80, 0, 1, 0, 256}, {0x7f95f0f05f08, 0, 1, 0, 263}, {0x7f95f0d7f980, 0, 1, 0, 263}, {0x7f95f096ace8, 0, 1, 0, 258}, {0x7f95f0727c38, 0, 1, 0, 259}, {0x7f95f0964118, 0, 1, 0, 259}, {0x7f95f0a56ac0, 0, 1, 0, 259}, {0x7f95f0745350, 0, 1, 0, 258}, {0x7f95f0e934d8, 0, 1, 0, 271}, {0x7f95f0a53988, 0, 1, 0, 269}, {0x7f95f0973be0, 0, 1, 0, 269}, {0x7f95f07204b0, 0, 1, 0, 265}, {0x7f95f097a558, 0, 1, 0, 265}, {0x7f95f0dc0570, 0, 1, 0, 265}, {0x7f95f0740f58, 0, 1, 0, 270}, {0x7f95f074c3d0, 0, 1, 0, 270}, {0x7f95f0a202e0, 0, 1, 0, 260}, {0x7f95f09778d0, 0, 1, 0, 266}, {0x7f95f0d781f8, 0, 1, 0, 262}, {0x7f95f0e7ab18, 0, 1, 0, 267}, {0x7f95f0ede890, 0, 1, 0, 270}, {0x7f95f0a967a8, 0, 1, 0, 269}, {0x7f95f0e830b0, 0, 1, 0, 262}, {0x7f95f0d7caa0, 0, 1, 0, 267}, {0x7f95f0709e18, 0, 1, 0, 267}, {0x7f95f0edef98, 0, 1, 0, 260}, {0x7f95f0a2f8f8, 0, 1, 0, 267}, {0x7f95f07494f0, 0, 1, 0, 264}, {0x7f95f0d9c288, 0, 1, 0, 260}, {0x7f95f0aa3cf0, 0, 1, 0, 270}, {0x7f95f095c4e0, 0, 1, 0, 260}, {0x7f95f0f01fc0, 0, 1, 0, 260}, {0x7f95f0959ab0, 0, 1, 0, 258}, {0x7f95f0d7ae80, 0, 1, 0, 267}, {0x7f95f0a4ec30, 0, 1, 0, 266}, {0x7f95f0e5c5f0, 0, 1, 0, 258}, {0x7f95f0939968, 0, 1, 0, 258}, {0x7f95f0740850, 0, 1, 0, 258}, {0x7f95f0743730, 0, 1, 0, 258}, {0x7f95f0aa2c88, 0, 1, 0, 260}, {0x7f95f0e5c398, 0, 1, 0, 260}, {0x7f95f097bf20, 0, 1, 0, 259}, {0x7f95f0a8a070, 0, 1, 0, 259}, {0x7f95f094adf8, 0, 1, 0, 259}, {0x7f95f0957e90, 0, 1, 0, 259}, {0x7f95f095a668, 0, 1, 0, 259}, {0x7f95f074b5c0, 0, 1, 0, 259}, {0x7f95f0e780e8, 0, 1, 0, 259}, {0x7f95f0a43560, 0, 1, 0, 259}, {0x7f95f071cec8, 0, 1, 0, 267}, {0x7f95f0d86550, 0, 1, 0, 267}, {0x7f95f0effa40, 0, 1, 0, 264}, {0x7f95f0747420, 0, 1, 0, 261}, {0x7f95f0719430, 0, 1, 0, 269}, {0x7f95f0a5d438, 0, 1, 0, 261}, {0x7f95f071b2a8, 0, 1, 0, 264}, {0x7f95f09734d8, 0, 1, 0, 264}, {0x7f95f0e6a240, 0, 1, 0, 260}, {0x7f95f0982d48, 0, 1, 0, 260}, {0x7f95f0a42e58, 0, 1, 0, 260}, {0x7f95f0a5f2b0, 0, 1, 0, 261}}) = 91
87014      0.005871 io_getevents(140281973911552, 14, 128, {{0x7f95f096de20, 0x7f95f096de20, 8192, 0}, {0x7f95f0723cf0, 0x7f95f0723cf0, 8192, 0}, {0x7f95f093ba38, 0x7f95f093ba38, 8192, 0}, {0x7f95f0e62ab8, 0x7f95f0e62ab8, 8192, 0}, {0x7f95f0aac030, 0x7f95f0aac030, 8192, 0}, {0x7f95f0f071c8, 0x7f95f0f071c8, 8192, 0}, {0x7f95f0715740, 0x7f95f0715740, 8192, 0}, {0x7f95f09498e0, 0x7f95f09498e0, 8192, 0}, {0x7f95f0daace8, 0x7f95f0daace8, 8192, 0}, {0x7f95f0a48e70, 0x7f95f0a48e70, 8192, 0}, {0x7f95f0dc07c8, 0x7f95f0dc07c8, 8192, 0}, {0x7f95f0e64228, 0x7f95f0e64228, 8192, 0}, {0x7f95f0742470, 0x7f95f0742470, 8192, 0}, {0x7f95f0ef68f0, 0x7f95f0ef68f0, 8192, 0}, {0x7f95f0a615d8, 0x7f95f0a615d8, 8192, 0}, {0x7f95f0a2be60, 0x7f95f0a2be60, 8192, 0}, {0x7f95f0a983c8, 0x7f95f0a983c8, 8192, 0}, {0x7f95f0941f00, 0x7f95f0941f00, 8192, 0}, {0x7f95f0ea0c78, 0x7f95f0ea0c78, 8192, 0}, {0x7f95f0efaf40, 0x7f95f0efaf40, 8192, 0}, {0x7f95f0e69430, 0x7f95f0e69430, 8192, 0}, {0x7f95f0d809e8, 0x7f95f0d809e8, 8192, 0}, {0x7f95f095e808, 0x7f95f095e808, 8192, 0}, {0x7f95f0f10a20, 0x7f95f0f10a20, 8192, 0}, {0x7f95f0e84cd0, 0x7f95f0e84cd0, 8192, 0}, {0x7f95f0d9c4e0, 0x7f95f0d9c4e0, 8192, 0}, {0x7f95f0720000, 0x7f95f0720000, 8192, 0}, {0x7f95f0ea2898, 0x7f95f0ea2898, 8192, 0}, {0x7f95f0a209e8, 0x7f95f0a209e8, 8192, 0}, {0x7f95f0a262f8, 0x7f95f0a262f8, 8192, 0}, {0x7f95f0d88f80, 0x7f95f0d88f80, 8192, 0}, {0x7f95f0f05f08, 0x7f95f0f05f08, 8192, 0}, {0x7f95f0d7f980, 0x7f95f0d7f980, 8192, 0}, {0x7f95f096ace8, 0x7f95f096ace8, 8192, 0}, {0x7f95f0727c38, 0x7f95f0727c38, 8192, 0}, {0x7f95f0964118, 0x7f95f0964118, 8192, 0}, {0x7f95f0a56ac0, 0x7f95f0a56ac0, 8192, 0}, {0x7f95f0745350, 0x7f95f0745350, 8192, 0}, {0x7f95f0e934d8, 0x7f95f0e934d8, 8192, 0}, {0x7f95f0973be0, 0x7f95f0973be0, 8192, 0}, {0x7f95f0a53988, 0x7f95f0a53988, 49152, 0}, {0x7f95f07204b0, 0x7f95f07204b0, 32768, 0}, {0x7f95f0dc0570, 0x7f95f0dc0570, 8192, 0}, {0x7f95f097a558, 0x7f95f097a558, 24576, 0}, {0x7f95f0740f58, 0x7f95f0740f58, 8192, 0}, {0x7f95f0a202e0, 0x7f95f0a202e0, 8192, 0}, {0x7f95f074c3d0, 0x7f95f074c3d0, 24576, 0}, {0x7f95f09778d0, 0x7f95f09778d0, 8192, 0}, {0x7f95f0d781f8, 0x7f95f0d781f8, 8192, 0}, {0x7f95f0e7ab18, 0x7f95f0e7ab18, 8192, 0}, {0x7f95f0ede890, 0x7f95f0ede890, 8192, 0}, {0x7f95f0a967a8, 0x7f95f0a967a8, 8192, 0}, {0x7f95f0e830b0, 0x7f95f0e830b0, 8192, 0}, {0x7f95f0d7caa0, 0x7f95f0d7caa0, 8192, 0}, {0x7f95f0709e18, 0x7f95f0709e18, 8192, 0}, {0x7f95f0edef98, 0x7f95f0edef98, 8192, 0}, {0x7f95f0a2f8f8, 0x7f95f0a2f8f8, 8192, 0}, {0x7f95f07494f0, 0x7f95f07494f0, 8192, 0}, {0x7f95f0d9c288, 0x7f95f0d9c288, 8192, 0}, {0x7f95f0aa3cf0, 0x7f95f0aa3cf0, 8192, 0}, {0x7f95f095c4e0, 0x7f95f095c4e0, 8192, 0}, {0x7f95f0f01fc0, 0x7f95f0f01fc0, 8192, 0}, {0x7f95f0959ab0, 0x7f95f0959ab0, 8192, 0}, {0x7f95f0d7ae80, 0x7f95f0d7ae80, 8192, 0}, {0x7f95f0a4ec30, 0x7f95f0a4ec30, 8192, 0}, {0x7f95f0e5c5f0, 0x7f95f0e5c5f0, 8192, 0}, {0x7f95f0939968, 0x7f95f0939968, 8192, 0}, {0x7f95f0740850, 0x7f95f0740850, 8192, 0}, {0x7f95f0743730, 0x7f95f0743730, 8192, 0}, {0x7f95f0e5c398, 0x7f95f0e5c398, 16384, 0}, {0x7f95f0aa2c88, 0x7f95f0aa2c88, 32768, 0}, {0x7f95f097bf20, 0x7f95f097bf20, 16384, 0}, {0x7f95f0a8a070, 0x7f95f0a8a070, 8192, 0}, {0x7f95f094adf8, 0x7f95f094adf8, 24576, 0}, {0x7f95f0957e90, 0x7f95f0957e90, 8192, 0}, {0x7f95f074b5c0, 0x7f95f074b5c0, 8192, 0}, {0x7f95f095a668, 0x7f95f095a668, 16384, 0}, {0x7f95f0e780e8, 0x7f95f0e780e8, 8192, 0}, {0x7f95f0a43560, 0x7f95f0a43560, 16384, 0}, {0x7f95f071cec8, 0x7f95f071cec8, 8192, 0}, {0x7f95f0d86550, 0x7f95f0d86550, 8192, 0}, 
{0x7f95f0effa40, 0x7f95f0effa40, 8192, 0}, {0x7f95f0747420, 0x7f95f0747420, 8192, 0}, {0x7f95f0719430, 0x7f95f0719430, 8192, 0}, {0x7f95f0a5d438, 0x7f95f0a5d438, 8192, 0}, {0x7f95f071b2a8, 0x7f95f071b2a8, 8192, 0}, {0x7f95f09734d8, 0x7f95f09734d8, 8192, 0}, {0x7f95f0e6a240, 0x7f95f0e6a240, 8192, 0}, {0x7f95f0982d48, 0x7f95f0982d48, 8192, 0}, {0x7f95f0a42e58, 0x7f95f0a42e58, 8192, 0}, {0x7f95f0a5f2b0, 0x7f95f0a5f2b0, 8192, 0}}, {600, 0}) = 91
87014      0.000308 times({tms_utime=45415, tms_stime=15118, tms_cutime=0, tms_cstime=0}) = 439051759
87014      0.000356 times({tms_utime=45415, tms_stime=15118, tms_cutime=0, tms_cstime=0}) = 439051759
87014      0.000103 times({tms_utime=45415, tms_stime=15118, tms_cutime=0, tms_cstime=0}) = 439051759
87014      0.000160 times({tms_utime=45415, tms_stime=15118, tms_cutime=0, tms_cstime=0}) = 439051759

The ASM instance's own dbw process also uses AIO:

[root@Lunardb1 ~]#  strace -fr -o /tmp/asm-dbw0-85795.log -p 85795
Process 85795 attached - interrupt to quit
^CProcess 85795 detached
[root@Lunardb1 ~]# 

[root@Lunardb1 ~]# cat /tmp/asm-dbw0-85795.log|grep io_submit
85795      0.000088 io_submit(140349435969536, 1, {{0x7fa5a63589b8, 0, 1, 0, 257}}) = 1
85795      0.000043 io_submit(140349435969536, 1, {{0x7fa5a63589b8, 0, 1, 0, 257}}) = 1
85795      0.000039 io_submit(140349435969536, 1, {{0x7fa5a63589b8, 0, 1, 0, 257}}) = 1
85795      0.000062 io_submit(140349435969536, 1, {{0x7fa5a63589b8, 0, 1, 0, 270}}) = 1
85795      0.000038 io_submit(140349435969536, 1, {{0x7fa5a63589b8, 0, 1, 0, 257}}) = 1
85795      0.000060 io_submit(140349435969536, 1, {{0x7fa5a63589b8, 0, 1, 0, 257}}) = 1
85795      0.000048 io_submit(140349435969536, 1, {{0x7fa5a63589b8, 0, 1, 0, 270}}) = 1
85795      0.000145 io_submit(140349435969536, 1, {{0x7fa5a63589b8, 0, 1, 0, 257}}) = 1
85795      0.000053 io_submit(140349435969536, 1, {{0x7fa5a63589b8, 0, 1, 0, 270}}) = 1
85795      0.000095 io_submit(140349435969536, 2, {{0x7fa5a63589b8, 0, 1, 0, 270}, {0x7fa5a6357248, 0, 1, 0, 257}}) = 2
85795      0.000042 io_submit(140349435969536, 1, {{0x7fa5a63589b8, 0, 1, 0, 257}}) = 1
[root@Lunardb1 ~]# 

Posted in ASM, FAQ | Tagged | Leave a comment