Verifying the location of the ASM disk header backup on Exadata

We all know that starting with Oracle 10.2.0.5, ASM automatically keeps a backup of the disk header block, stored in the second-to-last block of the second AU.
Normally, the default AU size is 1M, and each AU holds 256 blocks of 4k each: 256 * 4k = 1M.
The second AU therefore also holds 256 blocks (0~255), and the backup block sits at au=1 blkn=254 (AU and block numbers both count from 0).
For a concrete recovery case, see:
Repairing an ASM disk header corrupted by fdisk
On Exadata, however, the default AU is 4M, so each AU holds 1024 blocks, i.e. block 0 ~ block 1023.
The second-to-last block of the second AU is therefore au=1 blkn=1022 (again, AU and block numbers both count from 0).
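
A quick sanity check of that arithmetic before touching kfed (a minimal sketch with the default sizes hard-coded, not read from a real disk):

ausize=4194304   # default Exadata AU size, 4M
blksize=4096     # ASM metadata block size, 4k
echo $(( ausize / blksize - 2 ))   # prints 1022, i.e. the backup lives at aun=1 blkn=1022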

Let's verify that this is really the case:

col Attribute for a30
col Value for a30
SELECT g.name "Group name", a.name "Attribute", a.value "Value"
FROM v$asm_diskgroup g, v$asm_attribute a
WHERE g.group_number=a.group_number and a.name not like 'template%';
Group name                     Attribute                      Value
------------------------------ ------------------------------ ------------------------------
DATA_DM01                      au_size                        4194304   -------//  AU SIZE
DATA_DM01                      disk_repair_time               3.6h
DATA_DM01                      idp.boundary                   auto
DATA_DM01                      idp.type                       dynamic
DATA_DM01                      sector_size                    512       -------//  SECTOR SIZE
DATA_DM01                      access_control.umask           066
DATA_DM01                      compatible.rdbms               11.2.0.2.0
DATA_DM01                      compatible.advm                11.2.0.0.0
DATA_DM01                      cell.smart_scan_capable        TRUE
DATA_DM01                      access_control.enabled         FALSE
DATA_DM01                      compatible.asm                 11.2.0.2.0


[grid@dm01db01 ~]$ kfed read o/192.168.10.3/DATA_DM01_CD_00_dm01cel01 ausz=4m aun=0 blkn=0|grep ausize
kfdhdb.ausize:                  4194304 ; 0x0bc: 0x00400000					-------//  AU SIZE, 4M
[grid@dm01db01 ~]$ kfed read o/192.168.10.3/DATA_DM01_CD_00_dm01cel01 ausz=4m aun=0 blkn=0|grep blksize
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000					    -------//  ASM BLOCK SIZE, 4k
[grid@dm01db01 ~]$ kfed read o/192.168.10.3/DATA_DM01_CD_00_dm01cel01 ausz=4m aun=0 blkn=0|grep secsize
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200             -------//  SECTOR SIZE, 512 byte
[grid@dm01db01 ~]$ 


Note:
The kfed command here must specify ausz=4m; otherwise kfed assumes a 1M AU, computes the wrong offset, and reads the wrong block:

[grid@dm01db01 ~]$ kfed read o/192.168.10.3/DATA_DM01_CD_00_dm01cel01 aun=1 blkn=1022
kfbh.endian:                          1 ; 0x000: 0x01
kfbh.hard:                          130 ; 0x001: 0x82
kfbh.type:                           13 ; 0x002: KFBTYP_PST_NONE
kfbh.datfmt:                          1 ; 0x003: 0x01
kfbh.block.blk:              2147483902 ; 0x004: T=1 NUMB=0xfe
kfbh.block.obj:              2147483648 ; 0x008: TYPE=0x8 NUMB=0x0
kfbh.check:                    17662719 ; 0x00c: 0x010d82ff
kfbh.fcn.base:                        0 ; 0x010: 0x00000000
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000
kfbh.spare1:                          0 ; 0x018: 0x00000000
kfbh.spare2:                          0 ; 0x01c: 0x00000000
ERROR!!!, failed to get the oracore error message
[grid@dm01db01 ~]$ 

Specifying ausz=4m in kfed gives the correct result. (Without it, kfed computed the offset against a 1M AU, i.e. 1M + 1022*4k, about 5M, which on the real 4M layout lands inside the second AU, the PST AU, hence the KFBTYP_PST_NONE above.)

[grid@dm01db01 ~]$ kfed read o/192.168.10.3/DATA_DM01_CD_00_dm01cel01 ausz=4m aun=0 blkn=0     [grid@dm01db01 ~]$ kfed read o/192.168.10.3/DATA_DM01_CD_00_dm01cel01 ausz=4m aun=1 blkn=1022
kfbh.endian:                          1 ; 0x000: 0x01                                          kfbh.endian:                          1 ; 0x000: 0x01                                        
kfbh.hard:                          130 ; 0x001: 0x82                                          kfbh.hard:                          130 ; 0x001: 0x82                                        
kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD                               kfbh.type:                            1 ; 0x002: KFBTYP_DISKHEAD                             
kfbh.datfmt:                          1 ; 0x003: 0x01                                          kfbh.datfmt:                          1 ; 0x003: 0x01                                        
kfbh.block.blk:                       0 ; 0x004: T=0 NUMB=0x0                                  kfbh.block.blk:                    1022 ; 0x004: T=0 NUMB=0x3fe                    --- differs here
kfbh.block.obj:              2147483648 ; 0x008: TYPE=0x8 NUMB=0x0                             kfbh.block.obj:              2147483648 ; 0x008: TYPE=0x8 NUMB=0x0                          
kfbh.check:                   650486353 ; 0x00c: 0x26c5a251                                    kfbh.check:                   650486191 ; 0x00c: 0x26c5a1af                        --- differs here
kfbh.fcn.base:                  4985684 ; 0x010: 0x004c1354                                    kfbh.fcn.base:                  4985684 ; 0x010: 0x004c1354                                  
kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000                                    kfbh.fcn.wrap:                        0 ; 0x014: 0x00000000                                  
kfbh.spare1:                          0 ; 0x018: 0x00000000                                    kfbh.spare1:                          0 ; 0x018: 0x00000000                                  
kfbh.spare2:                          0 ; 0x01c: 0x00000000                                    kfbh.spare2:                          0 ; 0x01c: 0x00000000                                  
kfdhdb.driver.provstr:         ORCLDISK ; 0x000: length=8                                      kfdhdb.driver.provstr:         ORCLDISK ; 0x000: length=8                                    
kfdhdb.driver.reserved[0]:            0 ; 0x008: 0x00000000                                    kfdhdb.driver.reserved[0]:            0 ; 0x008: 0x00000000                                  
kfdhdb.driver.reserved[1]:            0 ; 0x00c: 0x00000000                                    kfdhdb.driver.reserved[1]:            0 ; 0x00c: 0x00000000                                  
kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000                                    kfdhdb.driver.reserved[2]:            0 ; 0x010: 0x00000000                                  
kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000                                    kfdhdb.driver.reserved[3]:            0 ; 0x014: 0x00000000                                  
kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000                                    kfdhdb.driver.reserved[4]:            0 ; 0x018: 0x00000000                                  
kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000                                    kfdhdb.driver.reserved[5]:            0 ; 0x01c: 0x00000000                                  
kfdhdb.compat:                186647040 ; 0x020: 0x0b200200                                    kfdhdb.compat:                186647040 ; 0x020: 0x0b200200                                  
kfdhdb.dsknum:                        0 ; 0x024: 0x0000                                        kfdhdb.dsknum:                        0 ; 0x024: 0x0000                                      
kfdhdb.grptyp:                        2 ; 0x026: KFDGTP_NORMAL                                 kfdhdb.grptyp:                        2 ; 0x026: KFDGTP_NORMAL                               
kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER                                 kfdhdb.hdrsts:                        3 ; 0x027: KFDHDR_MEMBER                               
kfdhdb.dskname:DATA_DM01_CD_00_DM01CEL01 ; 0x028: length=25                                    kfdhdb.dskname:DATA_DM01_CD_00_DM01CEL01 ; 0x028: length=25                                  
kfdhdb.grpname:               DATA_DM01 ; 0x048: length=9                                      kfdhdb.grpname:               DATA_DM01 ; 0x048: length=9                                    
kfdhdb.fgname:                DM01CEL01 ; 0x068: length=9                                      kfdhdb.fgname:                DM01CEL01 ; 0x068: length=9                                    
kfdhdb.capname:                         ; 0x088: length=0                                      kfdhdb.capname:                         ; 0x088: length=0                                    
kfdhdb.crestmp.hi:             32959025 ; 0x0a8: HOUR=0x11 DAYS=0x11 MNTH=0xa YEAR=0x7db       kfdhdb.crestmp.hi:             32959025 ; 0x0a8: HOUR=0x11 DAYS=0x11 MNTH=0xa YEAR=0x7db     
kfdhdb.crestmp.lo:           1716041728 ; 0x0ac: USEC=0x0 MSEC=0x22e SECS=0x24 MINS=0x19       kfdhdb.crestmp.lo:           1716041728 ; 0x0ac: USEC=0x0 MSEC=0x22e SECS=0x24 MINS=0x19     
kfdhdb.mntstmp.hi:             32989736 ; 0x0b0: HOUR=0x8 DAYS=0x11 MNTH=0x8 YEAR=0x7dd        kfdhdb.mntstmp.hi:             32989736 ; 0x0b0: HOUR=0x8 DAYS=0x11 MNTH=0x8 YEAR=0x7dd      
kfdhdb.mntstmp.lo:           1753117696 ; 0x0b4: USEC=0x0 MSEC=0x39d SECS=0x7 MINS=0x1a        kfdhdb.mntstmp.lo:           1753117696 ; 0x0b4: USEC=0x0 MSEC=0x39d SECS=0x7 MINS=0x1a      
kfdhdb.secsize:                     512 ; 0x0b8: 0x0200                                        kfdhdb.secsize:                     512 ; 0x0b8: 0x0200                                      
kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000                                        kfdhdb.blksize:                    4096 ; 0x0ba: 0x1000                                      
kfdhdb.ausize:                  4194304 ; 0x0bc: 0x00400000                                    kfdhdb.ausize:                  4194304 ; 0x0bc: 0x00400000                                  
kfdhdb.mfact:                    454272 ; 0x0c0: 0x0006ee80                                    kfdhdb.mfact:                    454272 ; 0x0c0: 0x0006ee80                                  
kfdhdb.dsksize:                  108288 ; 0x0c4: 0x0001a700                                    kfdhdb.dsksize:                  108288 ; 0x0c4: 0x0001a700                                  
kfdhdb.pmcnt:                         2 ; 0x0c8: 0x00000002                                    kfdhdb.pmcnt:                         2 ; 0x0c8: 0x00000002                                  
kfdhdb.fstlocn:                       1 ; 0x0cc: 0x00000001                                    kfdhdb.fstlocn:                       1 ; 0x0cc: 0x00000001                                  
kfdhdb.altlocn:                       2 ; 0x0d0: 0x00000002                                    kfdhdb.altlocn:                       2 ; 0x0d0: 0x00000002                                  
kfdhdb.f1b1locn:                      2 ; 0x0d4: 0x00000002                                    kfdhdb.f1b1locn:                      2 ; 0x0d4: 0x00000002                                  
kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000                                        kfdhdb.redomirrors[0]:                0 ; 0x0d8: 0x0000                                      
kfdhdb.redomirrors[1]:               17 ; 0x0da: 0x0011                                        kfdhdb.redomirrors[1]:               17 ; 0x0da: 0x0011                                      
kfdhdb.redomirrors[2]:               30 ; 0x0dc: 0x001e                                        kfdhdb.redomirrors[2]:               30 ; 0x0dc: 0x001e                                      
kfdhdb.redomirrors[3]:            65535 ; 0x0de: 0xffff                                        kfdhdb.redomirrors[3]:            65535 ; 0x0de: 0xffff                                      
kfdhdb.dbcompat:              186647040 ; 0x0e0: 0x0b200200                                    kfdhdb.dbcompat:              186647040 ; 0x0e0: 0x0b200200                                  
kfdhdb.grpstmp.hi:             32959025 ; 0x0e4: HOUR=0x11 DAYS=0x11 MNTH=0xa YEAR=0x7db       kfdhdb.grpstmp.hi:             32959025 ; 0x0e4: HOUR=0x11 DAYS=0x11 MNTH=0xa YEAR=0x7db     
kfdhdb.grpstmp.lo:           1713335296 ; 0x0e8: USEC=0x0 MSEC=0x3db SECS=0x21 MINS=0x19       kfdhdb.grpstmp.lo:           1713335296 ; 0x0e8: USEC=0x0 MSEC=0x3db SECS=0x21 MINS=0x19     
kfdhdb.vfstart:                       0 ; 0x0ec: 0x00000000                                    kfdhdb.vfstart:                       0 ; 0x0ec: 0x00000000                                  
kfdhdb.vfend:                         0 ; 0x0f0: 0x00000000                                    kfdhdb.vfend:                         0 ; 0x0f0: 0x00000000                                  
kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000                                    kfdhdb.spfile:                        0 ; 0x0f4: 0x00000000                                  
kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000                                    kfdhdb.spfflg:                        0 ; 0x0f8: 0x00000000                                  
kfdhdb.ub4spare[0]:                   0 ; 0x0fc: 0x00000000                                    kfdhdb.ub4spare[0]:                   0 ; 0x0fc: 0x00000000                                  
kfdhdb.ub4spare[1]:                   0 ; 0x100: 0x00000000                                    kfdhdb.ub4spare[1]:                   0 ; 0x100: 0x00000000                                  
kfdhdb.ub4spare[2]:                   0 ; 0x104: 0x00000000                                    kfdhdb.ub4spare[2]:                   0 ; 0x104: 0x00000000                                  
kfdhdb.ub4spare[3]:                   0 ; 0x108: 0x00000000                                    kfdhdb.ub4spare[3]:                   0 ; 0x108: 0x00000000                                  
kfdhdb.ub4spare[4]:                   0 ; 0x10c: 0x00000000                                    kfdhdb.ub4spare[4]:                   0 ; 0x10c: 0x00000000                                  
kfdhdb.ub4spare[5]:                   0 ; 0x110: 0x00000000                                    kfdhdb.ub4spare[5]:                   0 ; 0x110: 0x00000000                                  
kfdhdb.ub4spare[6]:                   0 ; 0x114: 0x00000000                                    kfdhdb.ub4spare[6]:                   0 ; 0x114: 0x00000000                                  
kfdhdb.ub4spare[7]:                   0 ; 0x118: 0x00000000                                    kfdhdb.ub4spare[7]:                   0 ; 0x118: 0x00000000                                  
kfdhdb.ub4spare[8]:                   0 ; 0x11c: 0x00000000                                    kfdhdb.ub4spare[8]:                   0 ; 0x11c: 0x00000000                                  
kfdhdb.ub4spare[9]:                   0 ; 0x120: 0x00000000                                    kfdhdb.ub4spare[9]:                   0 ; 0x120: 0x00000000                                  
kfdhdb.ub4spare[10]:                  0 ; 0x124: 0x00000000                                    kfdhdb.ub4spare[10]:                  0 ; 0x124: 0x00000000                                  
kfdhdb.ub4spare[11]:                  0 ; 0x128: 0x00000000                                    kfdhdb.ub4spare[11]:                  0 ; 0x128: 0x00000000                                  
kfdhdb.ub4spare[12]:                  0 ; 0x12c: 0x00000000                                    kfdhdb.ub4spare[12]:                  0 ; 0x12c: 0x00000000                                  
kfdhdb.ub4spare[13]:                  0 ; 0x130: 0x00000000                                    kfdhdb.ub4spare[13]:                  0 ; 0x130: 0x00000000                                  
kfdhdb.ub4spare[14]:                  0 ; 0x134: 0x00000000                                    kfdhdb.ub4spare[14]:                  0 ; 0x134: 0x00000000                                  
kfdhdb.ub4spare[15]:                  0 ; 0x138: 0x00000000                                    kfdhdb.ub4spare[15]:                  0 ; 0x138: 0x00000000                                  
kfdhdb.ub4spare[16]:                  0 ; 0x13c: 0x00000000                                    kfdhdb.ub4spare[16]:                  0 ; 0x13c: 0x00000000                                  
kfdhdb.ub4spare[17]:                  0 ; 0x140: 0x00000000                                    kfdhdb.ub4spare[17]:                  0 ; 0x140: 0x00000000                                  
kfdhdb.ub4spare[18]:                  0 ; 0x144: 0x00000000                                    kfdhdb.ub4spare[18]:                  0 ; 0x144: 0x00000000                                  
kfdhdb.ub4spare[19]:                  0 ; 0x148: 0x00000000                                    kfdhdb.ub4spare[19]:                  0 ; 0x148: 0x00000000                                  
kfdhdb.ub4spare[20]:                  0 ; 0x14c: 0x00000000                                    kfdhdb.ub4spare[20]:                  0 ; 0x14c: 0x00000000                                  
kfdhdb.ub4spare[21]:                  0 ; 0x150: 0x00000000                                    kfdhdb.ub4spare[21]:                  0 ; 0x150: 0x00000000                                  
kfdhdb.ub4spare[22]:                  0 ; 0x154: 0x00000000                                    kfdhdb.ub4spare[22]:                  0 ; 0x154: 0x00000000                                  
kfdhdb.ub4spare[23]:                  0 ; 0x158: 0x00000000                                    kfdhdb.ub4spare[23]:                  0 ; 0x158: 0x00000000                                  
kfdhdb.ub4spare[24]:                  0 ; 0x15c: 0x00000000                                    kfdhdb.ub4spare[24]:                  0 ; 0x15c: 0x00000000                                  
kfdhdb.ub4spare[25]:                  0 ; 0x160: 0x00000000                                    kfdhdb.ub4spare[25]:                  0 ; 0x160: 0x00000000                                  
kfdhdb.ub4spare[26]:                  0 ; 0x164: 0x00000000                                    kfdhdb.ub4spare[26]:                  0 ; 0x164: 0x00000000                                  
kfdhdb.ub4spare[27]:                  0 ; 0x168: 0x00000000                                    kfdhdb.ub4spare[27]:                  0 ; 0x168: 0x00000000                                  
kfdhdb.ub4spare[28]:                  0 ; 0x16c: 0x00000000                                    kfdhdb.ub4spare[28]:                  0 ; 0x16c: 0x00000000                                  
kfdhdb.ub4spare[29]:                  0 ; 0x170: 0x00000000                                    kfdhdb.ub4spare[29]:                  0 ; 0x170: 0x00000000                                  
kfdhdb.ub4spare[30]:                  0 ; 0x174: 0x00000000                                    kfdhdb.ub4spare[30]:                  0 ; 0x174: 0x00000000                                  
kfdhdb.ub4spare[31]:                  0 ; 0x178: 0x00000000                                    kfdhdb.ub4spare[31]:                  0 ; 0x178: 0x00000000                                  
kfdhdb.ub4spare[32]:                  0 ; 0x17c: 0x00000000                                    kfdhdb.ub4spare[32]:                  0 ; 0x17c: 0x00000000                                  
kfdhdb.ub4spare[33]:                  0 ; 0x180: 0x00000000                                    kfdhdb.ub4spare[33]:                  0 ; 0x180: 0x00000000                                  
kfdhdb.ub4spare[34]:                  0 ; 0x184: 0x00000000                                    kfdhdb.ub4spare[34]:                  0 ; 0x184: 0x00000000                                  
kfdhdb.ub4spare[35]:                  0 ; 0x188: 0x00000000                                    kfdhdb.ub4spare[35]:                  0 ; 0x188: 0x00000000                                  
kfdhdb.ub4spare[36]:                  0 ; 0x18c: 0x00000000                                    kfdhdb.ub4spare[36]:                  0 ; 0x18c: 0x00000000                                  
kfdhdb.ub4spare[37]:                  0 ; 0x190: 0x00000000                                    kfdhdb.ub4spare[37]:                  0 ; 0x190: 0x00000000                                  
kfdhdb.ub4spare[38]:                  0 ; 0x194: 0x00000000                                    kfdhdb.ub4spare[38]:                  0 ; 0x194: 0x00000000                                  
kfdhdb.ub4spare[39]:                  0 ; 0x198: 0x00000000                                    kfdhdb.ub4spare[39]:                  0 ; 0x198: 0x00000000                                  
kfdhdb.ub4spare[40]:                  0 ; 0x19c: 0x00000000                                    kfdhdb.ub4spare[40]:                  0 ; 0x19c: 0x00000000                                  
kfdhdb.ub4spare[41]:                  0 ; 0x1a0: 0x00000000                                    kfdhdb.ub4spare[41]:                  0 ; 0x1a0: 0x00000000                                  
kfdhdb.ub4spare[42]:                  0 ; 0x1a4: 0x00000000                                    kfdhdb.ub4spare[42]:                  0 ; 0x1a4: 0x00000000                                  
kfdhdb.ub4spare[43]:                  0 ; 0x1a8: 0x00000000                                    kfdhdb.ub4spare[43]:                  0 ; 0x1a8: 0x00000000                                  
kfdhdb.ub4spare[44]:                  0 ; 0x1ac: 0x00000000                                    kfdhdb.ub4spare[44]:                  0 ; 0x1ac: 0x00000000                                  
kfdhdb.ub4spare[45]:                  0 ; 0x1b0: 0x00000000                                    kfdhdb.ub4spare[45]:                  0 ; 0x1b0: 0x00000000                                  
kfdhdb.ub4spare[46]:                  0 ; 0x1b4: 0x00000000                                    kfdhdb.ub4spare[46]:                  0 ; 0x1b4: 0x00000000                                  
kfdhdb.ub4spare[47]:                  0 ; 0x1b8: 0x00000000                                    kfdhdb.ub4spare[47]:                  0 ; 0x1b8: 0x00000000                                  
kfdhdb.ub4spare[48]:                  0 ; 0x1bc: 0x00000000                                    kfdhdb.ub4spare[48]:                  0 ; 0x1bc: 0x00000000                                  
kfdhdb.ub4spare[49]:                  0 ; 0x1c0: 0x00000000                                    kfdhdb.ub4spare[49]:                  0 ; 0x1c0: 0x00000000                                  
kfdhdb.ub4spare[50]:                  0 ; 0x1c4: 0x00000000                                    kfdhdb.ub4spare[50]:                  0 ; 0x1c4: 0x00000000                                  
kfdhdb.ub4spare[51]:                  0 ; 0x1c8: 0x00000000                                    kfdhdb.ub4spare[51]:                  0 ; 0x1c8: 0x00000000                                  
kfdhdb.ub4spare[52]:                  0 ; 0x1cc: 0x00000000                                    kfdhdb.ub4spare[52]:                  0 ; 0x1cc: 0x00000000                                  
kfdhdb.ub4spare[53]:                  0 ; 0x1d0: 0x00000000                                    kfdhdb.ub4spare[53]:                  0 ; 0x1d0: 0x00000000                                  
kfdhdb.acdb.aba.seq:                  0 ; 0x1d4: 0x00000000                                    kfdhdb.acdb.aba.seq:                  0 ; 0x1d4: 0x00000000                                  
kfdhdb.acdb.aba.blk:                  0 ; 0x1d8: 0x00000000                                    kfdhdb.acdb.aba.blk:                  0 ; 0x1d8: 0x00000000                                  
kfdhdb.acdb.ents:                     0 ; 0x1dc: 0x0000                                        kfdhdb.acdb.ents:                     0 ; 0x1dc: 0x0000                                      
kfdhdb.acdb.ub2spare:                 0 ; 0x1de: 0x0000                                        kfdhdb.acdb.ub2spare:                 0 ; 0x1de: 0x0000                                      
[grid@dm01db01 ~]$                                                                             [grid@dm01db01 ~]$                                                                           


So the check confirms that the rule still holds on Exadata: apart from the default 4M AU, nothing in ASM has changed, O(∩_∩)O haha~

Incidentally, here is a quick way to compute the backup block's position (adapted from ASM Support Guy):

ausize=`kfed read o/192.168.10.3/DATA_DM01_CD_00_dm01cel01 ausz=4m | grep ausize | tr -s ' ' | cut -d' ' -f2`
blksize=`kfed read o/192.168.10.3/DATA_DM01_CD_00_dm01cel01 ausz=4m | grep blksize | tr -s ' ' | cut -d' ' -f2`
let n=$ausize/$blksize-2
echo $n

[grid@dm01db01 ~]$ ausize=`kfed read o/192.168.10.3/DATA_DM01_CD_00_dm01cel01 ausz=4m | grep ausize | tr -s ' ' | cut -d' ' -f2`
[grid@dm01db01 ~]$ 
[grid@dm01db01 ~]$ blksize=`kfed read o/192.168.10.3/DATA_DM01_CD_00_dm01cel01 ausz=4m | grep blksize | tr -s ' ' | cut -d' ' -f2`
[grid@dm01db01 ~]$ 
[grid@dm01db01 ~]$ let n=$ausize/$blksize-2
[grid@dm01db01 ~]$ echo $n
1022
[grid@dm01db01 ~]$ 
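
And to close the loop, a minimal sketch that compares the live header with its backup copy (same disk path as above; per the side-by-side dump earlier, only two fields should differ):

disk=o/192.168.10.3/DATA_DM01_CD_00_dm01cel01
kfed read $disk ausz=4m aun=0 blkn=0    > /tmp/hdr.txt
kfed read $disk ausz=4m aun=1 blkn=1022 > /tmp/bak.txt
diff /tmp/hdr.txt /tmp/bak.txt    # expect only kfbh.block.blk and kfbh.check to differ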

/etc/init.d/cell.d and celladmin on Exadata storage cells

On a cell node, Oracle runs /etc/init.d/cell.d at system boot. Running as the celladmin OS user, it first starts the Restart Server (RS) with "alter cell startup services rs".
After sleeping for one second, the script starts all remaining services, including CELLSRV and MS, with "alter cell startup services all".
/etc/init.d/cell.d also contains checks for failures and incorrect configuration; if any are found, Oracle tries to restart from the last known good state.
The relevant fragment of /etc/init.d/cell.d:
#this is just to make sure that if MS was running then it is down (with/without RS)
su celladmin -c ". /etc/profile.d/cell_env.sh; cellcli -e \"alter cell startup services rs\"; sleep 1; cellcli -e \"alter cell shutdown services ms\""
perl $OSS_SCRIPTS_HOME/msCheck.pl stop_ms

This script matters. During one Exadata image upgrade, a cell failed to upgrade; following the error messages and the patchmgr upgrade script, we found that patchmgr actually calls dostep.sh, and the failing fragment is:


check_cell_services ()
{
  local -i ret_code=0
  local -i running_services=0
 
  running_services=`service celld status | grep -i running | wc -l`
  
  if [ $running_services -lt 3 ] && [ "X$g_rolling" == "Xnon_rolling" ]; then
    service celld restart > /dev/null 2>/dev/null
    running_services=`service celld status | grep -i running | wc -l`
  fi
  if [ $running_services -lt 3 ]; then
    echo "[ERROR] Can not continue. All 3 cell services do not seem to be up or able to come up."
    ret_code=1
  fi
  return $ret_code
}
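
The check is easy to reproduce by hand (a sketch; a healthy cell reports three running services):

service celld status | grep -i running | wc -l    # patchmgr expects 3 here: RS, MS and CELLSRV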

In other words, the upgrade could not continue because the cell's three core services would not come up.
So we started them manually with "service celld restart" and checked with "service celld status".
The manual start failed as well, with this error:

[root@dm01cel02 ~]# service celld start

CELL-01526: Local hostname mapping is inconsistent.  Verify cell /etc/hosts file content.
[root@dm01cel02 ~]# 

/etc/hosts on Exadata is normally written at install time by the onecommand scripts and holds the internal interconnect and switch IPs; all other addresses resolve through DNS. Since the installation had succeeded and nobody had changed the file since, /etc/hosts was unlikely to be the problem.
Indeed, on inspection the file was perfectly fine.
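
For reference, the kind of quick sanity checks we ran (a sketch, nothing cell-specific):

hostname                              # the cell's host name
hostname -i                           # the address it resolves to
grep -i $(hostname -s) /etc/hosts     # the local mapping the CELL-01526 message asks you to verify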

A restart attempt then reported a permission error:

[root@dm01cel02 ~]# service celld restart

Stopping the RS, CELLSRV, and MS services...
CELL-01509: Restart Server (RS) not responding.
Starting the RS, CELLSRV, and MS services...
CELL-01512: Cannot start a new Restart Server (RS).  Exception received: Permission denied
[root@dm01cel02 ~]# 
[root@dm01cel02 ~]# 

Checking status showed that none of the three key services could start:

[root@dm01cel02 ~]# service celld status
         rsStatus:               stopped
         msStatus:               unknown
         cellsrvStatus:          unknown
[root@dm01cel02 ~]# 

Comparing with a node that had upgraded successfully, celladmin there looks like this:

[celladmin@dm01cel05 ~]$ id celladmin
uid=1000(celladmin) gid=500(celladmin) groups=500(celladmin),502(cellusers) context=root:system_r:unconfined_t:s0-s0:c0.c1023
[celladmin@dm01cel05 ~]$ 

While on the faulty cell, celladmin looks like this:

[celladmin@dm01cel02 ~]$ id
uid=1000(celladmin) gid=500(celladmin) groups=500(celladmin),502(cellusers)
[celladmin@dm01cel02 ~]$ id celladmin
uid=1000(celladmin) gid=500(celladmin) groups=500(celladmin),502(cellusers)
[celladmin@dm01cel02 ~]$ 

A web search turned up nothing useful, and experimenting with the permissions did not fix it either. With a limited upgrade window, we traced the startup with strace:
strace -fo /tmp/service_celld_start.log service celld start
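
The trace then points straight at the refused syscall (a sketch of how we read it; EACCES is the errno behind "Permission denied"):

grep -n "EACCES" /tmp/service_celld_start.log | head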

We found that "service celld start" in fact calls the following function in /etc/init.d/cell.d:

start()
{
    dynamic_deploy
    su celladmin -c ". /etc/profile.d/cell_env.sh; cellcli -e \"alter cell startup services all\""
}

Likewise, "service celld stop" calls:

stop()
{
    su celladmin -c ". /etc/profile.d/cell_env.sh; cellcli -e \"alter cell shutdown services all\""
}

"service celld restart" calls:

restart()
{
    dynamic_deploy
    su celladmin -c ". /etc/profile.d/cell_env.sh; cellcli -e \"alter cell restart services all\"" 
}

And "service celld status" calls:

status()
{
    su celladmin -c ". /etc/profile.d/cell_env.sh; cellcli -e \"list cell attributes rsStatus, msStatus, cellsrvStatus detail\""
}

Clearly, the cell services are run as the celladmin user. So let's execute the same thing by hand as celladmin and see where the "Exception received: Permission denied" above comes from:

[root@dm01cel02 trace]# su - celladmin
[celladmin@dm01cel02 ~]$ sh -x cellcli -e alter cell restart services all
+ PS=/bin/ps
+ GREP=/bin/grep
+ SLEEP=/bin/sleep
+ ECHO=/bin/echo
+ MYPID=9280
+ MYPPID=9128
+ /bin/ps -fp 9128
+ /bin/grep @notty
+ /bin/grep ' sshd: '
+ '[' 1 -eq 0 ']'
+ SSHD_NOTTY_PARENT=0
+ [[ -z 1 ]]
+ [[ -z 1 ]]
+ [[ -z 1 ]]
+ JRE_HOME=/usr/java/jdk1.5.0_15/
+ JLINE=jline.ConsoleRunner
+ CMDECHO=0
+ AWK=/bin/awk
+ DEPENDENT_JARS_DIR=/opt/oracle/cell11.2.2.4.0_LINUX.X64_110929/cellsrv/deploy/dependencies
+ '[' '' -a '' '!=' '' ']'
+ for arg in '$*'
+ [[ -e == \-\n ]]
+ [[ -e == \-\m ]]
+ [[ -e == \-\e ]]
+ JLINE=
+ for arg in '$*'
+ [[ alter == \-\n ]]
+ [[ alter == \-\m ]]
+ [[ alter == \-\e ]]
+ for arg in '$*'
+ [[ cell == \-\n ]]
+ [[ cell == \-\m ]]
+ [[ cell == \-\e ]]
+ for arg in '$*'
+ [[ restart == \-\n ]]
+ [[ restart == \-\m ]]
+ [[ restart == \-\e ]]
+ for arg in '$*'
+ [[ services == \-\n ]]
+ [[ services == \-\m ]]
+ [[ services == \-\e ]]
+ for arg in '$*'
+ [[ all == \-\n ]]
+ [[ all == \-\m ]]
+ [[ all == \-\e ]]
+ pfile=/opt/oracle/cell11.2.2.4.0_LINUX.X64_110929/cellsrv/deploy/config/cellinit.ora
++ /bin/grep HTTP_PORT /opt/oracle/cell11.2.2.4.0_LINUX.X64_110929/cellsrv/deploy/config/cellinit.ora
++ /bin/awk '{print substr($0,11)}'
+ HTTP_PORT=8888
+ CELLCLI_JAVACMD='/usr/java/jdk1.5.0_15//bin/java -client -Dpid=9280 -classpath  /opt/oracle/cell11.2.2.4.0_LINUX.X64_110929/cellsrv/deploy/dependencies/jline-0.9.9.jar:/opt/oracle/cell11.2.2.4.0_LINUX.X64_110929/cellsrv/lib/ossmgmt-cli.jar:/opt/oracle/cell11.2.2.4.0_LINUX.X64_110929/cellsrv/deploy/dependencies/wsclient_extended.jar -Djava.util.logging.config.file=/opt/oracle/cell11.2.2.4.0_LINUX.X64_110929/cellsrv/deploy/config/jse-logging.properties  oracle.ossmgmt.ms.cli.CellCLI 8888 -e alter cell restart services all'
+ '[' 0 -ne 0 ']'
+ '[' 0 -eq 1 ']'
+ /usr/java/jdk1.5.0_15//bin/java -client -Dpid=9280 -classpath /opt/oracle/cell11.2.2.4.0_LINUX.X64_110929/cellsrv/deploy/dependencies/jline-0.9.9.jar:/opt/oracle/cell11.2.2.4.0_LINUX.X64_110929/cellsrv/lib/ossmgmt-cli.jar:/opt/oracle/cell11.2.2.4.0_LINUX.X64_110929/cellsrv/deploy/dependencies/wsclient_extended.jar -Djava.util.logging.config.file=/opt/oracle/cell11.2.2.4.0_LINUX.X64_110929/cellsrv/deploy/config/jse-logging.properties oracle.ossmgmt.ms.cli.CellCLI 8888 -e alter cell restart services all

Stopping the RS, CELLSRV, and MS services...
CELL-01509: Restart Server (RS) not responding.
Starting the RS, CELLSRV, and MS services...
CELL-01512: Cannot start a new Restart Server (RS).  Exception received: Permission denied
[celladmin@dm01cel02 ~]$ 

This reproduces the error exactly:
Stopping the RS, CELLSRV, and MS services...
CELL-01509: Restart Server (RS) not responding.
Starting the RS, CELLSRV, and MS services...
CELL-01512: Cannot start a new Restart Server (RS). Exception received: Permission denied

So why is permission denied? Let's try the same thing as root:

[root@dm01cel02 trace]# id
uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel)
[root@dm01cel02 trace]# sh -x cellcli -e alter cell restart services all
+ PS=/bin/ps
+ GREP=/bin/grep
+ SLEEP=/bin/sleep
+ ECHO=/bin/echo
+ MYPID=4356
+ MYPPID=8533
+ /bin/ps -fp 8533
+ /bin/grep ' sshd: '
+ /bin/grep @notty
+ '[' 1 -eq 0 ']'
+ SSHD_NOTTY_PARENT=0
+ [[ -z 1 ]]
+ [[ -z 1 ]]
+ [[ -z 1 ]]
+ JRE_HOME=/usr/java/jdk1.5.0_15/
+ JLINE=jline.ConsoleRunner
+ CMDECHO=0
+ AWK=/bin/awk
+ DEPENDENT_JARS_DIR=/opt/oracle/cell11.2.2.4.0_LINUX.X64_110929/cellsrv/deploy/dependencies
+ '[' '' -a '' '!=' '' ']'
+ for arg in '$*'
+ [[ -e == \-\n ]]
+ [[ -e == \-\m ]]
+ [[ -e == \-\e ]]
+ JLINE=
+ for arg in '$*'
+ [[ alter == \-\n ]]
+ [[ alter == \-\m ]]
+ [[ alter == \-\e ]]
+ for arg in '$*'
+ [[ cell == \-\n ]]
+ [[ cell == \-\m ]]
+ [[ cell == \-\e ]]
+ for arg in '$*'
+ [[ restart == \-\n ]]
+ [[ restart == \-\m ]]
+ [[ restart == \-\e ]]
+ for arg in '$*'
+ [[ services == \-\n ]]
+ [[ services == \-\m ]]
+ [[ services == \-\e ]]
+ for arg in '$*'
+ [[ all == \-\n ]]
+ [[ all == \-\m ]]
+ [[ all == \-\e ]]
+ pfile=/opt/oracle/cell11.2.2.4.0_LINUX.X64_110929/cellsrv/deploy/config/cellinit.ora
++ /bin/grep HTTP_PORT /opt/oracle/cell11.2.2.4.0_LINUX.X64_110929/cellsrv/deploy/config/cellinit.ora
++ /bin/awk '{print substr($0,11)}'
+ HTTP_PORT=8888
+ CELLCLI_JAVACMD='/usr/java/jdk1.5.0_15//bin/java -client -Dpid=4356 -classpath  /opt/oracle/cell11.2.2.4.0_LINUX.X64_110929/cellsrv/deploy/dependencies/jline-0.9.9.jar:/opt/oracle/cell11.2.2.4.0_LINUX.X64_110929/cellsrv/lib/ossmgmt-cli.jar:/opt/oracle/cell11.2.2.4.0_LINUX.X64_110929/cellsrv/deploy/dependencies/wsclient_extended.jar -Djava.util.logging.config.file=/opt/oracle/cell11.2.2.4.0_LINUX.X64_110929/cellsrv/deploy/config/jse-logging.properties  oracle.ossmgmt.ms.cli.CellCLI 8888 -e alter cell restart services all'
+ '[' 0 -ne 0 ']'
+ '[' 0 -eq 1 ']'
+ /usr/java/jdk1.5.0_15//bin/java -client -Dpid=4356 -classpath /opt/oracle/cell11.2.2.4.0_LINUX.X64_110929/cellsrv/deploy/dependencies/jline-0.9.9.jar:/opt/oracle/cell11.2.2.4.0_LINUX.X64_110929/cellsrv/lib/ossmgmt-cli.jar:/opt/oracle/cell11.2.2.4.0_LINUX.X64_110929/cellsrv/deploy/dependencies/wsclient_extended.jar -Djava.util.logging.config.file=/opt/oracle/cell11.2.2.4.0_LINUX.X64_110929/cellsrv/deploy/config/jse-logging.properties oracle.ossmgmt.ms.cli.CellCLI 8888 -e alter cell restart services all

Stopping the RS, CELLSRV, and MS services...
CELL-01509: Restart Server (RS) not responding.
Starting the RS, CELLSRV, and MS services...
Getting the state of RS services... 
 running
Starting CELLSRV services...
The STARTUP of CELLSRV services was successful.
Starting MS services...
The STARTUP of MS services was successful.
[root@dm01cel02 trace]# 

As you can see, the command succeeds when run as root.

Since the upgrade window was limited, we never found out why the celladmin user's permissions were denied, but it was certain that celladmin was broken on this cell, so we took the following measure:
1. Modify this cell's /etc/init.d/cell.d, replacing the "su celladmin" parts so the commands run directly as root. For example, change:

start()
{
    dynamic_deploy
    su celladmin -c ". /etc/profile.d/cell_env.sh; cellcli -e \"alter cell startup services all\""
}

to:

start()
{
    dynamic_deploy
    cellcli -e "alter cell startup services all"
}

With start, stop, restart, and status all replaced the same way (see the sketch below), the upgrade succeeded; from then on, the customer has to maintain this cell as root.
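
For completeness, a sketch of the other wrappers after the same edit (this assumes cellcli is on root's PATH, which is why the original sourced cell_env.sh):

stop()
{
    cellcli -e "alter cell shutdown services all"
}

status()
{
    cellcli -e "list cell attributes rsStatus, msStatus, cellsrvStatus detail"
}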

We did not follow up on the celladmin permissions themselves; the workaround no longer affects day-to-day use (leaving aside system-security and similar concerns).
Looking back, though, if there had been time, recreating the celladmin user by following the user-creation steps in the onecommand install scripts might have solved it...
Anyway, more options next time, O(∩_∩)O haha~


A roundup of the Exadata links from my old blog, plus a small update on Exadata X4

Some readers have recently been asking rather old Exadata questions, so here is a summary, with a small X4 update based on the white paper (so far I have never seen an X4 in the flesh, O(∩_∩)O haha~); I'm hoping for the first X4 project after the new year...:

The Exadata versions so far
In 2008 Oracle launched Exadata V1, the industry's first design built on this new architecture. A full rack had 256G of memory and 64 cores, with no Flash Cache.

From 2009 to late 2010, most Exadata hardware was the V2 series, usually on image 11.2.2.x; in 2011, image 11.2.2.4.2 became the common target (most sites upgraded to it, as it was relatively stable). V2 added the Flash Cache component for the first time, improving OLTP throughput. A full V2 rack: 576G of memory and 5.3T of flash.

In 2011 and 2012 the main hardware was Exadata X2, mostly on images 11.2.3.1.0 and 11.2.3.1.1, with larger disk capacity (504T), 96 CPU cores, and 1T of memory.

The Exadata X3 series arrived at the end of 2012, and with it, in early 2013, the image moved to 11.2.3.2.0 and 11.2.3.2.1 (the current mainstream 11.2.3.2.x release). Its most notable software feature was Write-Back Flash Cache (WBFC); on the hardware side, Flash Cache capacity grew sharply (22.4T), alongside 128 CPU cores and 2T of memory.

At the end of 2013 Oracle released the Exadata X4 series; the latest image is now 11.2.3.3.0 and the newest hardware is Exadata X4-2. A full rack grew again: 44.8T of Flash Cache, 192 CPU cores, and 4T of memory. The InfiniBand network also moved from active-backup to active-active bonding (bandwidth from 40Gb/s to 80Gb/s).

Software changes (image 11.2.3.3.0):
1. Converting between a 1/4 rack and a 1/8 rack, previously several commands, is now packaged into a single command (alter cell eighthRack=TRUE).
2. Automatic flash compression is included (again just one command; on the same image the command differs slightly between X3 and X4 hardware: alter cell flashCompress=TRUE vs. alter cell FlashCacheCompX3Support=TRUE; see the sketch after this list).
3. The disk controller battery can be replaced online; on the software side this too is a single command.
Also note that X4 interconnect cabling has changed considerably, because X4 ships without a spine switch by default.
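
A sketch of those one-liners as they would be run on a cell (commands as listed above; run the variant matching your hardware only):

cellcli -e "alter cell eighthRack=TRUE"                  # 1/4 rack -> 1/8 rack conversion
cellcli -e "alter cell flashCompress=TRUE"               # flash compression on X4
cellcli -e "alter cell FlashCacheCompX3Support=TRUE"     # flash compression variant on X3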

SAP on Exadata installation (a speculative walkthrough)
A few common Exadata testing methods I know (mainly DW scenarios)
Data Guard between Exadata and an ordinary PC server (VM)
Interconnecting Exalogic and Exadata (1/4 rack)
Making Exadata use the SDP protocol (mainly for Exalogic accessing Exadata)
Viewing KVM information from a remote terminal
Moving an Exadata from customer A to customer B: what needs to be done
Exadata health check and assessment
Exadata installation steps
Exadata factory-default IP addresses
Exadata onecommand
The power of the Exadata Flash Cache
How many IP addresses does an Exadata configuration need?
Where to find the official Exadata documentation
Firewall ports required by Exadata


A painful database repair, and a reminder that Oracle 11.2 really is robust

For the full threads, see:
http://t.askmaclean.com/thread-3790-1-1.html
http://www.itpub.net/thread-1839128-1-1.html

This was purely for my own amusement, with no practical significance. Along the way, a few findings from the testing (the database is still not open; I'll keep poking at it, since some methods remain untried, O(∩_∩)O haha~):
First of all, 11.2 is impressively robust: many errors that used to be fatal can now be fixed, and once the database opens you can run all kinds of corruption-related checks.
1. From 11.2 on, the "control file autobackup completed" message is produced by the m000 process, which also does plenty of other work, but only when another process triggers it.
2. DBA_TABLESPACES and V$TABLESPACE have different sources: one comes from the control file, the other from the base table ts$.
3. ts$ and file$ cannot skip numbers.
4. DBMS_HM (the Health Monitor) is powerful: it periodically checks many parts of the database and has m000 write the findings to trace files.
...

I no longer remember every step, but the reproduction went roughly like this:
My environment: DB 11.2.0.3 on OEL 5.8.
1. Create two tablespaces (one would do, any number works; two makes things clearer), switch logs, then at the OS level remove those data files along with the UNDOTBS data file (ordinary data files plus the undo file are enough).
2. On startup, the database complains that a file is missing or corrupt; offline drop the offending file and the database should open. Meanwhile m000 keeps generating traces, as the Health Monitor repeatedly triggers it to record all the remaining corruption.
3. Find a way to clean the problem rollback segments out of undo$.
4. Create a new ordinary data tablespace, e.g. "UNDTBS333", but set UNDO_TABLESPACE to this ordinary tablespace (scope=spfile) and restart the database. -------- This was my first mistake.
5. The database raised errors; I forget the specifics and how I fixed them, but as I recall it was the usual undo hidden parameters and such, after which I created a proper undo tablespace (create undo tablespace ...) for the database to use.
6. With that resolved, the database opened normally. Then: delete from ts$ where name='UNDTBS333' (the ordinary data tablespace from the earlier mis-step). I did this cleanup by hand without properly weighing the risk.
In fact, a fully manual approach can work too; I later worked out what needed manual cleanup, but at the time I simply did not think it through and mis-operated...
7. Even then, the database could still open without major problems. A DBMS_HM.RUN_CHECK('Dictionary Integrity Check', 'lunar-ck-Dict') run (see the sketch after this list) showed that only undo$, ts$, and file$ were inconsistent; no other objects were affected (no user data had been added during the test).
But then, in a rush of blood, I made my second mistake: recreating the control file.
At the time, V$TABLESPACE showed incorrect contents while DBA_TABLESPACES was correct. Their definitions show that v$tablespace is built on the base table x$kccts, i.e. on control file information, whereas dba_tablespaces is built on ts$.
Hence the foolish idea of rebuilding the control file....
8. Rebuilding the control file itself raised errors, as you'd expect... these could later be worked around; I forget the details, but it was not hard.
9. The database never came up again.
.....
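
A sketch of the Health Monitor check referenced in step 7, run from the shell (run name as used above; the width settings are mine):

sqlplus -s "/ as sysdba" <<'EOF'
set long 100000 pagesize 0
exec dbms_hm.run_check('Dictionary Integrity Check', 'lunar-ck-Dict');
select dbms_hm.get_run_report('lunar-ck-Dict') from dual;
EOF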

Tracing this with 10046 showed the statement below failing. Checking the oracle binary, it is easy to see why: when the database opens, it first performs inserts against file$, locating the file$ blocks via bootstrap$ and inserting the full contents of the table (entries in deleted status are then deleted again in a subsequent step):
select blocks,NVL(ts#,-1),status$,NVL(relfile#,0),maxextend,inc, crscnwrp,crscnbas,NVL(spare1,0) from file$ where file#=:1

That gave me the idea of using bbed to modify file$, marking the data file of the tablespace I had deleted from ts$ as dropped. Testing showed that ts$ and file$ cannot skip numbers: if, say, file# 3 is set to deleted status, then when the control file is recreated, every file# > 3 gets discarded during the consistency check...

So I had to restore file$, and then turned to restoring the deleted record in ts$, but I never found a good way to do that...
The next error can again be located with a 10046 trace. The earlier problem was easy to solve; once it was fixed, the following error appeared:
select name,online$,contents$,undofile#,undoblock#,blocksize,dflmaxext,dflinit,dflincr,dflextpct,dflminext, dflminlen, owner#,scnwrp,scnbas, NVL(pitrscnwrp, 0), NVL(pitrscnbas, 0), dflogging, bitmapped, inc#, flags, plugged, NVL(spare1,0), NVL(spare2,0), affstrength from ts$ where ts#=:1
This one I never managed to get around. Patching the oracle binary should be one way, but I am not fluent with it, and it is the weapon of last resort anyway; I don't feel things have reached that point yet...

So instead I looked at skipping the dictionary check altogether, which gdb can do:

(gdb) commands 
Type commands for when breakpoint 1 is hit, one per line.
End with a line saying just "end".
>set *0x60023388=0x0 
>cont 
>end 
(gdb) cont 
Continuing.

Breakpoint 1, 0x00000000022370e0 in kcfckdb ()

Program received signal SIGSEGV, Segmentation fault.
0x0000000002cbe19f in slaac_int ()
(gdb) 
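
For context, the transcript above picks up mid-session; the implied setup (a sketch, reconstructed from the kcfckdb breakpoint that gdb reports) would be roughly:

gdb -p <ospid of the foreground oracle process>    # <...> is a placeholder, not a literal value
(gdb) break kcfckdb                                # the dictionary-check function seen in the transcript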

Then, on the next attempt to open the database with resetlogs:

Sat Nov 30 12:03:37 2013
ARC3 started with pid=21, OS id=29788 
Undo initialization finished serial:0 start:127664534 end:127664564 diff:30 (0 seconds)
Verifying file header compatibility for 11g tablespace encryption..
Errors in file /u01/app/oracle/diag/rdbms/bb/bb/trace/bb_ora_28960.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-00376: file 1 cannot be read at this time
ORA-01110: data file 1: '/stage/system01.dbf'
Errors in file /u01/app/oracle/diag/rdbms/bb/bb/trace/bb_ora_28960.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-00376: file 1 cannot be read at this time
ORA-01110: data file 1: '/stage/system01.dbf'
Error 604 happened during db open, shutting down database
USER (ospid: 28960): terminating the instance due to error 604
Sat Nov 30 12:08:38 2013
USER (ospid: 29792): terminating the instance due to error 604
Termination issued to instance processes. Waiting for the processes to exit
Sat Nov 30 12:08:48 2013
Instance termination failed to kill one or more processes

So what I need now is the right breakpoint, and a way to set it, so that the database skips "Verifying file header compatibility for 11g tablespace encryption".


Exadata HC: checking whether any hard disk needs replacement

For an Exadata health check, we usually collect the following:
1. exachk
2. sundiag
3. diagcollect (from GI version 11.2.0.4.x on, TFA Collector can be used)
4. AWR
5. alert logs from the DB nodes and cell nodes
6. OSWatcher
Depending on whether the above turns up anything abnormal, CheckHWnFWProfile and others may also be needed....

This post focuses on how to identify disk failures in that material.

++++++++++++++++++++++++++ Check the cell alert history for messages indicating a disk needs replacement:
Collect the alert history from every cell:
dcli -g cell_group -l root "cellcli -e list alerthistory"

Then grep for the key phrases:

grep "PREDICTIVE FAILURE" cell-alerthistory.txt
grep "Logical drive status changed" cell-alerthistory.txt
grep "NOT PRESENT" cell-alerthistory*.txt
grep "POOR PERFORMANCE" cell-alerthistory*.txt
grep "critical" cell-alerthistory.txt

For example:

[root@lunar tmp]# grep "NOT PRESENT" cell-alerthistory*.txt
cell-alerthistory1.txt:dm01cel05: 44_1  2013-08-13T16:13:07+08:00       critical        "Hard disk was removed.  Status        : NOT PRESENT  Manufacturer  : HITACHI  Model Number  : HUS1560SCSUN600G  Size          : 600G  Serial Number : 1216KLMTLN  Firmware      : A700  Slot Number   : 0  Cell Disk     : CD_00_dm01cel05  Grid Disk     : RECO_DM01_CD_00_dm01cel05, DATA_DM01_CD_00_dm01cel05"
cell-alerthistory1.txt:dm01cel05: 47_1  2013-09-24T00:16:46+08:00       critical        "Hard disk was removed.  Status        : NOT PRESENT  Manufacturer  : HITACHI  Model Number  : HUS1560SCSUN600G  Size          : 600G  Serial Number : 1216KLMTLN  Firmware      : A700  Slot Number   : 0  Cell Disk     : CD_00_dm01cel05  Grid Disk     : RECO_DM01_CD_00_dm01cel05, DATA_DM01_CD_00_dm01cel05"
cell-alerthistory1.txt:dm01cel05: 48_1  2013-09-25T04:45:31+08:00       critical        "Hard disk was removed.  Status        : NOT PRESENT  Manufacturer  : HITACHI  Model Number  : HUS1560SCSUN600G  Size          : 600G  Serial Number : 1216KLMTLN  Firmware      : A700  Slot Number   : 0  Cell Disk     : CD_00_dm01cel05  Grid Disk     : RECO_DM01_CD_00_dm01cel05, DATA_DM01_CD_00_dm01cel05"
cell-alerthistory1.txt:dm01cel05: 56_1  2013-10-25T16:29:21+08:00       critical        "Hard disk was removed.  Status        : NOT PRESENT  Manufacturer  : HITACHI  Model Number  : HUS1560SCSUN600G  Size          : 600G  Serial Number : 1216KLMTLN  Firmware      : A700  Slot Number   : 0  Cell Disk     : CD_00_dm01cel05  Grid Disk     : RECO_DM01_CD_00_dm01cel05, DATA_DM01_CD_00_dm01cel05"
cell-alerthistory1.txt:dm01cel05: 57_1  2013-10-28T01:26:58+08:00       critical        "Hard disk was removed.  Status        : NOT PRESENT  Manufacturer  : HITACHI  Model Number  : HUS1560SCSUN600G  Size          : 600G  Serial Number : 1216KLMTLN  Firmware      : A700  Slot Number   : 0  Cell Disk     : CD_00_dm01cel05  Grid Disk     : RECO_DM01_CD_00_dm01cel05, DATA_DM01_CD_00_dm01cel05"
cell-alerthistory1.txt:dm01cel05: 59_1  2013-10-30T10:24:27+08:00       critical        "Hard disk was removed.  Status        : NOT PRESENT  Manufacturer  : HITACHI  Model Number  : HUS1560SCSUN600G  Size          : 600G  Serial Number : 1216KLMTLN  Firmware      : A700  Slot Number   : 0  Cell Disk     : CD_00_dm01cel05  Grid Disk     : RECO_DM01_CD_00_dm01cel05, DATA_DM01_CD_00_dm01cel05"
[root@lunar tmp]# 

+++++++++++++++++++++++++++ Review the sundiag output:
After collecting sundiag, you will find a great many files for each DB node and cell node, covering RAID, HCA, InfiniBand, and so on.
For example:

[root@lunar sundiag_2013_11_21_13_33]# ls
alert.log                                         dm01cel05_megacli64-AdpAllInfo_2013_11_21_13_33.out
CmdTool.log                                       dm01cel05_megacli64-BbuCmd_2013_11_21_13_33.out
dm01cel05_alerthistory_2013_11_21_13_33.out       dm01cel05_megacli64-CfgDsply_2013_11_21_13_33.out
dm01cel05_aurasmart_2013_11_21_13_33.out          dm01cel05_megacli64-FwTermLog_2013_11_21_13_33.out
dm01cel05_cell-detail_2013_11_21_13_33.out        dm01cel05_megacli64-GetEvents-all_2013_11_21_13_33.out
dm01cel05_celldisk-detail_2013_11_21_13_33.out    dm01cel05_megacli64-LdInfo_2013_11_21_13_33.out
dm01cel05_disk_devices_2013_11_21_13_33.out       dm01cel05_megacli64-LdPdInfo_2013_11_21_13_33.out
dm01cel05_dmesg_2013_11_21_13_33.out              dm01cel05_megacli64-PdList_long_2013_11_21_13_33.out
dm01cel05_fdisk-l_2013_11_21_13_33.out            dm01cel05_megacli64-PdList_short_2013_11_21_13_33.out
dm01cel05_fdom-l_2013_11_21_13_33.out             dm01cel05_megacli64-status_2013_11_21_13_33.out
dm01cel05_flashcache-detail_2013_11_21_13_33.out  dm01cel05_physicaldisk-detail_2013_11_21_13_33.out
dm01cel05_griddisk-detail_2013_11_21_13_33.out    dm01cel05_physicaldisk-fail_2013_11_21_13_33.out
dm01cel05_imageinfo-all_2013_11_21_13_33.out      dm01cel05_scripts-aura_2013_11_21_13_33.out
dm01cel05_lspci_2013_11_21_13_33.out              dm01cel05_sel-list_2013_11_21_13_33.out
dm01cel05_lspci-xxxx_2013_11_21_13_33.out         MegaSAS.log
dm01cel05_lsscsi_2013_11_21_13_33.out             messages
dm01cel05_lun-detail_2013_11_21_13_33.out         ms-odl.trc
[root@lunar sundiag_2013_11_21_13_33]# 

For disk failures, the main things to grep for are:

grep "Failed Disks" * 
grep "Predictive Failure Count" * 
grep "warning" * 
grep "Error Count" *|grep -v "Error Count: 0"
grep "I/O error" * 
grep "Unconfigured" * 

—————– Check for failed disks:

[root@lunar sundiag_2013_11_21_13_33]# grep "Failed Disks" *
dm01cel05_megacli64-AdpAllInfo_2013_11_21_13_33.out:  Failed Disks    : 0 
dm01cel05_megacli64-status_2013_11_21_13_33.out:Failed Disks : 0
MegaSAS.log:  Failed Disks    : 0 
MegaSAS.log:  Failed Disks    : 0 
MegaSAS.log:  Failed Disks    : 0 
MegaSAS.log:  Failed Disks    : 0 
[root@lunar sundiag_2013_11_21_13_33]# 

——————— Check for disks reporting a predictive failure:

[root@lunar sundiag_2013_11_21_13_33]# grep "Predictive Failure Count" * 
dm01cel05_megacli64-CfgDsply_2013_11_21_13_33.out:Predictive Failure Count: 0

dm01cel05_megacli64-CfgDsply_2013_11_21_13_33.out:Predictive Failure Count: 0
.......................
dm01cel05_megacli64-LdPdInfo_2013_11_21_13_33.out:Predictive Failure Count: 0
dm01cel05_megacli64-LdPdInfo_2013_11_21_13_33.out:Predictive Failure Count: 0
dm01cel05_megacli64-LdPdInfo_2013_11_21_13_33.out:Predictive Failure Count: 0
dm01cel05_megacli64-PdList_long_2013_11_21_13_33.out:Predictive Failure Count: 0
dm01cel05_megacli64-PdList_long_2013_11_21_13_33.out:Predictive Failure Count: 0
.......................

dm01cel05_megacli64-PdList_long_2013_11_21_13_33.out:Predictive Failure Count: 0
dm01cel05_megacli64-PdList_long_2013_11_21_13_33.out:Predictive Failure Count: 0
MegaSAS.log:Predictive Failure Count: 0
MegaSAS.log:Predictive Failure Count: 0
.......................
MegaSAS.log:Predictive Failure Count: 0
MegaSAS.log:Predictive Failure Count: 0
[root@lunar sundiag_2013_11_21_13_33]

———- Check disks with warning alerts:

[root@lunar sundiag_2013_11_21_13_33]# grep "warning" * 
dm01cel05_alerthistory_2013_11_21_13_33.out:     61_1    2013-11-08T02:44:22+08:00       warning         "Hard disk entered confinement offline status. The LUN 0_8 changed status to warning - confinedOffline. CellDisk changed status to normal - confinedOffline. All subsequent I/Os on this disk are failed immediately. Confinement tests will be run on the disk to determine if the disk should be dropped. Status                      : WARNING - CONFINEDOFFLINE  Manufacturer                : HITACHI  Model Number                : HUS1560SCSUN600G  Size                        : 600G  Serial Number               : 1216KLN0HN  Firmware                    : A700  Slot Number                 : 8  Cell Disk                   : CD_08_dm01cel05  Grid Disk                   : RECO_DM01_CD_08_dm01cel05, DBFS_DG_CD_08_dm01cel05, DATA_DM01_CD_08_dm01cel05  Reason for confinement      : threshold for service time exceeded"
dm01cel05_cell-detail_2013_11_21_13_33.out:      notificationPolicy:     critical,warning,clear
dm01cel05_dmesg_2013_11_21_13_33.out:warning: `ntpdate' uses 32-bit capabilities (legacy support in use)
messages:Jun 22 16:58:56 dm01cel05 kernel: warning: `dbus-daemon' uses 32-bit capabilities (legacy support in use)
messages:Jun 22 17:02:03 dm01cel05 setfiles: labeling /usr/share/snmp/mib2c-data/m2c-internal-warning.m2i to system_u:object_r:usr_t:s0 
messages:Jun 22 17:02:05 dm01cel05 setfiles: labeling /usr/share/man/man3/warnings.3pm.gz to system_u:object_r:man_t:s0 
messages:Jun 22 17:02:05 dm01cel05 setfiles: labeling /usr/share/man/man3/warnings::register.3pm.gz to system_u:object_r:man_t:s0 
messages:Jun 22 17:02:17 dm01cel05 setfiles: relabeling /usr/share/swig/1.3.29/swigwarnings.swg from root:object_r:file_t:s0 to system_u:object_r:usr_t:s0 
messages:Jun 22 17:02:19 dm01cel05 setfiles: relabeling /usr/lib/perl5/5.8.8/warnings from root:object_r:file_t:s0 to system_u:object_r:lib_t:s0 
messages:Jun 22 17:02:19 dm01cel05 setfiles: labeling /usr/lib/perl5/5.8.8/warnings/register.pm to system_u:object_r:lib_t:s0 
messages:Jun 22 17:02:19 dm01cel05 setfiles: labeling /usr/lib/perl5/5.8.8/warnings.pm to system_u:object_r:lib_t:s0 
messages:Jun 22 17:02:21 dm01cel05 setfiles: labeling /usr/lib64/perl5/5.8.8/x86_64-linux-thread-multi/CORE/warnings.h to system_u:object_r:lib_t:s0 
messages:Jun 22 17:02:22 dm01cel05 setfiles: labeling /usr/lib64/python2.4/warnings.pyo to system_u:object_r:lib_t:s0 
messages:Jun 22 17:02:22 dm01cel05 setfiles: labeling /usr/lib64/python2.4/warnings.pyc to system_u:object_r:lib_t:s0 
messages:Jun 22 17:02:23 dm01cel05 setfiles: labeling /usr/lib64/python2.4/warnings.py to system_u:object_r:lib_t:s0 
messages:Jun 22 17:13:44 dm01cel05 kernel: warning: `ntpdate' uses 32-bit capabilities (legacy support in use)
.......................
messages:Nov  1 19:57:36 dm01cel05 kernel: warning: `ntpdate' uses 32-bit capabilities (legacy support in use)
ms-odl.trc:[2013-11-08T02:44:22.980+08:00] [ossmgmt] [NOTIFICATION] [] [ms.core.MSAlertHistory] [tid: 17] [ecid: 10.48.27.14:63987:1383849862863:10,0] AlertHistory 61_1 created. Severity: warning. Message: Hard disk entered confinement offline status. The LUN 0_8 changed status to warning - confinedOffline. CellDisk changed status to normal - confinedOffline. All subsequent I/Os on this disk are failed immediately. Confinement tests will be run on the disk to determine if the disk should be dropped.<exadata:br/>Status                      : WARNING - CONFINEDOFFLINE <exadata:br/>Manufacturer                : HITACHI <exadata:br/>Model Number                : HUS1560SCSUN600G <exadata:br/>Size                        : 600G <exadata:br/>Serial Number               : 1216KLN0HN <exadata:br/>Firmware                    : A700 <exadata:br/>Slot Number                 : 8 <exadata:br/>Cell Disk                   : CD_08_dm01cel05 <exadata:br/>Grid Disk                   : RECO_DM01_CD_08_dm01cel05, DBFS_DG_CD_08_dm01cel05, DATA_DM01_CD_08_dm01cel05 <exadata:br/>Reason for confinement      : threshold for service time exceeded
ms-odl.trc:[2013-11-08T02:48:03.652+08:00] [ossmgmt] [NOTIFICATION] [] [ms.hwadapter.diskadp.MSLUNImpl] [tid: 19] [ecid: 10.48.27.14:63987:1383849872786:12,0] Called reenableLun for: 0 LUN: 0_8 lunOSName: /dev/sdi phys: 20:8 slotNumber: 8 lunStatus: warning reenableForce: true
ms-odl.trc:[2013-11-08T02:48:03.859+08:00] [ossmgmt] [NOTIFICATION] [] [ms.hwadapter.diskadp.MSLUNImpl] [tid: 19] [ecid: 10.48.27.14:63987:1383849872786:12,0] LUN 0_8 is in state warning and is not a system disk. No further action is required at this time.[[
[root@lunar sundiag_2013_11_21_13_33]# 


[root@lunar sundiag_2013_11_21_13_33]# grep "Error Count" *|grep -v "Error Count: 0"
dm01cel05_megacli64-AdpAllInfo_2013_11_21_13_33.out:                Error Counters
dm01cel05_megacli64-CfgDsply_2013_11_21_13_33.out:Media Error Count: 51
dm01cel05_megacli64-LdPdInfo_2013_11_21_13_33.out:Media Error Count: 51
dm01cel05_megacli64-PdList_long_2013_11_21_13_33.out:Media Error Count: 51
MegaSAS.log:                Error Counters
MegaSAS.log:Media Error Count: 51
MegaSAS.log:Media Error Count: 51
MegaSAS.log:Media Error Count: 51
MegaSAS.log:Media Error Count: 51
MegaSAS.log:                Error Counters
MegaSAS.log:Media Error Count: 51
MegaSAS.log:                Error Counters
MegaSAS.log:                Error Counters
[root@lunar sundiag_2013_11_21_13_33]# 


[root@lunar sundiag_2013_11_21_13_33]# grep "I/O error" * 
alert.log:Redo log write error 201 (Generic I/O error) on griddisk DATA_DM01_CD_00_dm01cel05: use of Flash Log for this device has been disabled
alert.log:Redo log write error 201 (Generic I/O error) on griddisk DATA_DM01_CD_00_dm01cel05: use of Flash Log for this device has been disabled
dm01cel05_dmesg_2013_11_21_13_33.out:end_request: I/O error, dev sdi, sector 15595520
dm01cel05_dmesg_2013_11_21_13_33.out:end_request: I/O error, dev sdi, sector 15595520
dm01cel05_dmesg_2013_11_21_13_33.out:end_request: I/O error, dev sdi, sector 15595520
dm01cel05_dmesg_2013_11_21_13_33.out:end_request: I/O error, dev sdi, sector 349304832
messages:Jun 29 21:44:18 dm01cel05 kernel: end_request: I/O error, dev sda, sector 66141200
messages:Jun 29 21:44:18 dm01cel05 kernel: end_request: I/O error, dev sda, sector 66139152
messages:Jun 29 21:44:18 dm01cel05 kernel: end_request: I/O error, dev sda, sector 66137104
messages:Jun 29 21:44:19 dm01cel05 kernel: end_request: I/O error, dev sda, sector 66143248
messages:Jun 29 21:44:19 dm01cel05 kernel: end_request: I/O error, dev sda, sector 1130092206
messages:Jun 29 21:44:19 dm01cel05 kernel: end_request: I/O error, dev sda, sector 1130092206
messages:Jun 29 21:44:19 dm01cel05 kernel: end_request: I/O error, dev sda, sector 1338480
messages:Jun 29 21:44:19 dm01cel05 kernel: end_request: I/O error, dev sda, sector 134319216

[root@lunar sundiag_2013_11_21_13_33]# grep "Unconfigured" * 
dm01cel05_megacli64-AdpAllInfo_2013_11_21_13_33.out:PR Correct Unconfigured Areas           : Yes
dm01cel05_megacli64-FwTermLog_2013_11_21_13_33.out:T5:  prCorrectUnconfiguredAreas=1, useFdeOnly=1 
dm01cel05_megacli64-FwTermLog_2013_11_21_13_33.out:T5:  enableSpinDownUnconfigured=1,   disableSpinDownHS=0,    spinDownTime=1e,        autoEnhancedImport=0, enableSecretKeyControl=0, disableOnlineCtrlReset=0 
dm01cel05_megacli64-FwTermLog_2013_11_21_13_33.out:09/29/13 23:40:27:   prCorrectUnconfiguredAreas=1, useFdeOnly=1 
dm01cel05_megacli64-FwTermLog_2013_11_21_13_33.out:09/29/13 23:40:27:   enableSpinDownUnconfigured=1,   disableSpinDownHS=0,    spinDownTime=1e,   autoEnhancedImport=0, enableSecretKeyControl=0, disableOnlineCtrlReset=0 
dm01cel05_megacli64-FwTermLog_2013_11_21_13_33.out:T5:  prCorrectUnconfiguredAreas=1, useFdeOnly=1 
dm01cel05_megacli64-FwTermLog_2013_11_21_13_33.out:T5:  enableSpinDownUnconfigured=1,   disableSpinDownHS=0,    spinDownTime=1e,        autoEnhancedImport=0, enableSecretKeyControl=0, disableOnlineCtrlReset=0 
dm01cel05_megacli64-FwTermLog_2013_11_21_13_33.out:10/11/13 22:58:47:   prCorrectUnconfiguredAreas=1, useFdeOnly=1 
dm01cel05_megacli64-FwTermLog_2013_11_21_13_33.out:10/11/13 22:58:47:   enableSpinDownUnconfigured=1,   disableSpinDownHS=0,    spinDownTime=1e,   autoEnhancedImport=0, enableSecretKeyControl=0, disableOnlineCtrlReset=0 
dm01cel05_megacli64-FwTermLog_2013_11_21_13_33.out:T5:  prCorrectUnconfiguredAreas=1, useFdeOnly=1 
dm01cel05_megacli64-FwTermLog_2013_11_21_13_33.out:T5:  enableSpinDownUnconfigured=1,   disableSpinDownHS=0,    spinDownTime=1e,        autoEnhancedImport=0, enableSecretKeyControl=0, disableOnlineCtrlReset=0 
MegaSAS.log:PR Correct Unconfigured Areas           : Yes
MegaSAS.log:T5:         prCorrectUnconfiguredAreas=1, useFdeOnly=1 
MegaSAS.log:T5:         enableSpinDownUnconfigured=1,   disableSpinDownHS=0,    spinDownTime=1e,        autoEnhancedImport=0, enableSecretKeyControl=0, disableOnlineCtrlReset=0 
MegaSAS.log:09/29/13 23:40:27:  prCorrectUnconfiguredAreas=1, useFdeOnly=1 
MegaSAS.log:09/29/13 23:40:27:  enableSpinDownUnconfigured=1,   disableSpinDownHS=0,    spinDownTime=1e,        autoEnhancedImport=0, enableSecretKeyControl=0, disableOnlineCtrlReset=0 
MegaSAS.log:T5:         prCorrectUnconfiguredAreas=1, useFdeOnly=1 
MegaSAS.log:T5:         enableSpinDownUnconfigured=1,   disableSpinDownHS=0,    spinDownTime=1e,        autoEnhancedImport=0, enableSecretKeyControl=0, disableOnlineCtrlReset=0 
MegaSAS.log:10/11/13 22:58:47:  prCorrectUnconfiguredAreas=1, useFdeOnly=1 
MegaSAS.log:10/11/13 22:58:47:  enableSpinDownUnconfigured=1,   disableSpinDownHS=0,    spinDownTime=1e,        autoEnhancedImport=0, enableSecretKeyControl=0, disableOnlineCtrlReset=0 
MegaSAS.log:T5:         prCorrectUnconfiguredAreas=1, useFdeOnly=1 
MegaSAS.log:T5:         enableSpinDownUnconfigured=1,   disableSpinDownHS=0,    spinDownTime=1e,        autoEnhancedImport=0, enableSecretKeyControl=0, disableOnlineCtrlReset=0 
MegaSAS.log:PR Correct Unconfigured Areas           : Yes
MegaSAS.log:PR Correct Unconfigured Areas           : Yes
MegaSAS.log:PR Correct Unconfigured Areas           : Yes
[root@lunar sundiag_2013_11_21_13_33]# 

Check the disk error information with cellcli:

cellcli -e list physicaldisk detail|grep err

list physicaldisk where disktype=harddisk and status like ".*predictive failure.*" 
list physicaldisk where disktype=harddisk and status=critical detail
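
To sweep every cell at once instead of logging in one by one, here is a minimal sketch using dcli (it assumes a cell_group file listing all storage cells, in the style of the dcli examples elsewhere on this blog):

# sketch: check every cell for drives reporting predictive failure
dcli -g cell_group -l root 'cellcli -e list physicaldisk where status like ".*predictive failure.*"'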

Check whether the ASM alert log contains warnings like the following:
1. WARNING: failed to read mirror
2. IO Error
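
As a quick check, here is a sketch that greps the ASM alert log for both patterns (the log path is only an example; adjust it to your own diagnostic destination and instance name):

# sketch: scan the ASM alert log for the two warning patterns
# (the path below is an assumption -- substitute your own diag location)
grep -E "WARNING: failed to read mirror|IO Error" \
  /u01/app/grid/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log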


How to interpret writethrough/writeback cache mode changes and controller battery charge/discharge messages on Exadata cell nodes

Exadata uses LSI disk controllers. During routine health checks, if a cell reports messages like the following, you need to decide whether the battery needs replacing or whether you are hitting a bug:

Hardware Alert 10_1

Event Time 2011-01-17T17:26:30+02:00
Description All Logical drives are in WriteThrough caching mode. Either battery is in a learn cycle or it needs to be replaced. Please contact Oracle Support

Affected Cell 
Name sba2cel12
Chassis Serial Number 103XXXXXXX
Version OSS_MAIN_LINUX.X64_100929

Recommended Action Battery is either in a learn cycle or it needs replacement. Please contact Oracle Support

This message means the disk controller's write cache policy has changed from "write-back" to "write-through" because a battery learn cycle is in progress.
The learn cycle runs periodically, four times a year; each run performs a full discharge and charge of the controller battery.

Before image 11.2.1.3, the learn cycle ran once a month.
Starting with image 11.2.1.3, it runs once every 3 months: at 2:00 AM on the 17th of January, April, July, and October.

This default time (the time of the next learn cycle) can be changed with a command, for example: cellcli> alter cell bbuLearnCycleTime="2013-01-22T02:00:00-08:00"

Oracle recommends scheduling the battery learn cycle at the same time on all cells.
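
To keep the cells aligned, the alter cell command can be pushed to all of them in one step with dcli; a minimal sketch:

# sketch: schedule the next battery learn cycle at the same time on every cell
dcli -g cell_group -l root 'cellcli -e alter cell bbuLearnCycleTime="2013-01-22T02:00:00-08:00"'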

As is well known, write-through performs worse than write-back, but write-back risks losing data if the storage crashes or loses power.
Therefore, during a battery learn cycle the write policy is automatically switched from write-back to write-through.

If you see a message like the following in the cell alerts:

Event Time 2011-01-17T17:26:30+02:00
Description All Logical drives are in WriteThrough caching mode. Either battery is in a learn cycle or it needs to be replaced. Please contact Oracle Support

Affected Cell 
Name sba2cel12
Chassis Serial Number 103XXXXXXX
Version OSS_MAIN_LINUX.X64_100929

You need to connect to the cell node and check the battery's charge percentage:

# MegaCli64 -AdpBbuCmd -GetBbuStatus -a0




BBU status for Adapter: 0

BatteryType: iBBU08
Voltage: 3721 mV
Current: 541 mA
Temperature: 43 C

BBU Firmware Status:
Charging Status : Charging    ++++++++++++ the battery is charging here
Voltage : OK
Temperature : OK
Learn Cycle Requested : No
Learn Cycle Active : No
Learn Cycle Status : OK
Learn Cycle Timeout : No
I2c Errors Detected : No
Battery Pack Missing : No
Battery Replacement required : No
Remaining Capacity Low : Yes
Periodic Learn Required : No
Transparent Learn : No

Battery state:

GasGuageStatus:
Fully Discharged : No
Fully Charged : No
Discharging : No
Initialized : No
Remaining Time Alarm : Yes
Remaining Capacity Alarm: No
Discharge Terminated : No
Over Temperature : No
Charging Terminated : No
Over Charged : No

Relative State of Charge: 7 %
Charger System State: 1
Charger System Ctrl: 0
Charging current: 541 mA
Absolute state of charge: 0 %
Max Error: 0 %

Exit Code: 0x00
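
The full BBU report is long; when only the key fields matter, a sketch that filters the lines shown above:

# sketch: show only the charging status and charge level from the BBU report
MegaCli64 -AdpBbuCmd -GetBbuStatus -a0 | \
  grep -E 'Charging Status|Learn Cycle Active|Relative State of Charge'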

Once charging completes, the following appears in the cell alerts:

Hardware Alert 10_2

Event Time 2011-01-17T19:14:51+02:00

Description Battery is back to a good state

Affected Cell 
Name sba2cel12
Chassis Serial Number 103XXXXXXX
Version OSS_MAIN_LINUX.X64_100929

Recommended Action Battery is back to a good state. No Action Required

Connect to the cell node and check the disks' write mode (writethrough/writeback):

# MegaCli64 -LDInfo -Lall -aALL | grep 'Current Cache Policy'

Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
...

The same information appears in the cells' alert history:

[root@dm02db01 ~]# dcli -g cell_group -l root "cellcli -e list alerthistory"
dm02cel01: 15_1  2013-10-17T02:00:01+08:00       info    "The disk controller battery is executing a learn cycle and may temporarily enter WriteThrough Caching mode as part of the learn cycle. Disk write throughput might be temporarily lower during this time. The flash drives are not affected. The battery learn cycle is a normal maintenance activity that occurs quarterly and runs for approximately 1 to 12 hours.  Note that many learn cycles do not require entering WriteThrough caching mode.  When the disk controller cache returns to the normal WriteBack caching mode, an additional informational alert will be sent.  Battery Serial Number : 6198  Battery Type          : iBBU08  Battery Temperature   : 42 C  Full Charge Capacity  : 1303 mAh  Relative Charge       : 98 %  Ambient Temperature   : 25 C"


dm02cel01: 15_2  2013-10-17T07:33:21+08:00       clear   "All disk drives are in WriteBack caching mode.  Battery Serial Number : 6198  Battery Type          : iBBU08  Battery Temperature   : 47 C  Full Charge Capacity  : 1303 mAh  Relative Charge       : 53 %  Ambient Temperature   : 25 C"


........

The output above shows that the disk controller battery on cel01 started a learn cycle at 02:00 on October 17 and finished at 07:33 the same morning. After charging completed, the disk drives switched back to writeback mode.

Battery charging (the learning state) can take several hours. If the cache does not switch back to writeback mode automatically after charging completes, the controller battery may be faulty; contact Oracle Support.
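
If you want to notice the switch back as soon as it happens, here is a sketch that polls the cache policy (with the MegaCli64 command shown earlier) once a minute:

# sketch: poll until every logical drive is back in WriteBack mode
while MegaCli64 -LDInfo -Lall -aALL | grep 'Current Cache Policy' | grep -q WriteThrough; do
  date; sleep 60
done
echo "all logical drives are back in WriteBack mode"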

You can check each cell's battery learn cycle time as follows:
/usr/local/bin/dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root 'cellcli -e list cell attributes bbuLearnCycleTime'

A normal result looks like this:

root@exadb01# /usr/local/bin/dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root 'cellcli -e list cell attributes bbuLearnCycleTime'

exacel01: 2012-04-17T02:00:00-04:00
exacel02: 2012-04-17T02:00:00-04:00
exacel03: 2012-04-17T02:00:00-04:00  
exacel04: 2012-04-17T02:00:00-04:00
exacel05: 2012-04-17T02:00:00-04:00
exacel06: 2012-04-17T02:00:00-04:00
exacel07: 2012-04-17T02:00:00-04:00
exacel08: 2012-04-17T02:00:00-04:00
exacel09: 2012-04-17T02:00:00-04:00
exacel10: 2012-04-17T02:00:00-04:00
exacel11: 2012-04-17T02:00:00-04:00
exacel12: 2012-04-17T02:00:00-04:00
exacel13: 2012-04-17T02:00:00-04:00
exacel14: 2012-04-17T02:00:00-04:00

All 14 cells report the same time. Similar information can also be checked on the db nodes:

# /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -GetBbuStatus -aAll
BBU status for Adapter: 0
....
Learn Cycle Requested                      : No   
.............
GasGuageStatus:
...................
  Remaining Time Alarm    : No
  Discharge Terminated    : No
...................

# /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -BbuLearn -a0
Adapter 0: BBU Learn Succeeded.

If the times are inconsistent or the output is abnormal, for example:

root@exadb01# /usr/local/bin/dcli -g /opt/oracle.SupportTools/onecommand/cell_group -l root 'cellcli -e list cell attributes bbuLearnCycleTime'

exacel01: 2012-04-17T02:00:00+01:00
exacel02: 2012-04-17T02:00:00+01:00
exacel03: 2012-10-17T02:00:00+01:00  
exacel04: 2012-07-17T02:00:00+01:00
exacel05: 2012-04-17T02:00:00+01:00
exacel06: 2012-07-17T02:00:00+01:00
exacel07: 2012-07-17T02:00:00+01:00
exacel08: 2012-07-17T02:00:00+01:00
exacel09: 2012-07-17T02:00:00+01:00
exacel10: 2012-07-17T02:00:00+01:00
exacel11: 2012-07-17T02:00:00+01:00
exacel12: 2012-07-17T02:00:00+01:00
exacel13: 2012-07-17T02:00:00+01:00
exacel14: 2012-07-17T02:00:00+01:00

The learn cycle times above are inconsistent.
Likewise, running the following commands on the db node shows abnormal output:

# /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -GetBbuStatus -aAll
BBU status for Adapter: 0
....
  Learn Cycle Requested        : Yes   <<<<
.............
GasGuageStatus:
...................
  Initialized                  : Yes  
  Remaining Time Alarm         : Yes
...................


# /opt/MegaRAID/MegaCli/MegaCli64 -AdpBbuCmd -BbuLearn -a0
Adapter 0: BBU Learn Failed   <<<<<

Meanwhile, sundiag will contain similar information (/opt/MegaRAID/MegaCli/MegaCli64 -adpbbucmd -aALL):

Relative State of Charge: 27 % 
Absolute State of charge: 23 % 
Remaining Capacity: 356 mAh <------- 
Full Charge Capacity: 1365 mAh 

This is a battery life-cycle bug:

 Bug 15788039: SUNBT7164516 BBU LEARN CYCLES NO LONGER RUN ON NIWOT CARD.
 This will be fixed in LSI firmware 12.12.0-0147.

Workaround: reboot the cell and db nodes so the controller re-runs the learn cycle and recalculates the battery's life, preventing the battery from being drained.


How do you check whether your environment is RAC? How do you find out which options you have, and how do you enable or disable them?

A few days ago a former colleague asked me: the customer no longer wants to pay for the RAC license, what should they do?
Since they had other machines for a new environment, I said at the time: just install a single-instance home, migrate the database over, set cluster_database to false, then clean out the extra threads, undo, and redo, and you're done...

Today it suddenly occurred to me: what if the customer stops paying for the Partitioning option and wants it turned off? Or what if there is no spare machine to install another ORACLE_HOME on?
Let's dig into these cases below.
First, we can use OUI or opatch to see which options are installed (and of course you can also check the database views).

Method 1: review with OUI
./runInstaller
It has an "Installed Products" screen that lists the products you have installed.

Method 2: use OPatch

[oracle@lunar lib]$ opatch lsinventory -detail
Invoking OPatch 11.2.0.1.7

Oracle Interim Patch Installer version 11.2.0.1.7
Copyright (c) 2011, Oracle Corporation. All rights reserved.

Oracle Home : /u01/app/oracle/product/11.2.0.3/dbhome_1
Central Inventory : /u01/app/oraInventory
from : /etc/oraInst.loc
OPatch version : 11.2.0.1.7
OUI version : 11.2.0.3.0
Log file location : /u01/app/oracle/product/11.2.0.3/dbhome_1/cfgtoollogs/opatch/opatch2013-11-12_05-43-12AM.log

Lsinventory Output file location : /u01/app/oracle/product/11.2.0.3/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2013-11-12_05-43-12AM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle Database 11g 11.2.0.3.0
There are 1 products installed in this Oracle Home.

Installed Products (136):

Agent Required Support Files 10.2.0.4.3
Assistant Common Files 11.2.0.3.0
Bali Share 1.1.18.0.0
Buildtools Common Files 11.2.0.3.0
Character Set Migration Utility 11.2.0.3.0
Cluster Verification Utility Common Files 11.2.0.3.0
Database Configuration and Upgrade Assistants 11.2.0.3.0
Database SQL Scripts 11.2.0.3.0
Database Workspace Manager 11.2.0.3.0
Deinstallation Tool 11.2.0.3.0
Enterprise Edition Options 11.2.0.3.0
Enterprise Manager Agent 10.2.0.4.3
Enterprise Manager Agent Core Files 10.2.0.4.4
Enterprise Manager Common Core Files 10.2.0.4.4
Enterprise Manager Common Files 10.2.0.4.3

.....................

In Oracle, whether an option is enabled or disabled is recorded in the archive file libknlopt.a. You can therefore check whether this archive contains the corresponding object file (a ".o" file) to tell whether a given feature is enabled or disabled.
First environment: a 12.1 standalone install; check whether the ASM option is included:

[root@lunar ~]# su - grid
[grid@lunar lib]$ env|grep ORA
ORACLE_SID=+ASM
ORACLE_BASE=/u01/app/grid
ORACLE_HOME=/u01/app/12.1/grid
[grid@lunar lib]$
[grid@lunar ~]$ cd $ORACLE_HOME/rdbms/lib
[grid@lunar lib]$ ar -t libknlopt.a | grep -c kfon.o
1
[grid@lunar lib]$

As you can see, ASM is enabled here.

Second environment: an 11.2 single-instance database on a file system:

[oracle@lunar lib]$ ar -t libknlopt.a | grep -c kfon.o
0
[oracle@lunar lib]$ env|grep ORA
ORACLE_SID=bb
ORACLE_BASE=/u01/app/oracle
ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1
[oracle@lunar lib]$ pwd
/u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib
[oracle@lunar lib]$
[oracle@lunar lib]$
Clearly, this environment does not have the ASM object file in the archive.

Also, we have previously discussed how the software differs between RAC and single-instance installs. As you can see here, when RAC is enabled the archive contains kcsm.o, and when the RAC option is disabled the stub ksnkcs.o is used instead.
Mine is a 12.1 standalone environment, so naturally the ASM option is installed while RAC is not:

[grid@lunar lib]$ ar -t libknlopt.a | grep -c kcsm.o    ----- the RAC option is not enabled (no kcsm.o)
0
[grid@lunar lib]$ ar -t libknlopt.a | grep -c ksnkcs.o  ----- the RAC-disabled stub ksnkcs.o is present
1
[grid@lunar lib]$

Here is a list of the object files corresponding to some common options:

Oracle Partitioning          kkpoban.o
Database Vault               kzvidv.o
ASM                          kfon.o
Real Application Clusters    kcsm.o
RAT                          kecwr.o
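
Putting this table to work, here is a small sketch that reports each option's status in one pass (run from $ORACLE_HOME/rdbms/lib; the option/object-file pairs come straight from the list above):

# sketch: report which options are linked into libknlopt.a
cd $ORACLE_HOME/rdbms/lib
for pair in "Partitioning:kkpoban.o" "Database Vault:kzvidv.o" \
            "ASM:kfon.o" "RAC:kcsm.o" "RAT:kecwr.o"; do
  opt=${pair%%:*}; obj=${pair##*:}
  if ar -t libknlopt.a | grep -qx "$obj"; then
    echo "$opt : enabled"
  else
    echo "$opt : disabled"
  fi
done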

This only tells you whether an option is linked in. To check whether RAC is actually running, it is simple: just look for the LMON process: ps -ef|grep lmon

If some options are installed but, say for license renewal reasons, you need to disable them, there are two methods:
Method 1:

+++++++++ Enable the partitioning option:
$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk part_on ioracle

+++++++++ Disable the partitioning option:
$ cd $ORACLE_HOME/rdbms/lib
$ make -f ins_rdbms.mk part_off ioracle

+++++++ Enable RAC:
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk rac_on ioracle

+++++++ Disable RAC:
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk rac_off ioracle

Method 2: starting with 11.2, this can be done with the chopt command, for example:
+++++++++ Enable the partitioning option:
$ chopt enable partitioning

+++++++++ Disable the partitioning option:
$ chopt disable partitioning

First shut down the database, then disable the partitioning option:

[oracle@lunar lsinv]$ cd $ORACLE_HOME/rdbms/lib
[oracle@lunar lib]$ ar -t libknlopt.a | grep -c kkpoban.o
1
[oracle@lunar lib]$
[oracle@lunar lib]$ chopt disable partitioning

Writing to /u01/app/oracle/product/11.2.0.3/dbhome_1/install/disable_partitioning.log…
/usr/bin/make -f /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/ins_rdbms.mk part_off ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1
/usr/bin/make -f /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/ins_rdbms.mk ioracle ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1

[oracle@lunar lib]$ ar -t libknlopt.a | grep -c kkpoban.o
0 +++++++++++ as shown here, the partitioning option is gone
[oracle@lunar lib]$

This works because chopt uses ar's d option to delete the partitioning object file kkpoban.o from the libknlopt.a archive (and ar cr to add the stub ksnkkpo.o in its place), then relinks the oracle binary. The full log:

[oracle@lunar ~]$ tail -f /u01/app/oracle/product/11.2.0.3/dbhome_1/install/disable_partitioning.log
/usr/bin/make -f /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/ins_rdbms.mk part_off ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1
/usr/bin/ar d /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/libknlopt.a kkpoban.o
/usr/bin/ar cr /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/libknlopt.a /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/ksnkkpo.o
/usr/bin/make -f /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/ins_rdbms.mk ioracle ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1
chmod 755 /u01/app/oracle/product/11.2.0.3/dbhome_1/bin

– Linking Oracle
rm -f /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/oracle
gcc -o /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/oracle -m64 -L/u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/ -L/u01/app/oracle/product/11.2.0.3/dbhome_1/lib/ -L/u01/app/oracle/product/11.2.0.3/dbhome_1/lib/stubs/ -Wl,-E /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/opimai.o /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/ssoraed.o /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/ttcsoi.o -Wl,–whole-archive -lperfsrv11 -Wl,–no-whole-archive /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/nautab.o /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/naeet.o /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/naect.o /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/naedhs.o /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/config.o -lserver11 -lodm11 -lcell11 -lnnet11 -lskgxp11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -lclient11 -lvsn11 -lcommon11 -lgeneric11 -lknlopt `if /usr/bin/ar tv /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/libknlopt.a | grep xsyeolap.o > /dev/null 2>&1 ; then echo “-loraolap11” ; fi` -lslax11 -lpls11 -lrt -lplp11 -lserver11 -lclient11 -lvsn11 -lcommon11 -lgeneric11 `if [ -f /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/libavserver11.a ] ; then echo “-lavserver11” ; else echo “-lavstub11”; fi` `if [ -f /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/libavclient11.a ] ; then echo “-lavclient11” ; fi` -lknlopt -lslax11 -lpls11 -lrt -lplp11 -ljavavm11 -lserver11 -lwwg `cat /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/ldflags` -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnro11 `cat /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/ldflags` -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnnz11 -lzt11 -lmm -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -lztkg11 `cat /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/ldflags` -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnro11 `cat /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/ldflags` -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnnz11 -lzt11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 `if /usr/bin/ar tv /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/libknlopt.a | grep “kxmnsd.o” > /dev/null 2>&1 ; then echo ” ” ; else echo “-lordsdo11”; fi` -L/u01/app/oracle/product/11.2.0.3/dbhome_1/ctx/lib/ -lctxc11 -lctx11 -lzx11 -lgx11 -lctx11 -lzx11 -lgx11 -lordimt11 -lclsra11 -ldbcfg11 -lhasgen11 -lskgxn2 -lnnz11 -lzt11 -lxml11 -locr11 -locrb11 -locrutl11 -lhasgen11 -lskgxn2 -lnnz11 -lzt11 -lxml11 -loraz -llzopro -lorabz2 -lipp_z -lipp_bz2 -lippdcemerged -lippsemerged -lippdcmerged -lippsmerged -lippcore -lippcpemerged -lippcpmerged -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -lsnls11 -lunls11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -lasmclnt11 -lcommon11 -lcore11 -laio `cat /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/sysliblist` -Wl,-rpath,/u01/app/oracle/product/11.2.0.3/dbhome_1/lib -lm `cat /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/sysliblist` -ldl -lm -L/u01/app/oracle/product/11.2.0.3/dbhome_1/lib
test ! -f /u01/app/oracle/product/11.2.0.3/dbhome_1/bin/oracle ||\
mv -f /u01/app/oracle/product/11.2.0.3/dbhome_1/bin/oracle /u01/app/oracle/product/11.2.0.3/dbhome_1/bin/oracleO
mv /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/oracle /u01/app/oracle/product/11.2.0.3/dbhome_1/bin/oracle
chmod 6751 /u01/app/oracle/product/11.2.0.3/dbhome_1/bin/oracle

Now let's switch it back:

[oracle@lunar lib]$ time chopt enable partitioning

Writing to /u01/app/oracle/product/11.2.0.3/dbhome_1/install/enable_partitioning.log…
/usr/bin/make -f /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/ins_rdbms.mk part_on ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1
/usr/bin/make -f /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/ins_rdbms.mk ioracle ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1

real 2m31.104s
user 0m7.656s
sys 0m9.109s
[oracle@lunar lib]$
[oracle@lunar lib]$ cat /u01/app/oracle/product/11.2.0.3/dbhome_1/install/enable_partitioning.log
/usr/bin/make -f /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/ins_rdbms.mk part_on ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1
/usr/bin/ar d /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/libknlopt.a ksnkkpo.o
/usr/bin/ar cr /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/libknlopt.a /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/kkpoban.o
/usr/bin/make -f /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/ins_rdbms.mk ioracle ORACLE_HOME=/u01/app/oracle/product/11.2.0.3/dbhome_1
chmod 755 /u01/app/oracle/product/11.2.0.3/dbhome_1/bin

– Linking Oracle
rm -f /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/oracle
gcc -o /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/oracle -m64 -L/u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/ -L/u01/app/oracle/product/11.2.0.3/dbhome_1/lib/ -L/u01/app/oracle/product/11.2.0.3/dbhome_1/lib/stubs/ -Wl,-E /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/opimai.o /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/ssoraed.o /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/ttcsoi.o -Wl,–whole-archive -lperfsrv11 -Wl,–no-whole-archive /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/nautab.o /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/naeet.o /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/naect.o /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/naedhs.o /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/config.o -lserver11 -lodm11 -lcell11 -lnnet11 -lskgxp11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -lclient11 -lvsn11 -lcommon11 -lgeneric11 -lknlopt `if /usr/bin/ar tv /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/libknlopt.a | grep xsyeolap.o > /dev/null 2>&1 ; then echo “-loraolap11” ; fi` -lslax11 -lpls11 -lrt -lplp11 -lserver11 -lclient11 -lvsn11 -lcommon11 -lgeneric11 `if [ -f /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/libavserver11.a ] ; then echo “-lavserver11” ; else echo “-lavstub11”; fi` `if [ -f /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/libavclient11.a ] ; then echo “-lavclient11” ; fi` -lknlopt -lslax11 -lpls11 -lrt -lplp11 -ljavavm11 -lserver11 -lwwg `cat /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/ldflags` -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnro11 `cat /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/ldflags` -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnnz11 -lzt11 -lmm -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -lztkg11 `cat /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/ldflags` -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnro11 `cat /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/ldflags` -lncrypt11 -lnsgr11 -lnzjs11 -ln11 -lnl11 -lnnz11 -lzt11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 `if /usr/bin/ar tv /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/libknlopt.a | grep “kxmnsd.o” > /dev/null 2>&1 ; then echo ” ” ; else echo “-lordsdo11”; fi` -L/u01/app/oracle/product/11.2.0.3/dbhome_1/ctx/lib/ -lctxc11 -lctx11 -lzx11 -lgx11 -lctx11 -lzx11 -lgx11 -lordimt11 -lclsra11 -ldbcfg11 -lhasgen11 -lskgxn2 -lnnz11 -lzt11 -lxml11 -locr11 -locrb11 -locrutl11 -lhasgen11 -lskgxn2 -lnnz11 -lzt11 -lxml11 -loraz -llzopro -lorabz2 -lipp_z -lipp_bz2 -lippdcemerged -lippsemerged -lippdcmerged -lippsmerged -lippcore -lippcpemerged -lippcpmerged -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -lsnls11 -lunls11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lcore11 -lsnls11 -lnls11 -lxml11 -lcore11 -lunls11 -lsnls11 -lnls11 -lcore11 -lnls11 -lasmclnt11 -lcommon11 -lcore11 -laio `cat /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/sysliblist` -Wl,-rpath,/u01/app/oracle/product/11.2.0.3/dbhome_1/lib -lm `cat /u01/app/oracle/product/11.2.0.3/dbhome_1/lib/sysliblist` -ldl -lm -L/u01/app/oracle/product/11.2.0.3/dbhome_1/lib
test ! -f /u01/app/oracle/product/11.2.0.3/dbhome_1/bin/oracle ||\
mv -f /u01/app/oracle/product/11.2.0.3/dbhome_1/bin/oracle /u01/app/oracle/product/11.2.0.3/dbhome_1/bin/oracleO
mv /u01/app/oracle/product/11.2.0.3/dbhome_1/rdbms/lib/oracle /u01/app/oracle/product/11.2.0.3/dbhome_1/bin/oracle
chmod 6751 /u01/app/oracle/product/11.2.0.3/dbhome_1/bin/oracle
[oracle@lunar lib]$

Note the last line. People are forever confused about the oracle binary's execute permissions: it is 6751. Now you'll remember it, haha~
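
As a quick self-check: 6751 in octal is setuid + setgid plus rwx/r-x/--x, which ls displays as -rwsr-s--x:

# sketch: verify the oracle binary carries the expected permissions
# (the group may be oinstall or asmadmin depending on the install)
ls -l $ORACLE_HOME/bin/oracle    # expect: -rwsr-s--x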


An inexplicable ORA-12537

Today I suddenly wanted to look at a little 12c feature and ran into ORA-12537.
This VM was set up awkwardly from the start, and some recent tinkering made it worse. The main issues:
1. The initial disk was sized too small, only 12G; after installing grid there was barely room for the oracle software. The grid disk is mounted at /u01.
2. I then added another disk but did not learn my lesson: again only 12G. Still, 12c fit on it and was usable. The disk holding the oracle software is mounted at /u01/app/oracle.
3. A while ago, short on space, I tarred the software over from an 11.2 VM as root. After extraction, ORACLE_BASE and ORACLE_HOME are /u01/app/oracle (the same ORACLE_BASE as the 12c software) and /u01/app/oracle/product/11.2.0.3/dbhome_1.
Messy enough, right? Haha~


[oracle@lunar admin]$ sqlplus sys/oracle@lunarbb as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Mon Nov 11 20:57:10 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

ERROR:
ORA-12537: TNS:connection closed


Enter user-name: ^C
[oracle@lunar admin]$ 

Check listener.log:


<msg time='2013-11-11T21:02:56.404+08:00' org_id='oracle' comp_id='tnslsnr'
 type='UNKNOWN' level='16' host_id='lunar'
 host_addr='127.0.0.1'>
 <txt>11-NOV-2013 21:02:56 * service_update * lunarbb * 0
 </txt>
</msg>
<msg time='2013-11-11T21:02:56.672+08:00' org_id='oracle' comp_id='tnslsnr'
 type='UNKNOWN' level='16' host_id='lunar'
 host_addr='127.0.0.1'>
 <txt>11-NOV-2013 21:02:56 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=lunarbb)(CID=(PROGRAM=sqlplus@lunar)(HOST=lunar)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.56.129)(PORT=50873)) * establish * lunarbb * 12518
 </txt>
</msg>
<msg time='2013-11-11T21:02:56.787+08:00' org_id='oracle' comp_id='tnslsnr'
 type='UNKNOWN' level='16' host_id='lunar'
 host_addr='127.0.0.1'>
 <txt>TNS-12518: TNS:listener could not hand off client connection
 TNS-12547: TNS:lost contact
  TNS-12560: TNS:protocol adapter error
   TNS-00517: Lost contact
    Linux Error: 32: Broken pipe
 </txt>
</msg>
<msg time='2013-11-11T21:03:01.752+08:00' org_id='oracle' comp_id='tnslsnr'
 type='UNKNOWN' level='16' host_id='lunar'
 host_addr='127.0.0.1'>
 <txt>11-NOV-2013 21:03:01 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=lunar)(USER=grid))(COMMAND=status)(ARGUMENTS=64)(SERVICE=LISTENER)(VERSION=202375424)) * status * 0
 </txt>
</msg>

The error is TNS-12518: TNS:listener could not hand off client connection.
So I googled, searched MOS, set a pile of miscellaneous parameters, and enabled tracing:


[oracle@lunar admin]$ vi sqlnet.ora 
NAMES.DIRECTORY_PATH= (TNSNAMES, EZCONNECT)
SQLNET.ALLOWED_LOGON_VERSION = 10
SQLNET.INBOUND_CONNECT_TIMEOUT=180
SQLNET_ALLOWED_LOGON_VERSIONS=(12,11,10,9,8,7)
#tcp.validnode_checking=false
DIAG_ADR_ENABLED=on
ADR_BASE=/u01/app/oracle/product/12.1/dbhome_1/network/trace
TRACE_LEVEL_CLIENT=16
TRACE_LEVEL_SERVER=16

The last problematic messages in the trace are below; it looks like some file cannot be found, or a permission problem:


[11-NOV-2013 21:30:30:931] nrigbni: entry
[11-NOV-2013 21:30:30:931] nrigbni: Unable to get data from navigation file tnsnav.ora
[11-NOV-2013 21:30:30:931] nrigbni: exit
[11-NOV-2013 21:30:30:931] nrigbi: exit
[11-NOV-2013 21:30:30:931] nigini: exit
[11-NOV-2013 21:30:30:931] nszgwop: entry
[11-NOV-2013 21:30:30:931] nszgwop: SQLNET.WALLET_OVERRIDE not found, using default.
[11-NOV-2013 21:30:30:931] nszgwop: exit

Tracing with strace sqlplus sys/oracle@lunarbb as sysdba turned up the following suspicious output;
it looks like an error occurred while something was being written:


lseek(11, 248320, SEEK_SET)             = 248320
read(11, "\r\0\3710\0\0V\0\3720\0\0k\0\3730\0\0\207\0\3740\0\0\245\0\3750\0\0\314\0"..., 512) = 512
close(11)                               = 0
lseek(4, 5120, SEEK_SET)                = 5120
read(4, "\r\0\351\0\0\0V\0\352\0\0\0\220\0\353\0\0\0\240\0\356\0\0\0\320\0\357\0\0\0\344\0"..., 512) = 512
write(1, "ERROR:\n", 7ERROR:
)                 = 7
write(1, "ORA-12537: TNS:connection closed"..., 33ORA-12537: TNS:connection closed
) = 33
write(1, "\n", 1
)                       = 1
write(1, "\n", 1
)                       = 1
write(1, "Enter user-name: ", 17Enter user-name: )       = 17
fstat(0, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 2), ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f922d7c5000
read(0, 

[root@lunar ~]# cd /proc/1/fd
[root@lunar fd]# ll
total 0
lrwx------ 1 root root 64 Nov 11 22:21 10 -> /dev/initctl
[root@lunar fd]# 

I checked MOS: Troubleshooting ORA-12537 / TNS-12537 TNS:Connection Closed (Doc ID 555609.1).
It turned out this file of mine had no obvious problem; the permissions all looked right:


[oracle@lunar ~]$ ll $ORACLE_HOME/bin/oracle
-rwxr-s--x 1 oracle asmadmin 293719944 Jun 16 11:43 /u01/app/oracle/product/12.1/dbhome_1/bin/oracle
[oracle@lunar ~]$

So I fiddled with it blindly:
[root@lunar ~]# chown oracle:oinstall /u01/app/oracle/product/12.1/dbhome_1/bin/oracle
[root@lunar ~]#

Restarting Oracle still didn't help:
[oracle@lunar ~]$ ss

SQL*Plus: Release 12.1.0.1.0 Production on Mon Nov 11 22:45:11 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected.
SYS@lunarbb>shutdown abort
ORACLE instance shut down.
SYS@lunarbb>exit
Disconnected
[oracle@lunar ~]$ ss

SQL*Plus: Release 12.1.0.1.0 Production on Mon Nov 11 22:45:25 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SYS@lunarbb>startup
ORACLE instance started.

Total System Global Area  346562560 bytes
Fixed Size                  2288240 bytes
Variable Size             297797008 bytes
Database Buffers           37748736 bytes
Redo Buffers                8728576 bytes
Database mounted.
Database opened.
SYS@lunarbb>conn sys/oracle@lunarbb as sysdba
ERROR:
ORA-12537: TNS:connection closed


Warning: You are no longer connected to ORACLE.
SYS@lunarbb>

At this point I saw that the permissions I had just changed on the oracle binary were wrong, so I changed them back:


[root@lunar ~]# chown oracle:asmadmin /u01/app/oracle/product/12.1/dbhome_1/bin/oracle
[root@lunar ~]# ll /u01/app/oracle/product/12.1/dbhome_1/bin/oracle
-rwxr-x--x 1 oracle asmadmin 293719944 Jun 16 11:43 /u01/app/oracle/product/12.1/dbhome_1/bin/oracle
[root@lunar ~]#
[root@lunar ~]# chmod 6751 /u01/app/oracle/product/12.1/dbhome_1/bin/oracle
[root@lunar ~]# ll /u01/app/oracle/product/12.1/dbhome_1/bin/oracle
-rwsr-s--x 1 oracle asmadmin 293719944 Jun 16 11:43 /u01/app/oracle/product/12.1/dbhome_1/bin/oracle
[root@lunar ~]# 

Restarted Oracle and tested again; to my surprise it worked, haha~:


[oracle@lunar ~]$ ss

SQL*Plus: Release 12.1.0.1.0 Production on Mon Nov 11 22:47:21 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected.
SYS@lunarbb>shutdown abort
ORACLE instance shut down.
SYS@lunarbb>exit   
Disconnected
[oracle@lunar ~]$ ss

SQL*Plus: Release 12.1.0.1.0 Production on Mon Nov 11 22:47:35 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to an idle instance.

SYS@lunarbb>startup
ORACLE instance started.

Total System Global Area  346562560 bytes
Fixed Size                  2288240 bytes
Variable Size             297797008 bytes
Database Buffers           37748736 bytes
Redo Buffers                8728576 bytes
Database mounted.
Database opened.
SYS@lunarbb>conn sys/oracle@lunarbb as sysdba
Connected.
SYS@lunarbb>


Assigning RAC node numbers at will

Node eviction rules depend in part on the node number: by default the node with the smaller number is kept and the one with the larger number is evicted (there are many other factors, such as sub-cluster grouping, not covered here).
Someone in a chat group recently mentioned a requirement to change node numbers, and today I finally tried it. Conclusions:
1. You can use ocrpatch to assign any node number to any node.
2. If not specified, the node you install from becomes node 1 and the rest are numbered sequentially.

First, record the current OCR and voting disk information:

[root@RAC1 ~]# olsnodes -s -t -n
rac1    1       Active  Unpinned
rac2    2       Active  Unpinned
[root@RAC1 ~]# 

[root@RAC1 ~]# ocrconfig -showbackup

rac1     2013/11/01 19:37:35     /u01/11.2.0/grid/cdata/racdb/backup00.ocr

rac1     2013/11/01 15:37:33     /u01/11.2.0/grid/cdata/racdb/backup01.ocr

rac1     2013/11/01 15:37:33     /u01/11.2.0/grid/cdata/racdb/day.ocr

rac1     2013/11/01 15:37:33     /u01/11.2.0/grid/cdata/racdb/week.ocr
PROT-25: Manual backups for the Oracle Cluster Registry are not available
[root@RAC1 ~]# 
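
PROT-25 above shows that no manual backup exists yet; before patching the OCR later in this post it is wise to take one. A sketch using the standard ocrconfig options:

# sketch: take and list a manual OCR backup before making any changes
ocrconfig -manualbackup
ocrconfig -showbackup manual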


Here we can see that node rac1 has node number 1 and rac2 has node number 2...
I plan to change this so that rac1 gets node number 2 and rac2 gets node number 1.


[root@RAC1 ~]#  crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   983f5a2d804d4f81bfddc68e5bcf6e65 (/dev/asm-diskc) [DATA]
Located 1 voting disk(s).
[root@RAC1 ~]# 

Open ocrpatch in read-only mode:


[root@RAC1 tmp]# ./ocrpatch.bin
OCR/OLR Patch Tool Version 11.2 (20100325)
Oracle Clusterware Release 11.2.0.3.0
Copyright (c) 2005, 2013, Oracle.  All rights reserved.

Initialization - please wait

[WARN] local clusterware stack is running

[INFO] OCR checked out OK

DISCLAIMER: 
  * USE OCRPATCH AT YOUR OWN RISK.
  * TAKE OCR BACKUP BEFORE MAKING ANY CHANGES.
  * ENSURE THAT ORACLE CLUSTERWARE IS NOT RUNNING ON ANY CLUSTER NODE
  * FAILURE TO PATCH OCR PROPERLY MAY CAUSE FAILURE TO START THE
    CLUSTERWARE STACK.

OCR device information:
 open mode ......... : READ-ONLY
  device 0 ......... : OPEN, +DATA
  device 1 ......... : NOT_CONFIGURED, N/A
  device 2 ......... : NOT_CONFIGURED, N/A
  device 3 ......... : NOT_CONFIGURED, N/A
  device 4 ......... : NOT_CONFIGURED, N/A
 selected device(s)  : ANY

[INFO] operating in READ-ONLY mode
[INFO] Certain functionality is disabled in this mode
ocrpatch> 

OK, now let's make the change.
Open two more sessions and stop CRS on node 1 and node 2 respectively:


[root@RAC1 ~]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER_DG.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.racdb.db' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER_DG.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.r-dg1-vip.vip' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac1'
CRS-2677: Stop of 'ora.r-dg1-vip.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.r-dg1-vip.vip' on 'rac2'
CRS-2677: Stop of 'ora.rac1.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.rac1.vip' on 'rac2'
CRS-2677: Stop of 'ora.racdb.db' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ASMDATA.dg' on 'rac1'
CRS-2676: Start of 'ora.r-dg1-vip.vip' on 'rac2' succeeded
CRS-2676: Start of 'ora.rac1.vip' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ASMDATA.dg' on 'rac1' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'rac1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac1'
CRS-2673: Attempting to stop 'ora.net2.network' on 'rac1'
CRS-2677: Stop of 'ora.net2.network' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ons' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac1'
CRS-2677: Stop of 'ora.net1.network' on 'rac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded


Note that node 1 appears to hang here,
while node 2 has already completed a clean shutdown.

Then I remembered the ocrpatch session was still open; a few seconds after quitting it, the shutdown continued:


ocrpatch> quit
[OK] Exiting due to user request ...
[root@RAC1 tmp]# 


CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac1'
CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@RAC1 ~]# 
[root@RAC1 ~]# 


Start the clusterware on node 1 in exclusive mode:


[root@RAC1 tmp]# crsctl start crs -excl -nocrs
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
[root@RAC1 tmp]# 
[root@RAC1 tmp]# crsctl status res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       rac1                     Started             
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       rac1                                         
ora.crf
      1        OFFLINE OFFLINE                                                   
ora.crsd
      1        OFFLINE OFFLINE                                                   
ora.cssd
      1        ONLINE  ONLINE       rac1                                         
ora.cssdmonitor
      1        ONLINE  ONLINE       rac1                                         
ora.ctssd
      1        ONLINE  ONLINE       rac1                     ACTIVE:0            
ora.diskmon
      1        OFFLINE OFFLINE                                                   
ora.evmd
      1        OFFLINE OFFLINE                                                   
ora.gipcd
      1        ONLINE  ONLINE       rac1                                         
ora.gpnpd
      1        ONLINE  ONLINE       rac1                                         
ora.mdnsd
      1        ONLINE  ONLINE       rac1                                         
[root@RAC1 tmp]# 


Put the voting disk on the file system:


[root@RAC1 tmp]# crsctl replace votedisk /tmp/vote.dbf
Now formatting voting disk: /tmp/vote.dbf.
CRS-4256: Updating the profile
Successful addition of voting disk 8653b0664b074fdebf5f64b7e7ad539b.
Successful deletion of voting disk 983f5a2d804d4f81bfddc68e5bcf6e65.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
[root@RAC1 tmp]# 


Stop CRS:
[root@RAC1 tmp]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@RAC1 tmp]# ps -ef|grep d.bin
grid     18092     1  0 20:15 ?        00:00:01 /u01/11.2.0/grid/bin/oraagent.bin
root     18125     1  0 20:15 ?        00:00:00 /u01/11.2.0/grid/bin/cssdmonitor
root     18146     1  0 20:15 ?        00:00:00 /u01/11.2.0/grid/bin/cssdagent
root     18208     1  1 20:16 ?        00:00:01 /u01/11.2.0/grid/bin/orarootagent.bin
root     18498 15161  0 20:18 pts/3    00:00:00 grep d.bin
[root@RAC1 tmp]# 


[root@RAC1 tmp]# ls -lrt /tmp/vote.dbf
-rw-r----- 1 grid oinstall 21004800 Nov  1 20:18 /tmp/vote.dbf
[root@RAC1 tmp]# ls -lrt /home/grid/vote.bak 
-rw-r----- 1 root root 21004800 Nov  1 20:19 /home/grid/vote.bak
[root@RAC1 tmp]# 


Open in read-write mode (note that the -l flag here opens the OLR, where this node's local keys live):


[root@RAC1 tmp]# ./ocrpatch -l -u
OCR/OLR Patch Tool Version 11.2 (20100325)
Oracle Clusterware Release 11.2.0.3.0
Copyright (c) 2005, 2013, Oracle.  All rights reserved.

Initialization - please wait

[INFO] OLR checked out OK

DISCLAIMER: 
  * USE OCRPATCH AT YOUR OWN RISK.
  * TAKE OLR BACKUP BEFORE MAKING ANY CHANGES.
  * ENSURE THAT ORACLE CLUSTERWARE IS NOT RUNNING ON ANY CLUSTER NODE
  * FAILURE TO PATCH OLR PROPERLY MAY CAUSE FAILURE TO START THE
    CLUSTERWARE STACK.

OLR device information:
 open mode ......... : READ-WRITE
  device 0 ......... : OPEN, /u01/11.2.0/grid/cdata/rac1.olr

ocrpatch> 

A look at the main functionality:
ocrpatch> h

Usage: ocrpatch [-u] [-l] || [-v] || [-b <backupfile>]
 ocrpatch                   open OCR in read-only mode
 ocrpatch -u                         in read-write mode
 ocrpatch -l                open OLR in read-only mode
 ocrpatch -u -l                      in read-write mode
 ocrpatch -b <backupfile>   open OCR backup file in read-only mode
 ocrpatch -v                show ocrpatch version information

KEY operations
  gv <key>                      get key value
  ek <key>                      enumerate subkeys for key
  gks <key>                     get key security attributes
  sv <key> <dtype> <value>      set key value
     datatype: (u)b4|(o)ratext|
               (b)ytestream|(ubi)g_ora
  ck <key>.<subkey>                     create key.subkey
  ckv <key>.<subkey> <dtype> <val>      create key.subkey + setval
     datatype: (u)b4|(o)ratext|
               (b)ytestream|(ubi)g_ora
  mk <srckey> <tgtkey>          move srckey to tgtkey.subkey
  dv <key>                      delete key value
  dk <key>                      delete key
  dkr <key>                     delete key and all subkeys
  sku <key> <username> [<group>]        set key username[+group]
  skp <key> <realm> <permission>        set key permissions
      realm: (u)ser|(g)roup|(o)ther 
      perm:  (n)one|(q)uery_key|(e)numerate_sub_keys|
             (r)ead|(s)et_key|(create_l)ink|(create_s)ub_key|
             (d)elete_key|(a)ll_access
  sb                            start/init batch
  eb                            execute batch
  tb                            terminate batch
BLOCK operations
  rb <block#>|<key>             read block by block# / key name
  dn                            display native block from offset
  di                            display interpreted block
  du                            display 4k block, native mode
  of <offset>                   set offset in block, range 0-4095
  fs <string>                   find pattern
  ms <string>                   modify buffer at block/offset
  wb                            write modified block
  set <parameter>               set parameters
      parameter: (m)ode    switch between HEX and CHAR
                 (f)ind    switch between FORWARD and BACKWARD
MISC operations
  setenv <envvar> <value>       set environment variable value
  unsetenv <envvar>             unset environment variable value
  getenv <envvar>               get environment variable value
  spool on|off                  set SPOOL on|off
  ocrdmp                        dump all OLR context
  i                             show parameters, version, info
  h                             this help screen
  exit / quit                   exit ocrpatch

All commands support spooling the output to trace file when spool is ON.
Commands that attempt or perform a modification are always logged.

ocrpatch> 


SYSTEM.css.nodenum_hint holds a node's "preferred" node number. This is node 1, and we can see the key is set to 1; now let's set it to 2 and see what happens:


ocrpatch> gv SYSTEM.css.nodenum_hint
[OK] Read key <SYSTEM.css.nodenum_hint>, 
     type=1 (UB4), size=4, value=1
ocrpatch> 

ocrpatch> sv SYSTEM.css.nodenum_hint u 2
[OK] Read key <SYSTEM.css.nodenum_hint>, 
     type=1 (UB4), size=4, value=1
[OK] Deleted value for key <SYSTEM.css.nodenum_hint>
[OK] Set value for key <SYSTEM.css.nodenum_hint>
[OK] Read key <SYSTEM.css.nodenum_hint>, 
     type=1 (UB4), size=4, value=2
ocrpatch> 
ocrpatch> gv SYSTEM.css.nodenum_hint
[OK] Read key <SYSTEM.css.nodenum_hint>, 
     type=1 (UB4), size=4, value=2
ocrpatch> 


The change succeeded.

ocrpatch> exit
[OK] Exiting due to user request ...
[root@RAC1 tmp]#

Now start CRS in exclusive mode:


[root@RAC1 tmp]# crsctl start crs -excl -nocrs
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
[root@RAC1 tmp]# 
[root@RAC1 tmp]# ps -ef|grep d.bin
root     18804     1  3 20:32 ?        00:00:02 /u01/11.2.0/grid/bin/ohasd.bin exclusive
grid     18918     1  0 20:32 ?        00:00:00 /u01/11.2.0/grid/bin/oraagent.bin
grid     18930     1  0 20:32 ?        00:00:00 /u01/11.2.0/grid/bin/mdnsd.bin
grid     18940     1  0 20:32 ?        00:00:00 /u01/11.2.0/grid/bin/gpnpd.bin
root     18951     1  0 20:32 ?        00:00:00 /u01/11.2.0/grid/bin/cssdmonitor
grid     18953     1  0 20:32 ?        00:00:00 /u01/11.2.0/grid/bin/gipcd.bin
root     18974     1  0 20:32 ?        00:00:00 /u01/11.2.0/grid/bin/cssdagent
grid     18998     1  1 20:32 ?        00:00:00 /u01/11.2.0/grid/bin/ocssd.bin -X
root     19029     1  2 20:33 ?        00:00:00 /u01/11.2.0/grid/bin/orarootagent.bin
root     19042     1  0 20:33 ?        00:00:00 /u01/11.2.0/grid/bin/octssd.bin
root     19185 15161  0 20:33 pts/3    00:00:00 grep d.bin
[root@RAC1 tmp]#


Check the status; everything is normal:


[root@RAC1 tmp]# crsctl status res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       rac1                     Started             
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       rac1                                         
ora.crf
      1        OFFLINE OFFLINE                                                   
ora.crsd
      1        OFFLINE OFFLINE                                                   
ora.cssd
      1        ONLINE  ONLINE       rac1                                         
ora.cssdmonitor
      1        ONLINE  ONLINE       rac1                                         
ora.ctssd
      1        ONLINE  ONLINE       rac1                     ACTIVE:0            
ora.diskmon
      1        OFFLINE OFFLINE                                                   
ora.evmd
      1        OFFLINE OFFLINE                                                   
ora.gipcd
      1        ONLINE  ONLINE       rac1                                         
ora.gpnpd
      1        ONLINE  ONLINE       rac1                                         
ora.mdnsd
      1        ONLINE  ONLINE       rac1                                         
[root@RAC1 tmp]# 


Reinitialize the voting disk (move it back into +DATA):


[root@RAC1 tmp]# crsctl replace votedisk +DATA
CRS-4256: Updating the profile
Successful addition of voting disk 34a63e23814e4fb9bf54732330e6e2c5.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
[root@RAC1 tmp]# 


Then restart CRS:


[root@RAC1 tmp]# crsctl stop crs -f 
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@RAC1 tmp]# 
[root@RAC1 tmp]# crsctl start crs 
CRS-4123: Oracle High Availability Services has been started.
[root@RAC1 tmp]# 


The voting disk reconfiguration messages look like this:


2013-11-01 20:32:51.769
[cssd(18998)]CRS-1713:CSSD daemon is started in exclusive mode
2013-11-01 20:32:53.684
[ohasd(18804)]CRS-2767:Resource state recovery not attempted for 'ora.diskmon' as its target state is OFFLINE
2013-11-01 20:32:56.417
[cssd(18998)]CRS-1709:Lease acquisition failed for node rac1 because no voting file has been configured; Details at (:CSSNM00031:) in /u01/11.2.0/grid/log/rac1/cssd/ocssd.log
2013-11-01 20:33:05.463
[cssd(18998)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1 .
2013-11-01 20:33:10.167
[ctssd(19042)]CRS-2401:The Cluster Time Synchronization Service started on host rac1.
2013-11-01 20:33:10.168
[ctssd(19042)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac1.
2013-11-01 20:34:41.754
[cssd(18998)]CRS-1605:CSSD voting file is online: /dev/asm-diskc; details in /u01/11.2.0/grid/log/rac1/cssd/ocssd.log.
2013-11-01 20:34:41.755
[cssd(18998)]CRS-1626:A Configuration change request completed successfully
2013-11-01 20:34:41.760
[cssd(18998)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1 .


Query again: the former node 1 now has node number 2:


[root@RAC1 tmp]# olsnodes -s -t -n
rac2    1       Active  Unpinned
rac1    2       Active  Unpinned
[root@RAC1 tmp]# 


If you do not change the other nodes, they pick up the smallest unused numbers in sequence. If you had, say, 8 nodes, you could assign all of their node numbers in whatever order you like, one node at a time, using the same method as above.


Four ways to locate the voting disk and four ways to locate the OCR

Part 1: four ways to find the voting disk
Method 1:


[root@RAC1 ~]#  crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   983f5a2d804d4f81bfddc68e5bcf6e65 (/dev/asm-diskc) [DATA]
Located 1 voting disk(s).
[root@RAC1 ~]# 

Method 2:


[grid@RAC1 ~]$ asmcmd dsget
parameter:/dev/asm*
profile:/dev/asm*
[grid@RAC1 ~]$ 

Method 3:


[grid@RAC1 ~]$ gpnptool get
Warning: some command line parameters were defaulted. Resulting command line: 
         /u01/11.2.0/grid/bin/gpnptool.bin get -o-

<?xml version="1.0" encoding="UTF-8"?><gpnp:GPnP-Profile Version="1.0" xmlns="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:gpnp="http://www.grid-pnp.org/2005/11/gpnp-profile" xmlns:orcl="http://www.oracle.com/gpnp/2005/11/gpnp-profile" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.grid-pnp.org/2005/11/gpnp-profile gpnp-profile.xsd" ProfileSequence="4" ClusterUId="4c62524e6a7dff62ffdae005dc6a08d6" ClusterName="racdb" PALocation=""><gpnp:Network-Profile><gpnp:HostNetwork id="gen" HostName="*"><gpnp:Network id="net1" IP="192.168.56.0" Adapter="eth0" Use="public"/><gpnp:Network id="net2" IP="192.168.88.0" Adapter="eth1" Use="cluster_interconnect"/></gpnp:HostNetwork></gpnp:Network-Profile><orcl:CSS-Profile id="css" DiscoveryString="+asm" LeaseDuration="400"/><orcl:ASM-Profile id="asm" DiscoveryString="/dev/asm*" SPFile="+DATA/racdb/asmparameterfile/registry.253.814453247"/><ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#"><ds:SignedInfo><ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/><ds:SignatureMethod Algorithm="http://www.w3.org/2000/09/xmldsig#rsa-sha1"/><ds:Reference URI=""><ds:Transforms><ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/><ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"> <InclusiveNamespaces xmlns="http://www.w3.org/2001/10/xml-exc-c14n#" PrefixList="gpnp orcl xsi"/></ds:Transform></ds:Transforms><ds:DigestMethod Algorithm="http://www.w3.org/2000/09/xmldsig#sha1"/><ds:DigestValue>NMq7bQ3EHBa9wKwma65k60q9+lo=</ds:DigestValue></ds:Reference></ds:SignedInfo><ds:SignatureValue>hkf1TmKQMdERaBHuHx57vN/GxywzzxuZn0FjO0WqYpFWquF2mc+Pu0isVUHLvdhxzJ1snDmP+vVZ5V8Q1aaBi2KctXvHS1d81mY9PlXj47+/wIy38qZtg0vmazXscDG7rVfQb3jCtpa9vWE7S4Lq9p9Jq73emHvHk9FeTzYAs18=</ds:SignatureValue></ds:Signature></gpnp:GPnP-Profile>
Success.
[grid@RAC1 ~]$ 

Method 4:


[grid@RAC1 ~]$ kfed dev=/dev/asm-diskc op=READ | egrep "kfdhdb.vf|ausize"
kfdhdb.ausize:                  1048576 ; 0x0bc: 0x00100000
kfdhdb.vfstart:                     192 ; 0x0ec: 0x000000c0
kfdhdb.vfend:                       224 ; 0x0f0: 0x000000e0
[grid@RAC1 ~]$ 

Here you can see the AU size is 1M, and the voting disk starts at AU 192 and ends at AU 224, 32 AUs in total:


[grid@RAC1 ~]$ dd if=/dev/asm-diskc bs=1M skip=192 count=32 | od -c > /tmp/vote.out
32+0 records in
32+0 records out
33554432 bytes (34 MB) copied, 1.30292 seconds, 25.8 MB/s
[grid@RAC1 ~]$ 

We skipped the first 192M at the head of the disk and dumped the following 32M, which is exactly the 32 AUs of voting disk content we are after.
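
As a sanity check that those 32 AUs really contain the voting data, here is a sketch that scans the raw dump for the cluster node names:

# sketch: the node names should show up inside the voting-file AUs
dd if=/dev/asm-diskc bs=1M skip=192 count=32 2>/dev/null | strings | grep -i rac | sort -u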

Part 2: ways to find the OCR
Method 1:


[grid@RAC1 ~]$ cat /etc/oracle/ocr.loc
ocrconfig_loc=+DATA
local_only=FALSE
[grid@RAC1 ~]$ 

Method 2:


[grid@RAC1 ~]$ ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  :          3
         Total space (kbytes)     :     262120
         Used space (kbytes)      :       3056
         Available space (kbytes) :     259064
         ID                       : 1280092817
         Device/File Name         :      +DATA
                                    Device/File integrity check succeeded

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

                                    Device/File not configured

         Cluster registry integrity check succeeded

         Logical corruption check bypassed due to non-privileged user

[grid@RAC1 ~]$ 

Method 3: ocrdump
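
For example, a minimal sketch, run as root (ocrdump writes to a file named OCRDUMPFILE in the current directory by default; a target file name can also be supplied):

# sketch: dump the OCR contents to a text file and take a quick look
ocrdump /tmp/ocr.dmp
head -20 /tmp/ocr.dmp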

Method 4:


set lines 120
set pages 100
column name format a40 wra
column type format a10 tru
select af.group_number, af.file_number, aa.name, af.type, af.redundancy
from v$asm_file af, v$asm_alias aa
where aa.file_number = af.file_number and aa.group_number=af.group_number and af.type = 'OCRFILE' ;

SYS% racdb1> select af.group_number, af.file_number, aa.name, af.type, af.redundancy
  2  from v$asm_file af, v$asm_alias aa
  3  where aa.file_number = af.file_number and aa.group_number=af.group_number and af.type = 'OCRFILE' ;

GROUP_NUMBER FILE_NUMBER NAME                                     TYPE       REDUND
------------ ----------- ---------------------------------------- ---------- ------
           2         255 REGISTRY.255.814453249                   OCRFILE    UNPROT

Elapsed: 00:00:08.70
SYS% racdb1> 

Here we see the OCR's ASM file number is 255; with the file number we can look up its AUs:


SQL> set lines 120
SQL> set pages 100
SQL> column name format a40 wra
SQL> column type format a10 tru
SQL> select af.group_number, af.file_number, aa.name, af.type, af.redundancy
  2  from v$asm_file af, v$asm_alias aa
  3  where aa.file_number = af.file_number and aa.group_number=af.group_number and af.type = 'OCRFILE' ;


GROUP_NUMBER FILE_NUMBER NAME                                     TYPE       REDUND
------------ ----------- ---------------------------------------- ---------- ------
           2         255 REGISTRY.255.814453249                   OCRFILE    UNPROT

SQL>
SQL> select NUMBER_KFFXP, XNUM_KFFXP, DISK_KFFXP, AU_KFFXP from X$KFFXP where NUMBER_KFFXP = 255;

NUMBER_KFFXP XNUM_KFFXP DISK_KFFXP   AU_KFFXP
------------ ---------- ---------- ----------
         255          0          1         29
         255          1          0         32
         255          2          1         30
         255          3          0         33
         255          4          1         31
         255          5          0         34
         255          6          1         32
         255          7          0         35
         255          8          1         33
         255          9          0         36
         255         10          1         34
         255         11          0         37
         255         12          1         35
         255         13          0         38
         255         14          1         36
         255         15          0         39
         255         16          1         37
         255         17          0         40
         255         18          1         38
         255         19          0         41
         255         20          1         39
         255         21          0         42
         255         22          1         40
         255         23          0         43
         255         24          1         41
         255         25          0         44
         255         26          1         42
         255         27          0         45
         255         28          1         43
         255         29          0         46
         255         30          1         44
         255         31          0         47
         255         32          1         45
         255         33          0         48
         255         34          1         46
         255         35          0         49
         255         36          1         47
         255         37          0         50
         255         38          1         48
         255         39          0         51
         255         40          1         49
         255         41          0         52
         255         42          1         50
         255         43          0         53
         255         44          1         51
         255         45          0         54
         255         46          1         52
         255         47          0         55
         255         48          1         53
         255         49          0         56
         255         50          1         54
         255         51          0         57
         255         52          1         55
         255         53          0         58
         255         54          1         56
         255         55          0         59
         255         56          1         57
         255         57          0         60
         255         58          1         58
         255         59          0         61
         255         60          1         59
         255         61          0         63
         255         62          1         60
         255         63          0         64
         255         64          1         61
         255         65          0         65
         255         66          1         62
         255         67          0         66
         255         68          1         63
         255         69          0         67
         255         70          1         64
         255         71          0         68
         255         72          1         65
         255         73          0         69
         255         74          1         66
         255         75          0         70
         255         76          1         67
         255         77          0         71
         255         78          1         68
         255         79          0         72
         255         80          1         69
         255         81          0         73
         255         82          1         70
         255         83          0         74
         255         84          1         71
         255         85          0         75
         255         86          1         72
         255         87          0         76
         255         88          1         73
         255         89          0         77
         255         90          1         74
         255         91          0         78
         255         92          1         75
         255         93          0         79
         255         94          1         76
         255         95          0         80
         255         96          1         77

NUMBER_KFFXP XNUM_KFFXP DISK_KFFXP   AU_KFFXP
------------ ---------- ---------- ----------
         255         97          0         81
         255         98          1         78
         255         99          0         82
         255        100          1         79
         255        101          0         83
         255        102          1         80
         255        103          0         84
         255        104          1         81
         255        105          0         85
         255        106          1         82
         255        107          0         86
         255        108          1         83
         255        109          0         87
         255        110          1         84
         255        111          0         88
         255        112          1         85
         255        113          0         89
         255        114          1         86
         255        115          0         90
         255        116          1         87
         255        117          0         91
         255        118          1         88
         255        119          0         92
         255        120          1         89
         255        121          0         93
         255        122          1         90
         255        123          0         94
         255        124          1         91
         255        125          0         95
         255        126          1         92
         255        127          0         96
         255        128          1         93
         255        129          0         97
         255        130          1         94
         255        131          0         98
         255        132          1         95
         255        133          0         99
         255        134          1         96
         255        135          0        100
         255        136          1         97
         255        137          0        101
         255        138          1         98
         255        139          0        102
         255        140          1         99
         255        141          0        103
         255        142          1        100
         255        143          0        104
         255        144          1        101
         255        145          0        105
         255        146          1        102
         255        147          0        106
         255        148          1        103
         255        149          0        107
         255        150          1        104
         255        151          0        108
         255        152          1        105
         255        153          0        109
         255        154          1        106
         255        155          0        110
         255        156          1        107
         255        157          0        111
         255        158          1        108
         255        159          0        112
         255        160          1        109
         255        161          0        113
         255        162          1        110
         255        163          0        114
         255        164          1        111
         255        165          0        115
         255        166          1        112
         255        167          0        116
         255        168          1        113
         255        169          0        117
         255        170          1        114
         255        171          0        118
         255        172          1        115
         255        173          0        119
         255        174          1        116
         255        175          0        120
         255        176          1        117
         255        177          0        121
         255        178          1        118
         255        179          0        122
         255        180          1        119
         255        181          0        123
         255        182          1        120
         255        183          0        124
         255        184          1        121
         255        185          0        125
         255        186          1        122
         255        187          0        126
         255        188          1        123
         255        189          0        127
         255        190          1        124
         255        191          0        128
         255        192          1        125
         255        193          0        129

NUMBER_KFFXP XNUM_KFFXP DISK_KFFXP   AU_KFFXP
------------ ---------- ---------- ----------
         255        194          1        126
         255        195          0        130
         255        196          1        127
         255        197          0        131
         255        198          1        128
         255        199          0        132
         255        200          1        129
         255        201          0        133
         255        202          1        130
         255        203          0        134
         255        204          1        131
         255        205          0        135
         255        206          1        132
         255        207          0        136
         255        208          1        133
         255        209          0        137
         255        210          1        134
         255        211          0        138
         255        212          1        135
         255        213          0        139
         255        214          1        136
         255        215          0        140
         255        216          1        137
         255        217          0        141
         255        218          1        138
         255        219          0        142
         255        220          1        139
         255        221          0        143
         255        222          1        140
         255        223          0        144
         255        224          1        141
         255        225          0        145
         255        226          1        142
         255        227          0        146
         255        228          1        143
         255        229          0        147
         255        230          1        144
         255        231          0        148
         255        232          1        145
         255        233          0        149
         255        234          1        146
         255        235          0        150
         255        236          1        147
         255        237          0        151
         255        238          1        148
         255        239          0        152
         255        240          1        149
         255        241          0        153
         255        242          1        150
         255        243          0        154
         255        244          1        151
         255        245          0        155
         255        246          1        152
         255        247          0        156
         255        248          1        153
         255        249          0        157
         255        250          1        154
         255        251          0        158
         255        252          1        155
         255        253          0        159
         255        254          1        156
         255        255          0        160
         255        256          1        157
         255        257          0        161
         255        258          1        158
         255        259          0        162
         255        260          1        159
         255 2147483648          0         62

262 rows selected.

SQL> 
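From the extent map above, file 255 occupies 261 ordinary extents (XNUM_KFFXP 0 through 260), one AU each, laid out alternately across disks 0 and 1 of the disk group; since the file's redundancy is UNPROT, each extent lives on a single disk with no mirror copy. The last row, with XNUM_KFFXP = 2147483648 (0x80000000), is evidently a special marker rather than a regular data extent.

To go from an extent to something kfed can read, DISK_KFFXP must be translated into a physical disk path through v$asm_disk. A minimal sketch (the GROUP_KFFXP filter is added because NUMBER_KFFXP alone is not unique across disk groups):

select x.XNUM_KFFXP extent#, d.path, x.AU_KFFXP au#
from X$KFFXP x, v$asm_disk d
where x.GROUP_KFFXP = d.group_number
  and x.DISK_KFFXP = d.disk_number
  and x.GROUP_KFFXP = 2           -- the OCR's disk group, from the query above
  and x.NUMBER_KFFXP = 255        -- the OCR's file number
order by x.XNUM_KFFXP;

With the path returned for extent 0 (disk 1, AU 29 in this output), a command of the form kfed read <path> aun=29 blkn=0 would dump the first block of the OCR file; on Exadata, remember to pass ausz=4m as shown earlier, since kfed otherwise assumes a 1M AU.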
