Among the node eviction rules there is one tied to the node number: by default, the node with the smaller node number is kept and the node with the larger number is evicted (there are many other factors, such as sub-cluster size and grouping, that I won't go into here). In a two-node cluster that loses its interconnect, for example, node number 1 would normally survive and node number 2 would be evicted.
The other day someone in a chat group mentioned wanting to change node numbers. Today I suddenly remembered it and gave it a try. Conclusions:
1. You can use ocrpatch to assign any node number you like to any node.
2. If you don't specify anything, the node from which the installation was run becomes node 1 and the remaining nodes are numbered sequentially after it.
First, record the current OCR and voting disk information:
[root@RAC1 ~]# olsnodes -s -t -n
rac1    1       Active  Unpinned
rac2    2       Active  Unpinned
[root@RAC1 ~]#
[root@RAC1 ~]# ocrconfig -showbackup

rac1     2013/11/01 19:37:35     /u01/11.2.0/grid/cdata/racdb/backup00.ocr

rac1     2013/11/01 15:37:33     /u01/11.2.0/grid/cdata/racdb/backup01.ocr

rac1     2013/11/01 15:37:33     /u01/11.2.0/grid/cdata/racdb/day.ocr

rac1     2013/11/01 15:37:33     /u01/11.2.0/grid/cdata/racdb/week.ocr
PROT-25: Manual backups for the Oracle Cluster Registry are not available
[root@RAC1 ~]#
Here we can see that node 1 (rac1) has node number 1 and node 2 (rac2) has node number 2.
I am going to change this so that rac1 becomes node number 2 and rac2 becomes node number 1.
[root@RAC1 ~]# crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   983f5a2d804d4f81bfddc68e5bcf6e65 (/dev/asm-diskc) [DATA]
Located 1 voting disk(s).
[root@RAC1 ~]#
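Note the PROT-25 message above: no manual OCR backup exists yet. The walkthrough below does not show an explicit backup step, but since ocrpatch's own disclaimer asks for one (and we will be editing the local registry), here is a minimal, hedged sketch of what I would run first; the idea of backing up both registries here is my own addition, not part of the original transcript:

ocrconfig -manualbackup              # on-demand OCR backup (run as root)
ocrconfig -showbackup manual         # confirm the manual backup is now listed
ocrconfig -local -manualbackup       # same for the OLR, which is what gets patched below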
Have a first look with ocrpatch in read-only mode:
[root@RAC1 tmp]# ./ocrpatch.bin

OCR/OLR Patch Tool
Version 11.2 (20100325)
Oracle Clusterware Release 11.2.0.3.0
Copyright (c) 2005, 2013, Oracle.  All rights reserved.

Initialization - please wait
[WARN] local clusterware stack is running
[INFO] OCR checked out OK

DISCLAIMER:
 * USE OCRPATCH AT YOUR OWN RISK.
 * TAKE OCR BACKUP BEFORE MAKING ANY CHANGES.
 * ENSURE THAT ORACLE CLUSTERWARE IS NOT RUNNING ON ANY CLUSTER NODE
 * FAILURE TO PATCH OCR PROPERLY MAY CAUSE FAILURE TO START THE CLUSTERWARE STACK.

OCR device information:
  open mode ......... : READ-ONLY
  device 0 ......... : OPEN, +DATA
  device 1 ......... : NOT_CONFIGURED, N/A
  device 2 ......... : NOT_CONFIGURED, N/A
  device 3 ......... : NOT_CONFIGURED, N/A
  device 4 ......... : NOT_CONFIGURED, N/A
  selected device(s) : ANY
[INFO] operating in READ-ONLY mode
[INFO] Certain functionality is disabled in this mode

ocrpatch>
OK, now let's make the change.
Open two more sessions and stop CRS on node 1 and node 2 respectively:
[root@RAC1 ~]# crsctl stop crs
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.crsd' on 'rac1'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER_DG.lsnr' on 'rac1'
CRS-2673: Attempting to stop 'ora.oc4j' on 'rac1'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'rac1'
CRS-2673: Attempting to stop 'ora.racdb.db' on 'rac1'
CRS-2673: Attempting to stop 'ora.LISTENER.lsnr' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER_DG.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.r-dg1-vip.vip' on 'rac1'
CRS-2677: Stop of 'ora.LISTENER.lsnr' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.rac1.vip' on 'rac1'
CRS-2677: Stop of 'ora.r-dg1-vip.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.r-dg1-vip.vip' on 'rac2'
CRS-2677: Stop of 'ora.rac1.vip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.rac1.vip' on 'rac2'
CRS-2677: Stop of 'ora.racdb.db' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ASMDATA.dg' on 'rac1'
CRS-2676: Start of 'ora.r-dg1-vip.vip' on 'rac2' succeeded
CRS-2676: Start of 'ora.rac1.vip' on 'rac2' succeeded
CRS-2677: Stop of 'ora.ASMDATA.dg' on 'rac1' succeeded
CRS-2677: Stop of 'ora.oc4j' on 'rac1' succeeded
CRS-2677: Stop of 'ora.DATA.dg' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ons' on 'rac1'
CRS-2673: Attempting to stop 'ora.net2.network' on 'rac1'
CRS-2677: Stop of 'ora.net2.network' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ons' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.net1.network' on 'rac1'
CRS-2677: Stop of 'ora.net1.network' on 'rac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'rac1' has completed
CRS-2677: Stop of 'ora.crsd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.evmd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.evmd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
Notice that at this point node 1 appears to hang.
Node 2, meanwhile, has already shut down cleanly.
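For completeness: the second session simply ran the same stop command on rac2; its output is not reproduced in this post (this line is my note, not part of the original transcript):

crsctl stop crs        # run as root on rac2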
Then I remembered that the ocrpatch session was still open on node 1. A few seconds after quitting it, the shutdown continued:
ocrpatch> quit
[OK] Exiting due to user request ...
[root@RAC1 tmp]#

CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.crf' on 'rac1'
CRS-2677: Stop of 'ora.crf' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@RAC1 ~]#
[root@RAC1 ~]#
Start the clusterware stack on node 1 in exclusive mode, without CRSD:
[root@RAC1 tmp]# crsctl start crs -excl -nocrs
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
[root@RAC1 tmp]#
[root@RAC1 tmp]# crsctl status res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       rac1                     Started
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       rac1
ora.crf
      1        OFFLINE OFFLINE
ora.crsd
      1        OFFLINE OFFLINE
ora.cssd
      1        ONLINE  ONLINE       rac1
ora.cssdmonitor
      1        ONLINE  ONLINE       rac1
ora.ctssd
      1        ONLINE  ONLINE       rac1                     ACTIVE:0
ora.diskmon
      1        OFFLINE OFFLINE
ora.evmd
      1        OFFLINE OFFLINE
ora.gipcd
      1        ONLINE  ONLINE       rac1
ora.gpnpd
      1        ONLINE  ONLINE       rac1
ora.mdnsd
      1        ONLINE  ONLINE       rac1
[root@RAC1 tmp]#
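As a quick sanity check (my addition, not in the original transcript): exclusive mode is easy to confirm from the process list, because ocssd.bin carries the -X flag, as the ps output further down in this post also shows:

ps -ef | grep ocssd.bin        # in exclusive mode the daemon appears as "ocssd.bin -X"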
Move the voting disk onto a plain file on the local filesystem, so that the node-number leases recorded in the current voting disk are discarded and the new hint can take effect when the voting disk is recreated later:
[root@RAC1 tmp]# crsctl replace votedisk /tmp/vote.dbf
Now formatting voting disk: /tmp/vote.dbf.
CRS-4256: Updating the profile
Successful addition of voting disk 8653b0664b074fdebf5f64b7e7ad539b.
Successful deletion of voting disk 983f5a2d804d4f81bfddc68e5bcf6e65.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
[root@RAC1 tmp]#

Stop CRS:

[root@RAC1 tmp]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@RAC1 tmp]#
[root@RAC1 tmp]# ps -ef|grep d.bin
grid     18092     1  0 20:15 ?        00:00:01 /u01/11.2.0/grid/bin/oraagent.bin
root     18125     1  0 20:15 ?        00:00:00 /u01/11.2.0/grid/bin/cssdmonitor
root     18146     1  0 20:15 ?        00:00:00 /u01/11.2.0/grid/bin/cssdagent
root     18208     1  1 20:16 ?        00:00:01 /u01/11.2.0/grid/bin/orarootagent.bin
root     18498 15161  0 20:18 pts/3    00:00:00 grep d.bin
[root@RAC1 tmp]#
[root@RAC1 tmp]# ls -lrt /tmp/vote.dbf
-rw-r----- 1 grid oinstall 21004800 Nov  1 20:18 /tmp/vote.dbf
[root@RAC1 tmp]# ls -lrt /home/grid/vote.bak
-rw-r----- 1 root root 21004800 Nov  1 20:19 /home/grid/vote.bak
[root@RAC1 tmp]#
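The ls output above shows a safety copy of the voting file at /home/grid/vote.bak, same size, owned by root. The copy command itself is not in the transcript; presumably it was something as simple as the following, run as root (my guess, shown only for completeness):

cp /tmp/vote.dbf /home/grid/vote.bak     # plain copy of the temporary voting file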
Now open the OLR in read-write mode with ocrpatch (the -l flag selects the local registry and -u opens it read-write; the node-number hint we want lives in each node's OLR):
[root@RAC1 tmp]# ./ocrpatch -l -u

OCR/OLR Patch Tool
Version 11.2 (20100325)
Oracle Clusterware Release 11.2.0.3.0
Copyright (c) 2005, 2013, Oracle.  All rights reserved.

Initialization - please wait
[INFO] OLR checked out OK

DISCLAIMER:
 * USE OCRPATCH AT YOUR OWN RISK.
 * TAKE OLR BACKUP BEFORE MAKING ANY CHANGES.
 * ENSURE THAT ORACLE CLUSTERWARE IS NOT RUNNING ON ANY CLUSTER NODE
 * FAILURE TO PATCH OLR PROPERLY MAY CAUSE FAILURE TO START THE CLUSTERWARE STACK.

OLR device information:
  open mode ......... : READ-WRITE
  device 0 ......... : OPEN, /u01/11.2.0/grid/cdata/rac1.olr

ocrpatch>

Let's look at its main functions:

ocrpatch> h

Usage: ocrpatch [-u] [-l] || [-v] || [-b <backupfile>]
  ocrpatch                          open OCR in read-only mode
  ocrpatch -u                       in read-write mode
  ocrpatch -l                       open OLR in read-only mode
  ocrpatch -u -l                    in read-write mode
  ocrpatch -b <backupfile>          open OCR backup file in read-only mode
  ocrpatch -v                       show ocrpatch version information

KEY operations
  gv <key>                          get key value
  ek <key>                          enumerate subkeys for key
  gks <key>                         get key security attributes
  sv <key> <dtype> <value>          set key value
                                    datatype: (u)b4|(o)ratext|
                                              (b)ytestream|(ubi)g_ora
  ck <key>.<subkey>                 create key.subkey
  ckv <key>.<subkey> <dtype> <val>  create key.subkey + setval
                                    datatype: (u)b4|(o)ratext|
                                              (b)ytestream|(ubi)g_ora
  mk <srckey> <tgtkey>              move srckey to tgtkey.subkey
  dv <key>                          delete key value
  dk <key>                          delete key
  dkr <key>                         delete key and all subkeys
  sku <key> <username> [<group>]    set key username[+group]
  skp <key> <realm> <permission>    set key permissions
                                    realm: (u)ser|(g)roup|(o)ther
                                    perm: (n)one|(q)uery_key|(e)numerate_sub_keys|
                                          (r)ead|(s)et_key|(create_l)ink|(create_s)ub_key|
                                          (d)elete_key|(a)ll_access
  sb                                start/init batch
  eb                                execute batch
  tb                                terminate batch

BLOCK operations
  rb <block#>|<key>                 read block by block# / key name
  dn                                display native block from offset
  di                                display interpreted block
  du                                display 4k block, native mode
  of <offset>                       set offset in block, range 0-4095
  fs <string>                       find pattern
  ms <string>                       modify buffer at block/offset
  wb                                write modified block
  set <parameter>                   set parameters
                                    parameter: (m)ode switch between HEX and CHAR
                                               (f)ind switch between FORWARD and BACKWARD

MISC operations
  setenv <envvar> <value>           set environment variable value
  unsetenv <envvar>                 unset environment variable value
  getenv <envvar>                   get environment variable value
  spool on|off                      set SPOOL on|off
  ocrdmp                            dump all OLR context
  i                                 show parameters, version, info
  h                                 this help screen
  exit / quit                       exit ocrpatch

All commands support spooling the output to trace file when spool is ON.
Commands that attempt or perform a modification are always logged.

ocrpatch>
SYSTEM.css.nodenum_hint holds the node's "preferred" node number. This is node 1 and the hint is currently set to 1; let's set it to 2 and see what happens:
ocrpatch> gv SYSTEM.css.nodenum_hint
[OK] Read key <SYSTEM.css.nodenum_hint>, type=1 (UB4), size=4, value=1
ocrpatch>
ocrpatch> sv SYSTEM.css.nodenum_hint u 2
[OK] Read key <SYSTEM.css.nodenum_hint>, type=1 (UB4), size=4, value=1
[OK] Deleted value for key <SYSTEM.css.nodenum_hint>
[OK] Set value for key <SYSTEM.css.nodenum_hint>
[OK] Read key <SYSTEM.css.nodenum_hint>, type=1 (UB4), size=4, value=2
ocrpatch>
ocrpatch> gv SYSTEM.css.nodenum_hint
[OK] Read key <SYSTEM.css.nodenum_hint>, type=1 (UB4), size=4, value=2
ocrpatch>
The change has been made successfully.
ocrpatch> exit
[OK] Exiting due to user request ...
[root@RAC1 tmp]#
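Outside ocrpatch, the same OLR key can be cross-checked with ocrdump; this extra verification is my suggestion, not part of the original run (the options below are the standard 11.2 ocrdump flags):

ocrdump -local -stdout -keyname SYSTEM.css.nodenum_hint    # dump just this key from the OLR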
Now start CRS in exclusive mode again:
[root@RAC1 tmp]# crsctl start crs -excl -nocrs
CRS-4123: Oracle High Availability Services has been started.
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac1'
CRS-2676: Start of 'ora.mdnsd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac1'
CRS-2676: Start of 'ora.gpnpd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac1'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac1'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac1' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac1'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac1'
CRS-2676: Start of 'ora.diskmon' on 'rac1' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac1' succeeded
CRS-2679: Attempting to clean 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac1'
CRS-2681: Clean of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2676: Start of 'ora.ctssd' on 'rac1' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac1'
CRS-2676: Start of 'ora.asm' on 'rac1' succeeded
[root@RAC1 tmp]#
[root@RAC1 tmp]# ps -ef|grep d.bin
root     18804     1  3 20:32 ?        00:00:02 /u01/11.2.0/grid/bin/ohasd.bin exclusive
grid     18918     1  0 20:32 ?        00:00:00 /u01/11.2.0/grid/bin/oraagent.bin
grid     18930     1  0 20:32 ?        00:00:00 /u01/11.2.0/grid/bin/mdnsd.bin
grid     18940     1  0 20:32 ?        00:00:00 /u01/11.2.0/grid/bin/gpnpd.bin
root     18951     1  0 20:32 ?        00:00:00 /u01/11.2.0/grid/bin/cssdmonitor
grid     18953     1  0 20:32 ?        00:00:00 /u01/11.2.0/grid/bin/gipcd.bin
root     18974     1  0 20:32 ?        00:00:00 /u01/11.2.0/grid/bin/cssdagent
grid     18998     1  1 20:32 ?        00:00:00 /u01/11.2.0/grid/bin/ocssd.bin -X
root     19029     1  2 20:33 ?        00:00:00 /u01/11.2.0/grid/bin/orarootagent.bin
root     19042     1  0 20:33 ?        00:00:00 /u01/11.2.0/grid/bin/octssd.bin
root     19185 15161  0 20:33 pts/3    00:00:00 grep d.bin
[root@RAC1 tmp]#
Check the status; everything is normal:
[root@RAC1 tmp]# crsctl status res -t -init
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       rac1                     Started
ora.cluster_interconnect.haip
      1        ONLINE  ONLINE       rac1
ora.crf
      1        OFFLINE OFFLINE
ora.crsd
      1        OFFLINE OFFLINE
ora.cssd
      1        ONLINE  ONLINE       rac1
ora.cssdmonitor
      1        ONLINE  ONLINE       rac1
ora.ctssd
      1        ONLINE  ONLINE       rac1                     ACTIVE:0
ora.diskmon
      1        OFFLINE OFFLINE
ora.evmd
      1        OFFLINE OFFLINE
ora.gipcd
      1        ONLINE  ONLINE       rac1
ora.gpnpd
      1        ONLINE  ONLINE       rac1
ora.mdnsd
      1        ONLINE  ONLINE       rac1
[root@RAC1 tmp]#
Re-initialize the voting disk by putting it back into +DATA:
[root@RAC1 tmp]# crsctl replace votedisk +DATA
CRS-4256: Updating the profile
Successful addition of voting disk 34a63e23814e4fb9bf54732330e6e2c5.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
[root@RAC1 tmp]#
Then restart CRS normally:
[root@RAC1 tmp]# crsctl stop crs -f
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'rac1'
CRS-2673: Attempting to stop 'ora.ctssd' on 'rac1'
CRS-2673: Attempting to stop 'ora.asm' on 'rac1'
CRS-2677: Stop of 'ora.mdnsd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.asm' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'rac1'
CRS-2677: Stop of 'ora.ctssd' on 'rac1' succeeded
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'rac1'
CRS-2677: Stop of 'ora.cssd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'rac1'
CRS-2677: Stop of 'ora.gipcd' on 'rac1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'rac1'
CRS-2677: Stop of 'ora.gpnpd' on 'rac1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[root@RAC1 tmp]#
[root@RAC1 tmp]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
[root@RAC1 tmp]#
The voting disk reconfiguration shows up in the clusterware alert log as follows (note the CRS-1709 lease failure while no voting file was configured, and the voting file coming back online afterwards):
2013-11-01 20:32:51.769
[cssd(18998)]CRS-1713:CSSD daemon is started in exclusive mode
2013-11-01 20:32:53.684
[ohasd(18804)]CRS-2767:Resource state recovery not attempted for 'ora.diskmon' as its target state is OFFLINE
2013-11-01 20:32:56.417
[cssd(18998)]CRS-1709:Lease acquisition failed for node rac1 because no voting file has been configured; Details at (:CSSNM00031:) in /u01/11.2.0/grid/log/rac1/cssd/ocssd.log
2013-11-01 20:33:05.463
[cssd(18998)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1 .
2013-11-01 20:33:10.167
[ctssd(19042)]CRS-2401:The Cluster Time Synchronization Service started on host rac1.
2013-11-01 20:33:10.168
[ctssd(19042)]CRS-2407:The new Cluster Time Synchronization Service reference node is host rac1.
2013-11-01 20:34:41.754
[cssd(18998)]CRS-1605:CSSD voting file is online: /dev/asm-diskc; details in /u01/11.2.0/grid/log/rac1/cssd/ocssd.log.
2013-11-01 20:34:41.755
[cssd(18998)]CRS-1626:A Configuration change request completed successfully
2013-11-01 20:34:41.760
[cssd(18998)]CRS-1601:CSSD Reconfiguration complete. Active nodes are rac1 .
Query again: the former node 1 (rac1) has now been given node number 2:
[root@RAC1 tmp]# olsnodes -s -t -n
rac2    1       Active  Unpinned
rac1    2       Active  Unpinned
[root@RAC1 tmp]#
If you don't touch the other nodes, they simply pick up the lowest unused numbers in order (which is why rac2 ended up as node 1 here). If you had, say, an 8-node cluster, you could give every node exactly the number you want, in whatever order you like, by repeating the steps above on each node; a condensed sketch follows.
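A minimal, hedged sketch of the full sequence for a larger cluster, assuming the same 11.2 environment, the +DATA diskgroup and the temporary /tmp/vote.dbf used in this post (host names and the chosen numbers are purely illustrative):

# 0) As root, stop the clusterware stack on ALL nodes:
#       crsctl stop crs
# 1) On one node, park the voting disk on a plain file and stop the stack again:
crsctl start crs -excl -nocrs
crsctl replace votedisk /tmp/vote.dbf
crsctl stop crs -f
# 2) On EVERY node (stack down), open that node's OLR read-write and set its
#    preferred number interactively, exactly as shown above, e.g. to make a host node 5:
#       ./ocrpatch -l -u
#       ocrpatch> sv SYSTEM.css.nodenum_hint u 5
#       ocrpatch> exit
# 3) Back on the first node, return the voting disk and restart normally:
crsctl start crs -excl -nocrs
crsctl replace votedisk +DATA
crsctl stop crs -f
crsctl start crs                      # then start crs on the remaining nodes
# 4) Verify the new numbering:
olsnodes -s -t -n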