
While installing 11g RAC on the Huawei cloud platform, root.sh completed normally on node 1, but failed on node 2 with the errors below.

root.sh error output:
db2 oraInventory]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to oracle-ohasd.service
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node TESTdb1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Start of resource "ora.asm" failed
CRS-2672: Attempting to start 'ora.asm' on 'TESTdb2'
CRS-5017: The resource action "ora.asm start" encountered the following error:
ORA-03113: end-of-file on communication channel
Process ID: 0
Session ID: 0 Serial number: 0
. For details refer to "(:CLSN00107:)" in "/u01/app/11.2.0/grid/log/TESTdb2/agent/ohasd/oraagent_grid/oraagent_grid.log".
CRS-2674: Start of 'ora.asm' on 'TESTdb2' failed
CRS-2679: Attempting to clean 'ora.asm' on 'TESTdb2'
CRS-2681: Clean of 'ora.asm' on 'TESTdb2' succeeded
CRS-4000: Command Start failed, or completed with errors.
Failed to start Oracle Grid Infrastructure stack
Failed to start ASM at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 1339.
/u01/app/11.2.0/grid/perl/bin/perl -I/u01/app/11.2.0/grid/perl/lib -I/u01/app/11.2.0/grid/crs/install /u01/app/11.2.0/grid/crs/install/rootcrs.pl execution failed
[root@TESTdb2 oraInventory]#

Since the root.sh error indicates that ASM could not start, check the ASM alert log.

Node 2 ASM alert log:
* Load Monitor used for high load check
* New Low - High Load Threshold Range = [7680 - 10240]
Sat Dec 11 11:29:15 2021
LMS0 started with pid=11, OS id=104207 at elevated priority
Sat Dec 11 11:29:15 2021
LMHB started with pid=12, OS id=104211
Sat Dec 11 11:29:15 2021
MMAN started with pid=13, OS id=104213
Sat Dec 11 11:29:15 2021
DBW0 started with pid=14, OS id=104215
Sat Dec 11 11:29:15 2021
LGWR started with pid=15, OS id=104217
Sat Dec 11 11:29:16 2021
CKPT started with pid=16, OS id=104219
Sat Dec 11 11:29:16 2021
SMON started with pid=17, OS id=104221
Sat Dec 11 11:29:16 2021
RBAL started with pid=18, OS id=104223
Sat Dec 11 11:29:16 2021
GMON started with pid=19, OS id=104225
Sat Dec 11 11:29:16 2021
MMON started with pid=20, OS id=104227
Sat Dec 11 11:29:16 2021
MMNL started with pid=21, OS id=104229
lmon registered with NM - instance number 2 (internal mem no 1)
Sat Dec 11 11:31:13 2021
System state dump requested by (instance=2, osid=104186 (PMON)), summary=[abnormal instance termination].
System State dumped to trace file /u01/app/grid/diag/asm/+asm/+ASM2/trace/+ASM2_diag_104197_20211211113113.trc
Sat Dec 11 11:31:13 2021
PMON (ospid: 104186): terminating the instance due to error 481
Sat Dec 11 11:31:13 2021
ORA-1092 : opitsk aborting process
Dumping diagnostic data in directory=[cdmp_20211211113113], requested by (instance=2, osid=104186 (PMON)), summary=[abnormal instance termination].
Instance terminated by PMON, pid = 104186

Next, examine the trace file that was produced:

Trace file /u01/app/grid/diag/asm/+asm/+ASM2/trace/+ASM2_diag_104197_20211211113113.trc
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
ORACLE_HOME = /u01/app/11.2.0/grid
System name: Linux
Node name: db2
Release: 3.10.0-1062.el7.x86_64
Version: #1 SMP Wed Aug 7 18:08:02 UTC 2019
Machine: x86_64
Instance name: +ASM2
Redo thread mounted by this instance: 0 <none>
Oracle process number: 6
Unix process pid: 104197, image: oracle@db2 (DIAG)

*** 2021-12-11 11:31:13.069
*** SESSION ID:(787.1) 2021-12-11 11:31:13.069
*** CLIENT ID:() 2021-12-11 11:31:13.069
*** SERVICE NAME:() 2021-12-11 11:31:13.069
*** MODULE NAME:() 2021-12-11 11:31:13.069
*** ACTION NAME:() 2021-12-11 11:31:13.069

kjzdattdlm: Can not attach to DLM (LMON up=[TRUE], DB mounted=[FALSE]).
===================================================
SYSTEM STATE (level=10)
------------
System global information:
processes: base 0xa2f09820, size 680, cleanup 0xa2f64960
allocation: free sessions 0xa2a782b0, free calls (nil)
control alloc errors: 0 (process), 0 (session), 0 (call)
PMON latch cleanup depth: 0
seconds since PMON's last scan for dead processes: 55
system statistics:
0 OS CPU Qt wait time
0 Requests to/from client

kjzdattdlm: Can not attach to DLM (LMON up=[TRUE], DB mounted=[FALSE]).

Searching MOS turned up "ASM on Non-First Node (Second or Others) Fails to Start: PMON (ospid: nnnn): terminating the instance due to error 481 (Doc ID 1383737.1)", which closely matches the symptoms.

This problem is caused by HAIP; the typical scenarios are as follows:

Scenario 1: the 169.254 range is also in use elsewhere on the network, so the HAIP addresses conflict and cannot communicate. Fix: stop the devices that are using the conflicting range.
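As a quick sketch of the check behind scenario 1, Python's `ipaddress` module can tell whether a given host address squats on the link-local range that HAIP reserves (the function name and sample addresses are illustrative, not from the original environment):

```python
import ipaddress

# Link-local range reserved for HAIP by Grid Infrastructure (RFC 3927).
HAIP_NET = ipaddress.ip_network("169.254.0.0/16")

def conflicts_with_haip(addr: str) -> bool:
    """Return True if a host address falls inside 169.254.0.0/16."""
    return ipaddress.ip_address(addr) in HAIP_NET
```

Any device whose address makes this return True is a candidate source of the conflict.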

Scenario 2: a firewall (iptables, a physical firewall, etc.) sits on the interconnect between the nodes and blocks the traffic. Fix: open the required security policy.

Scenario 3: HAIP did not start normally on all nodes. Start it with the following commands.

Check HAIP status:
$GRID_HOME/bin/crsctl stat res ora.cluster_interconnect.haip -init
Start HAIP:
$GRID_HOME/bin/crsctl start res ora.cluster_interconnect.haip -init
If HAIP still fails to start, investigate the cause further.

Scenario 4: HAIP starts normally, but some nodes are missing the corresponding route.

netstat -rn
Destination Gateway Genmask Flags MSS Window irtt Iface
<IP ADDRESS> 0.0.0.0 255.255.248.0 U 0 0 0 bond0
<IP ADDRESS> 0.0.0.0 255.255.255.0 U 0 0 0 bond2
0.0.0.0 <IP ADDRESS> 0.0.0.0 UG 0 0 0 bond0

The line for HAIP is missing, i.e:
169.254.x.x 0.0.0.0 255.255.0.0 U 0 0 0 bond2

# route add -net <HAIP subnet ID> netmask <HAIP subnet netmask> dev <private network adapter>
i.e.
# route add -net 169.254.x.x netmask 255.255.0.0 dev bond2
After adding the route, restart CRS:
# $GRID_HOME/bin/crsctl start res ora.crsd -init
or run:
"crsctl stop crs -f" and "crsctl start crs"

Scenario 5: HAIP is started normally and the route exists, but the HAIP addresses cannot be pinged.

This scenario closely matches the one I hit.

Checking the routing table shows the HAIP route is present:
[root@db2 system]# netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 1xxxxx 0.0.0.0 UG 0 0 0 eth0
xxxxxxx.0 0.0.0.0 255.255.255.128 U 0 0 0 eth0
169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1 # HAIP route is present
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1

Checking the HAIP resource shows it is ONLINE:
db2:/home/grid>crsctl stat res -t -init
--------------------------------------------------------------------------------
NAME TARGET STATE SERVER STATE_DETAILS
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
1 ONLINE OFFLINE Instance Shutdown
ora.cluster_interconnect.haip # HAIP is ONLINE
1 ONLINE ONLINE bhpzkdb2
ora.crf
1 ONLINE ONLINE bhpzkdb2
ora.crsd
1 OFFLINE OFFLINE
ora.cssd
1 ONLINE ONLINE bhpzkdb2
ora.cssdmonitor
1 ONLINE ONLINE bhpzkdb2
ora.ctssd
1 ONLINE ONLINE bhpzkdb2 OBSERVER
ora.diskmon
1 OFFLINE OFFLINE
ora.evmd
1 OFFLINE OFFLINE
ora.gipcd
1 ONLINE ONLINE bhpzkdb2
ora.gpnpd
1 ONLINE ONLINE bhpzkdb2
ora.mdnsd
1 ONLINE ONLINE bhpzkdb2
`ip a` shows the HAIP address correctly attached to the private NIC:
eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether fa:16:3e:19:a9:0e brd ff:ff:ff:ff:ff:ff
inet 192.168.xxx/24 brd 192.168.0.255 scope global noprefixroute eth1
valid_lft forever preferred_lft forever
inet 169.254.196.83/16 brd 169.254.255.255 scope global eth1:1
valid_lft forever preferred_lft forever
inet6 fe80::8f85:3337:ded0:7f70/64 scope link noprefixroute
valid_lft forever preferred_lft forever
HAIP ping test between the nodes: pinging node 2's HAIP from node 1 fails.
db1:/home/grid>ping 169.254.196.83
PING 169.254.196.83 (169.254.196.83) 56(84) bytes of data.
From 169.254.61.206 icmp_seq=1 Destination Host Unreachable
From 169.254.61.206 icmp_seq=2 Destination Host Unreachable
From 169.254.61.206 icmp_seq=3 Destination Host Unreachable
From 169.254.61.206 icmp_seq=4 Destination Host Unreachable
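The two HAIPs in the ping test above sit in the same /16, so the traffic never crosses a router; "Destination Host Unreachable" therefore points at the cloud network fabric filtering the link-local traffic rather than at missing routes. This can be confirmed with Python's `ipaddress` module:

```python
import ipaddress

net = ipaddress.ip_network("169.254.0.0/16")
node1_haip = ipaddress.ip_address("169.254.61.206")  # source shown in the ping output
node2_haip = ipaddress.ip_address("169.254.196.83")  # node 2's HAIP

# Both addresses are on-link within the same /16, so no gateway is involved.
same_subnet = node1_haip in net and node2_haip in net
```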

Solution: Oracle's suggested fix is below. Its rough meaning is that the cloud administrator should provision an additional, dedicated network for the 169.254 link-local traffic.

For Openstack Cloud implementation, engage system admin to create another neutron port to map link-local traffic. For other environment, engage SysAdmin/NetworkAdmin to review routing/network setup.


However, the cloud platform administrator could not be reached at the time, and with go-live pressure mounting I decided to configure the cluster without HAIP. The environment has only a single private NIC anyway, so it needs no extra high-availability protection.

The steps were as follows:

1. Because root.sh failed on node 2, first run a deconfig:

$GRID_HOME/crs/install/rootcrs.pl -verbose -deconfig -force
Can't locate Env.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 . /u01/app/11.2.0/grid/crs/install) at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 703.
BEGIN failed--compilation aborted at /u01/app/11.2.0/grid/crs/install/crsconfig_lib.pm line 703.
Compilation failed in require at /u01/app/11.2.0/grid/crs/install/roothas.pl line 166.
The deconfig failed because the perl-Env package was missing; install it with `yum install perl-Env`.
Re-running the deconfig then completes normally.

2. root.sh has already succeeded on node 1, so set the cluster_interconnects parameter there to force the private address to be used directly:

As the grid user:
sqlplus / as sysasm
create pfile='/home/grid/pfileasm.ora' from spfile;
alter system set cluster_interconnects='192.168.0.xx' scope=spfile sid='+ASM1'; # node 1 interconnect address
alter system set cluster_interconnects='192.168.0.xx' scope=spfile sid='+ASM2'; # node 2 interconnect address
crsctl stop resource -all # stopping takes a long time; ASM stayed in a stopping state, so in the end I connected to the ASM instance and ran shutdown abort
or simply run crsctl stop crs -f
crsctl start resource -all # start the resources again
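As an illustration of the parameter change above, a small helper can generate the per-instance ALTER SYSTEM statements from a SID-to-IP mapping (the helper and the sample addresses are hypothetical, not part of Oracle's tooling):

```python
def cluster_interconnect_ddl(sid_to_ip: dict) -> list:
    """Build ALTER SYSTEM statements pinning each ASM instance to a
    specific private interconnect address (illustrative helper only)."""
    return [
        f"alter system set cluster_interconnects='{ip}' scope=spfile sid='{sid}';"
        for sid, ip in sorted(sid_to_ip.items())
    ]

# Example with made-up addresses for a two-node cluster.
stmts = cluster_interconnect_ddl({"+ASM1": "192.168.0.11", "+ASM2": "192.168.0.12"})
```

Each statement uses scope=spfile, so a restart of the stack is needed for it to take effect, as in the steps above.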

3. Run root.sh on node 2 again:

db2 ~]# /u01/app/11.2.0/grid/root.sh
Performing root user operation for Oracle 11g

The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to oracle-ohasd.service
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node bhpzkdb1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

4. At this point the Grid installation completed successfully.

Below is some background on HAIP, as a summary.

Starting with 11.2.0.2, the database's private network no longer has to rely on third-party NIC bonding or teaming; multiple private NICs can be specified at install time, or added after installation with the oifcfg command.

Grid can activate at most four private NICs at a time, and the ora.cluster_interconnect.haip resource starts a corresponding number of HAIPs (up to four) for the interconnect communication of Oracle RAC, Oracle ASM, Oracle ACFS, and so on. HAIP picks its addresses from the reserved link-local range 169.254.*; per RFC 3927, that range must not be used for any other purpose.

With HAIP, interconnect traffic is by default load-balanced across all active interconnect interfaces; if an adapter fails or cannot communicate, its HAIP address transparently fails over to another adapter.

After GI is configured, more private network interfaces can be added with the `$GRID_HOME/bin/oifcfg setif` command. The number of HAIP addresses depends on how many private network adapters are active on the first node of the cluster.

Mapping between NIC count and HAIP count:

  1. With one active private network, Grid creates one HAIP; a single NIC gains nothing from HAIP's high availability, which needs at least two NICs.
  2. With two, Grid creates two.
  3. With more than two, Grid creates four HAIPs.
  4. The HAIP count does not change even if more private adapters are activated later; changing it requires restarting clusterware on all nodes. Newly activated adapters can, however, be used for failover.

Note: from 11.2.0.2 onward, when HAIP provides the redundancy, the private NICs must be configured on different subnets; otherwise an outage on one private NIC may cause a node reboot.
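The NIC-to-HAIP mapping listed above can be sketched as a small function (illustrative only, not an Oracle API):

```python
def haip_count(active_private_nics: int) -> int:
    """Number of HAIP addresses Grid Infrastructure creates, per the
    rules above: 1 NIC -> 1 HAIP, 2 NICs -> 2, more than 2 -> 4."""
    if active_private_nics <= 0:
        raise ValueError("at least one active private NIC is required")
    if active_private_nics == 1:
        return 1
    if active_private_nics == 2:
        return 2
    return 4
```

Note that the count is fixed from the adapters active on the first node; activating more NICs later does not raise it without a clusterware restart.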

Check HAIP status:

grid@dbrac215:/home/grid>crsctl stat res -t -init
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
1 ONLINE ONLINE dbrac215 STABLE
ora.cluster_interconnect.haip # HAIP resource is ONLINE
1 ONLINE ONLINE dbrac215 STABLE
....................................
grid@dbrac215:/home/grid>oifcfg getif
ens192 10.0.1.0 global public
ens256 192.168.153.0 global cluster_interconnect,asm
grid@dbrac215:/home/grid>oifcfg iflist -p -n
ens192 10.0.1.0 PRIVATE 255.255.255.0
ens256 192.168.153.0 PRIVATE 255.255.255.0
ens256 169.254.0.0 UNKNOWN 255.255.224.0
grid@dbrac215:/home/grid>sqlplus / as sysasm # as the grid user
SQL> select name,ip_address from v$cluster_interconnects;
NAME IP_ADDRESS
--------------- ----------------------------------------------
ens256:1 169.254.17.250
oracle@dbrac215:/home/oracle>sqlplus / as sysdba # as the oracle user
SQL> select name,ip_address from v$cluster_interconnects;

NAME IP_ADDRESS
--------------- ----------------------------------------------
ens256:1 169.254.17.250
Checked with the ip command:
ens256: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 00:50:56:80:e7:93 brd ff:ff:ff:ff:ff:ff
inet 192.168.153.215/24 brd 192.168.153.255 scope global noprefixroute ens256
valid_lft forever preferred_lft forever
inet 169.254.17.250/19 brd 169.254.31.255 scope global ens256:1
valid_lft forever preferred_lft forever
inet6 fe80::22e:a1cb:7bf9:197f/64 scope link noprefixroute
valid_lft forever preferred_lft forever

HAIP-related log files:

The haip resource is managed by ohasd.bin; the resource logs are located in:

$GRID_HOME/log/<nodename>/ohasd/ohasd.log

$GRID_HOME/log/<nodename>/agent/ohasd/orarootagent_root/orarootagent_root.log

1. Log Sample When Private Network Adapter Fails

In a multiple private network adapter environment, if one of the adapters fails:

ohasd.log

2010-09-24 09:10:00.891: [GIPCHGEN][1083025728]gipchaInterfaceFail: marking interface failing 0x2aaab0269a10 { host ’’, haName ’CLSFRAME_a2b2’, local (nil), ip ’10.11.x.x’, subnet ’10.11.x.x’, mask ’255.255.255.128’, numRef 0, numFail 0, flags 0x4d }
2010-09-24 09:10:00.902: [GIPCHGEN][1138145600]gipchaInterfaceDisable: disabling interface 0x2aaab0269a10 { host ’’, haName ’CLSFRAME_a2b2’, local (nil), ip ’10.11.x.x’, subnet ’10.11.x.x’, mask ’255.255.255.128’, numRef 0, numFail 0, flags 0x1cd }
2010-09-24 09:10:00.902: [GIPCHDEM][1138145600]gipchaWorkerCleanInterface: performing cleanup of disabled interface 0x2aaab0269a10 { host ’’, haName ’CLSFRAME_a2b2’, local (nil), ip ’10.11.0.188’, subnet ’10.11.x.x’, mask ’255.255.255.128’, numRef 0, numFail 0, flags 0x1ed }

orarootagent_root.log

2010-09-24 09:09:57.708: [ USRTHRD][1129138496] {0:0:2} failed to receive ARP request
2010-09-24 09:09:57.708: [ USRTHRD][1129138496] {0:0:2} Assigned IP 169.254.x.x no longer valid on inf eth6
2010-09-24 09:09:57.708: [ USRTHRD][1129138496] {0:0:2} VipActions::startIp {
2010-09-24 09:09:57.708: [ USRTHRD][1129138496] {0:0:2} Adding 169.254.x.x on eth6:1
2010-09-24 09:09:57.719: [ USRTHRD][1129138496] {0:0:2} VipActions::startIp }
2010-09-24 09:09:57.719: [ USRTHRD][1129138496] {0:0:2} Reassigned IP: 169.254.x.x on interface eth6
2010-09-24 09:09:58.013: [ USRTHRD][1082325312] {0:0:2} HAIP: Updating member info HAIP1;10.11.x.x#0;10.11.x.x#1
2010-09-24 09:09:58.015: [ USRTHRD][1082325312] {0:0:2} HAIP: Moving ip ’169.254.x.x’ from inf ’eth6’ to inf ’eth7’
2010-09-24 09:09:58.015: [ USRTHRD][1082325312] {0:0:2} pausing thread
2010-09-24 09:09:58.015: [ USRTHRD][1082325312] {0:0:2} posting thread
2010-09-24 09:09:58.016: [ USRTHRD][1082325312] {0:0:2} Thread:[NetHAWork]start {
2010-09-24 09:09:58.016: [ USRTHRD][1082325312] {0:0:2} Thread:[NetHAWork]start }
2010-09-24 09:09:58.016: [ USRTHRD][1082325312] {0:0:2} HAIP: Moving ip ’169.254.x.x’ from inf ’eth1’ to inf ’eth7’
2010-09-24 09:09:58.016: [ USRTHRD][1082325312] {0:0:2} pausing thread
2010-09-24 09:09:58.016: [ USRTHRD][1082325312] {0:0:2} posting thread
2010-09-24 09:09:58.016: [ USRTHRD][1082325312] {0:0:2} Thread:[NetHAWork]start {
2010-09-24 09:09:58.016: [ USRTHRD][1082325312] {0:0:2} Thread:[NetHAWork]start }
2010-09-24 09:09:58.016: [ USRTHRD][1082325312] {0:0:2} HAIP: Moving ip ’169.254.x.x’ from inf ’eth7’ to inf ’eth1’
2010-09-24 09:09:58.016: [ USRTHRD][1082325312] {0:0:2} pausing thread
2010-09-24 09:09:58.016: [ USRTHRD][1082325312] {0:0:2} posting thread
2010-09-24 09:09:58.017: [ USRTHRD][1082325312] {0:0:2} Thread:[NetHAWork]start {
2010-09-24 09:09:58.017: [ USRTHRD][1116531008] {0:0:2} [NetHAWork] thread started
2010-09-24 09:09:58.017: [ USRTHRD][1116531008] {0:0:2} Arp::sCreateSocket {
2010-09-24 09:09:58.017: [ USRTHRD][1093232960] {0:0:2} [NetHAWork] thread started
2010-09-24 09:09:58.017: [ USRTHRD][1093232960] {0:0:2} Arp::sCreateSocket {
2010-09-24 09:09:58.017: [ USRTHRD][1082325312] {0:0:2} Thread:[NetHAWork]start }
2010-09-24 09:09:58.018: [ USRTHRD][1143847232] {0:0:2} [NetHAWork] thread started
2010-09-24 09:09:58.018: [ USRTHRD][1143847232] {0:0:2} Arp::sCreateSocket {
2010-09-24 09:09:58.034: [ USRTHRD][1116531008] {0:0:2} Arp::sCreateSocket }
2010-09-24 09:09:58.034: [ USRTHRD][1116531008] {0:0:2} Starting Probe for ip 169.254.x.x
2010-09-24 09:09:58.034: [ USRTHRD][1116531008] {0:0:2} Transitioning to Probe State
2010-09-24 09:09:58.034: [ USRTHRD][1093232960] {0:0:2} Arp::sCreateSocket }
2010-09-24 09:09:58.035: [ USRTHRD][1093232960] {0:0:2} Starting Probe for ip 169.254.x.x
2010-09-24 09:09:58.035: [ USRTHRD][1093232960] {0:0:2} Transitioning to Probe State
2010-09-24 09:09:58.050: [ USRTHRD][1143847232] {0:0:2} Arp::sCreateSocket }
2010-09-24 09:09:58.050: [ USRTHRD][1143847232] {0:0:2} Starting Probe for ip 169.254.x.x
2010-09-24 09:09:58.050: [ USRTHRD][1143847232] {0:0:2} Transitioning to Probe State
2010-09-24 09:09:58.231: [ USRTHRD][1093232960] {0:0:2} Arp::sProbe {
2010-09-24 09:09:58.231: [ USRTHRD][1093232960] {0:0:2} Arp::sSend: sending type 1
2010-09-24 09:09:58.231: [ USRTHRD][1093232960] {0:0:2} Arp::sProbe }

2010-09-24 09:10:04.879: [ USRTHRD][1116531008] {0:0:2} Arp::sAnnounce {
2010-09-24 09:10:04.879: [ USRTHRD][1116531008] {0:0:2} Arp::sSend: sending type 1
2010-09-24 09:10:04.879: [ USRTHRD][1116531008] {0:0:2} Arp::sAnnounce }
2010-09-24 09:10:04.879: [ USRTHRD][1116531008] {0:0:2} Transitioning to Defend State
2010-09-24 09:10:04.879: [ USRTHRD][1116531008] {0:0:2} VipActions::startIp {
2010-09-24 09:10:04.879: [ USRTHRD][1116531008] {0:0:2} Adding 169.254.x.x on eth7:2
2010-09-24 09:10:04.880: [ USRTHRD][1116531008] {0:0:2} VipActions::startIp }
2010-09-24 09:10:04.880: [ USRTHRD][1116531008] {0:0:2} Assigned IP: 169.254.x.x on interface eth7

2010-09-24 09:10:05.150: [ USRTHRD][1143847232] {0:0:2} Arp::sAnnounce {
2010-09-24 09:10:05.150: [ USRTHRD][1143847232] {0:0:2} Arp::sSend: sending type 1
2010-09-24 09:10:05.150: [ USRTHRD][1143847232] {0:0:2} Arp::sAnnounce }
2010-09-24 09:10:05.150: [ USRTHRD][1143847232] {0:0:2} Transitioning to Defend State
2010-09-24 09:10:05.150: [ USRTHRD][1143847232] {0:0:2} VipActions::startIp {
2010-09-24 09:10:05.151: [ USRTHRD][1143847232] {0:0:2} Adding 169.254.x.x on eth1:3
2010-09-24 09:10:05.151: [ USRTHRD][1143847232] {0:0:2} VipActions::startIp }
2010-09-24 09:10:05.151: [ USRTHRD][1143847232] {0:0:2} Assigned IP: 169.254.x.x on interface eth1
2010-09-24 09:10:05.470: [ USRTHRD][1093232960] {0:0:2} Arp::sAnnounce {
2010-09-24 09:10:05.470: [ USRTHRD][1093232960] {0:0:2} Arp::sSend: sending type 1
2010-09-24 09:10:05.470: [ USRTHRD][1093232960] {0:0:2} Arp::sAnnounce }
2010-09-24 09:10:05.470: [ USRTHRD][1093232960] {0:0:2} Transitioning to Defend State
2010-09-24 09:10:05.470: [ USRTHRD][1093232960] {0:0:2} VipActions::startIp {
2010-09-24 09:10:05.471: [ USRTHRD][1093232960] {0:0:2} Adding 169.254.x.x on eth7:3
2010-09-24 09:10:05.471: [ USRTHRD][1093232960] {0:0:2} VipActions::startIp }
2010-09-24 09:10:05.471: [ USRTHRD][1093232960] {0:0:2} Assigned IP: 169.254.x.x on interface eth7
2010-09-24 09:10:06.047: [ USRTHRD][1082325312] {0:0:2} Thread:[NetHAWork]stop {
2010-09-24 09:10:06.282: [ USRTHRD][1129138496] {0:0:2} [NetHAWork] thread stopping
2010-09-24 09:10:06.282: [ USRTHRD][1129138496] {0:0:2} Thread:[NetHAWork]isRunning is reset to false here
2010-09-24 09:10:06.282: [ USRTHRD][1082325312] {0:0:2} Thread:[NetHAWork]stop }
2010-09-24 09:10:06.282: [ USRTHRD][1082325312] {0:0:2} VipActions::stopIp {
2010-09-24 09:10:06.282: [ USRTHRD][1082325312] {0:0:2} NetInterface::sStopIp {
2010-09-24 09:10:06.282: [ USRTHRD][1082325312] {0:0:2} Stopping ip ’169.254.x.x’, inf ’eth6’, mask ’10.11.x.x’
2010-09-24 09:10:06.288: [ USRTHRD][1082325312] {0:0:2} NetInterface::sStopIp }
2010-09-24 09:10:06.288: [ USRTHRD][1082325312] {0:0:2} VipActions::stopIp }
2010-09-24 09:10:06.288: [ USRTHRD][1082325312] {0:0:2} Thread:[NetHAWork]stop {
2010-09-24 09:10:06.298: [ USRTHRD][1131239744] {0:0:2} [NetHAWork] thread stopping
2010-09-24 09:10:06.298: [ USRTHRD][1131239744] {0:0:2} Thread:[NetHAWork]isRunning is reset to false here
2010-09-24 09:10:06.298: [ USRTHRD][1082325312] {0:0:2} Thread:[NetHAWork]stop }
2010-09-24 09:10:06.298: [ USRTHRD][1082325312] {0:0:2} VipActions::stopIp {

2010-09-24 09:10:06.298: [ USRTHRD][1082325312] {0:0:2} NetInterface::sStopIp {
2010-09-24 09:10:06.298: [ USRTHRD][1082325312] {0:0:2} Stopping ip ’169.254.x.x’, inf ’eth7’, mask ’10.12.x.x’
2010-09-24 09:10:06.299: [ USRTHRD][1082325312] {0:0:2} NetInterface::sStopIp }
2010-09-24 09:10:06.299: [ USRTHRD][1082325312] {0:0:2} VipActions::stopIp }
2010-09-24 09:10:06.299: [ USRTHRD][1082325312] {0:0:2} Thread:[NetHAWork]stop {
2010-09-24 09:10:06.802: [ USRTHRD][1133340992] {0:0:2} [NetHAWork] thread stopping
2010-09-24 09:10:06.802: [ USRTHRD][1133340992] {0:0:2} Thread:[NetHAWork]isRunning is reset to false here
2010-09-24 09:10:06.802: [ USRTHRD][1082325312] {0:0:2} Thread:[NetHAWork]stop }
2010-09-24 09:10:06.802: [ USRTHRD][1082325312] {0:0:2} VipActions::stopIp {
2010-09-24 09:10:06.802: [ USRTHRD][1082325312] {0:0:2} NetInterface::sStopIp {
2010-09-24 09:10:06.802: [ USRTHRD][1082325312] {0:0:2} Stopping ip ’169.254.x.x’, inf ’eth1’, mask ’10.1.x.x’
2010-09-24 09:10:06.802: [ USRTHRD][1082325312] {0:0:2} NetInterface::sStopIp }
2010-09-24 09:10:06.802: [ USRTHRD][1082325312] {0:0:2} VipActions::stopIp }
2010-09-24 09:10:06.803: [ USRTHRD][1082325312] {0:0:2} USING HAIP[ 0 ]: eth7 - 169.254.112.x
2010-09-24 09:10:06.803: [ USRTHRD][1082325312] {0:0:2} USING HAIP[ 1 ]: eth1 - 169.254.178.x
2010-09-24 09:10:06.803: [ USRTHRD][1082325312] {0:0:2} USING HAIP[ 2 ]: eth7 - 169.254.244.x
2010-09-24 09:10:06.803: [ USRTHRD][1082325312] {0:0:2} USING HAIP[ 3 ]: eth1 - 169.254.30.x

Note: from the above, even though only NIC eth6 failed, there can be multiple virtual private IP movements among the surviving NICs

ocssd.log

2010-09-24 09:09:58.314: [ GIPCNET][1089964352] gipcmodNetworkProcessSend: [network] failed send attempt endp 0xe1b9150 [0000000000000399] { gipcEndpoint : localAddr ’udp://10.11.x.x:60169’, remoteAddr ’’, numPend 5, numReady 1, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, flags 0x2, usrFlags 0x4000 }, req 0x2aaab00117f0 [00000000004b0cae] { gipcSendRequest : addr ’udp://10.11.x.x:41486’, data 0x2aaab0050be8, len 80, olen 0, parentEndp 0xe1b9150, ret gipcretEndpointNotAvailable (40), objFlags 0x0, reqFlags 0x2 }
2010-09-24 09:09:58.314: [ GIPCNET][1089964352] gipcmodNetworkProcessSend: slos op : ValidateSocket
2010-09-24 09:09:58.314: [ GIPCNET][1089964352] gipcmodNetworkProcessSend: slos dep : Invalid argument (22)
2010-09-24 09:09:58.314: [ GIPCNET][1089964352] gipcmodNetworkProcessSend: slos loc : address not
2010-09-24 09:09:58.314: [ GIPCNET][1089964352] gipcmodNetworkProcessSend: slos info: addr ’10.11.x.x:60169’, len 80, buf 0x2aaab0050be8, cookie 0x2aaab00117f0
2010-09-24 09:09:58.314: [GIPCXCPT][1089964352] gipcInternalSendSync: failed sync request, ret gipcretEndpointNotAvailable (40)
2010-09-24 09:09:58.314: [GIPCXCPT][1089964352] gipcSendSyncF [gipchaLowerInternalSend : gipchaLower.c : 755]: EXCEPTION[ ret gipcretEndpointNotAvailable (40) ] failed to send on endp 0xe1b9150 [0000000000000399] { gipcEndpoint : localAddr ’udp://10.11.x.x:60169’, remoteAddr ’’, numPend 5, numReady 0, numDone 0, numDead 0, numTransfer 0, objFlags 0x0, pidPeer 0, flags 0x2, usrFlags 0x4000 }, addr 0xe4e6d10 [00000000000007ed] { gipcAddress : name ’udp://10.11.x.x:41486’, objFlags 0x0, addrFlags 0x1 }, buf 0x2aaab0050be8, len 80, flags 0x0
2010-09-24 09:09:58.314: [GIPCHGEN][1089964352] gipchaInterfaceFail: marking interface failing 0xe2bd5f0 { host ’<node2>’, haName ’CSS_a2b2’, local 0x2aaaac2098e0, ip ’10.11.x.x:41486’, subnet ’10.11.0.128’, mask ’255.255.255.128’, numRef 0, numFail 0, flags 0x6 }
2010-09-24 09:09:58.314: [GIPCHALO][1089964352] gipchaLowerInternalSend: failed to initiate send on interface 0xe2bd5f0 { host ’<node2>’, haName ’CSS_a2b2’, local 0x2aaaac2098e0, ip ’10.11.x.x:41486’, subnet ’10.11.x.x’, mask ’255.255.255.128’, numRef 0, numFail 0, flags 0x86 }, hctx 0xde81d10 [0000000000000010] { gipchaContext : host ’<node1>’, name ’CSS_a2b2’, luid ’4f06f2aa-00000000’, numNode 1, numInf 3, usrFlags 0x0, flags 0x7 }
2010-09-24 09:09:58.326: [GIPCHGEN][1089964352] gipchaInterfaceDisable: disabling interface 0x2aaaac2098e0 { host ’’, haName ’CSS_a2b2’, local (nil), ip ’10.11.x.x’, subnet ’10.11.x.x’, mask ’255.255.255.128’, numRef 0, numFail 1, flags 0x14d }
2010-09-24 09:09:58.326: [GIPCHGEN][1089964352] gipchaInterfaceDisable: disabling interface 0xe2bd5f0 { host ’<node2>’, haName ’CSS_a2b2’, local 0x2aaaac2098e0, ip ’10.11.x.x:41486’, subnet ’10.11.x.x’, mask ’255.255.255.128’, numRef 0, numFail 0, flags 0x86 }
2010-09-24 09:09:58.327: [GIPCHALO][1089964352] gipchaLowerCleanInterfaces: performing cleanup of disabled interface 0xe2bd5f0 { host ’<node2>’, haName ’CSS_a2b2’, local 0x2aaaac2098e0, ip ’10.11.x.x:41486’, subnet ’10.11.x.x’, mask ’255.255.255.128’, numRef 0, numFail 0, flags 0xa6 }
2010-09-24 09:09:58.327: [GIPCHGEN][1089964352] gipchaInterfaceReset: resetting interface 0xe2bd5f0 { host ’<node2>’, haName ’CSS_a2b2’, local 0x2aaaac2098e0, ip ’10.11.x.x:41486’, subnet ’10.11.x.x’, mask ’255.255.255.128’, numRef 0, numFail 0, flags 0xa6 }
2010-09-24 09:09:58.338: [GIPCHDEM][1089964352] gipchaWorkerCleanInterface: performing cleanup of disabled interface 0x2aaaac2098e0 { host ’’, haName ’CSS_a2b2’, local (nil), ip ’10.11.x.x’, subnet ’10.11.x.x’, mask ’255.255.255.128’, numRef 0, numFail 0, flags 0x16d }
2010-09-24 09:09:58.338: [GIPCHTHR][1089964352] gipchaWorkerUpdateInterface: created remote interface for node ’<node2>’, haName ’CSS_a2b2’, inf ’udp://10.11.x.x:41486’
2010-09-24 09:09:58.338: [GIPCHGEN][1089964352] gipchaWorkerAttachInterface: Interface attached inf 0xe2bd5f0 { host ’<node2>’, haName ’CSS_a2b2’, local 0x2aaaac2014f0, ip ’10.11.x.x:41486’, subnet ’10.11.x.x’, mask ’255.255.255.128’, numRef 0, numFail 0, flags 0x6 }
2010-09-24 09:10:00.454: [ CSSD][1108904256]clssnmSendingThread: sending status msg to all nodes

Note: from the above, ocssd.bin will not fail as long as at least one private network adapter is still working

2. Log Sample When Private Network Adapter Restores

In a multiple private network adapter environment, if one of the failed adapters is restored:

ohasd.log

2010-09-24 09:14:30.962: [GIPCHGEN][1083025728]gipchaNodeAddInterface: adding interface information for inf 0x2aaaac1a53d0 { host ’’, haName ’CLSFRAME_a2b2’, local (nil), ip ’10.11.x.x’, subnet ’10.11.x.x’, mask ’255.255.255.128’, numRef 0, numFail 0, flags 0x41 }
2010-09-24 09:14:30.972: [GIPCHTHR][1138145600]gipchaWorkerUpdateInterface: created local bootstrap interface for node ’<node1>’, haName ’CLSFRAME_a2b2’, inf ’mcast://230.0.1.0:42424/10.11.x.x’
2010-09-24 09:14:30.972: [GIPCHTHR][1138145600]gipchaWorkerUpdateInterface: created local interface for node ’<node1>’, haName ’CLSFRAME_a2b2’, inf ’10.11.x.x:13235’

ocssd.log

2010-09-24 09:14:30.961: [GIPCHGEN][1091541312] gipchaNodeAddInterface: adding interface information for inf 0x2aaab005af00 { host ’’, haName ’CSS_a2b2’, local (nil), ip ’10.11.x.x’, subnet ’10.11.x.x’, mask ’255.255.255.128’, numRef 0, numFail 0, flags 0x41 }
2010-09-24 09:14:30.972: [GIPCHTHR][1089964352] gipchaWorkerUpdateInterface: created local bootstrap interface for node ’<node1>’, haName ’CSS_a2b2’, inf ’mcast://230.0.1.0:42424/10.11.x.x’
2010-09-24 09:14:30.972: [GIPCHTHR][1089964352] gipchaWorkerUpdateInterface: created local interface for node ’<node1>’, haName ’CSS_a2b2’, inf ’10.11.x.x:10884’
2010-09-24 09:14:30.972: [GIPCHGEN][1089964352] gipchaNodeAddInterface: adding interface information for inf 0x2aaab0035490 { host ’<node2>’, haName ’CSS_a2b2’, local (nil), ip ’10.21.x.x’, subnet ’10.12.x.x’, mask ’255.255.255.128’, numRef 0, numFail 0, flags 0x42 }
2010-09-24 09:14:30.972: [GIPCHGEN][1089964352] gipchaNodeAddInterface: adding interface information for inf 0x2aaab00355c0 { host ’<node2>’, haName ’CSS_a2b2’, local (nil), ip ’10.11.x.x’, subnet ’10.11.x.x’, mask ’255.255.255.128’, numRef 0, numFail 0, flags 0x42 }
2010-09-24 09:14:30.972: [GIPCHTHR][1089964352] gipchaWorkerUpdateInterface: created remote interface for node ’<node2>’, haName ’CSS_a2b2’, inf ’mcast://230.0.1.0:42424/10.12.x.x’
2010-09-24 09:14:30.972: [GIPCHGEN][1089964352] gipchaWorkerAttachInterface: Interface attached inf 0x2aaab0035490 { host ’<node2>’, haName ’CSS_a2b2’, local 0x2aaab005af00, ip ’10.12.x.x’, subnet ’10.12.x.x’, mask ’255.255.255.128’, numRef 0, numFail 0, flags 0x46 }
2010-09-24 09:14:30.972: [GIPCHTHR][1089964352] gipchaWorkerUpdateInterface: created remote interface for node ’<node2>’, haName ’CSS_a2b2’, inf ’mcast://230.0.1.0:42424/10.11.x.x’
2010-09-24 09:14:30.972: [GIPCHGEN][1089964352] gipchaWorkerAttachInterface: Interface attached inf 0x2aaab00355c0 { host ’<node2>’, haName ’CSS_a2b2’, local 0x2aaab005af00, ip ’10.11.x.x’, subnet ’10.11.x.x’, mask ’255.255.255.128’, numRef 0, numFail 0, flags 0x46 }
2010-09-24 09:14:31.437: [GIPCHGEN][1089964352] gipchaInterfaceDisable: disabling interface 0x2aaab00355c0 { host ’<node2>’, haName ’CSS_a2b2’, local 0x2aaab005af00, ip ’10.11.x.x’, subnet ’10.11.x.x’, mask ’255.255.255.128’, numRef 0, numFail 0, flags 0x46 }
2010-09-24 09:14:31.437: [GIPCHALO][1089964352] gipchaLowerCleanInterfaces: performing cleanup of disabled interface 0x2aaab00355c0 { host ’<node2>’, haName ’CSS_a2b2’, local 0x2aaab005af00, ip ’10.11.x.x’, subnet ’10.11.x.x’, mask ’255.255.255.128’, numRef 0, numFail 0, flags 0x66 }
2010-09-24 09:14:31.446: [GIPCHGEN][1089964352] gipchaInterfaceDisable: disabling interface 0x2aaab0035490 { host ’<node2>’, haName ’CSS_a2b2’, local 0x2aaab005af00, ip ’10.12.x.x’, subnet ’10.12.x.x’, mask ’255.255.255.128’, numRef 0, numFail 0, flags 0x46 }
2010-09-24 09:14:31.446: [GIPCHALO][1089964352] gipchaLowerCleanInterfaces: performing cleanup of disabled interface 0x2aaab0035490 { host ’<node2>’, haName ’CSS_a2b2’, local 0x2aaab005af00, ip ’10.12.x.x’, subnet ’10.12.x.x’, mask ’255.255.255.128’, numRef 0, numFail 0, flags 0x66 }
