I am setting up vPC end-to-end, all the way to my servers, to get more bandwidth. Here is my scenario, and I am seeing something strange. I configured vPC on the N3k switches and configured the Linux servers with an 802.3ad link-aggregation bond, then rebooted a server. So far so good: I can see the correct bonding state in /proc/net/bonding/bond0 and the server starts answering pings, but I noticed packet loss in the ping. Later I found that the switch shows the vPC as down, so how am I getting any pings through at all?
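For context, the packet loss I mention is from a plain ping test run on the server, something like the following (the target address below is only a placeholder for my real gateway). Some of the replies are simply lost while the switch is reporting the vPC as down.

[root@Linux ~]# ping -c 100 10.29.10.1     # placeholder for my real gateway address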
N3k# show vpc 1
vPC status
----------------------------------------------------------------------
id    Port   Status   Consistency   Reason    Active vlans
--    ----   ------   -----------   ------    ------------
134   Po1    down*    success       success   -
Later I did a shut followed by a no shut on Port-Channel 1, and the vPC came up immediately.
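For reference, the shut/no shut is nothing more than this on the switch, run from config mode:

N3k(config)# interface port-channel1
N3k(config-if)# shutdown
N3k(config-if)# no shutdown

Right after that, the vPC shows as up: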
N3k# show vpc 1
vPC status
----------------------------------------------------------------------
id    Port   Status   Consistency   Reason    Active vlans
--    ----   ------   -----------   ------    ------------
131   Po1    up       success       success   10,20,30
My vPC domain configuration:
vpc domain 204
  peer-switch
  role priority 10
  peer-keepalive destination 10.29.0.51 source 10.29.0.50
  auto-recovery
  ip arp synchronize
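For what it's worth, these are the commands I use to sanity-check the vPC domain itself (I'm not pasting their output here):

N3k# show vpc peer-keepalive
N3k# show vpc role
N3k# show vpc consistency-parameters global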
Here is my vPC port configuration:
interface Ethernet1/1
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  spanning-tree port type edge trunk
  spanning-tree bpduguard enable
  speed 10000
  channel-group 1 mode active

interface port-channel1
  switchport mode trunk
  switchport trunk allowed vlan 10,20,30
  speed 10000
  vpc 1
Here is my Linux server configuration:
ifcfg-bond0
NAME=bond0
DEVICE=bond0
BOOTPROTO=none
ONBOOT=yes
BONDING_OPTS="mode=4 miimon=500 downdelay=1000 lacp_rate=1"
NM_CONTROLLED=no
ifcfg-bond0.10
NAME=bond0.10
DEVICE=bond0.10
BOOTPROTO=dhcp
VLAN=yes
ONPARENT=yes
NM_CONTROLLED=no
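The physical member NICs are enslaved with standard ifcfg files along these lines (the name eth0 below is only an example; there is one such file per 10G NIC in the bond):

ifcfg-eth0
NAME=eth0
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
NM_CONTROLLED=no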
Questions:
- How can the server ping at all even though the vPC is down on the switch?
- Why do I need a shut/no shut to bring the vPC up? Is this normal?
- I have 30 servers on the same vPC pair and they all have the same problem; every time I have to go to the switch and do a shut/no shut on the port-channel.
- Am I missing something here?
Update - 1
To test this, I rebooted the server and found that the server came up but the vPC on the switch stayed down, and on the switch I saw the following logs. This is a strange problem.
sw1# show logging | grep "Ethernet1/37"
2018 Jul 9 14:28:13 sw1 %ETHPORT-5-IF_DOWN_INITIALIZING: Interface Ethernet1/37 is down (Initializing)
2018 Jul 9 14:28:13 sw1 %ETH_PORT_CHANNEL-5-PORT_INDIVIDUAL_DOWN: individual port Ethernet1/37 is down
2018 Jul 9 14:28:15 sw1 %ETHPORT-5-IF_DOWN_INITIALIZING: Interface Ethernet1/37 is down (Initializing)
2018 Jul 9 14:28:18 sw1 %ETHPORT-5-SPEED: Interface Ethernet1/37, operational speed changed to 10 Gbps
2018 Jul 9 14:28:18 sw1 %ETHPORT-5-IF_DUPLEX: Interface Ethernet1/37, operational duplex mode changed to Full
2018 Jul 9 14:28:18 sw1 %ETHPORT-5-IF_RX_FLOW_CONTROL: Interface Ethernet1/37, operational Receive Flow Control state changed to off
2018 Jul 9 14:28:18 sw1 %ETHPORT-5-IF_TX_FLOW_CONTROL: Interface Ethernet1/37, operational Transmit Flow Control state changed to off
2018 Jul 9 14:28:28 sw1 %ETH_PORT_CHANNEL-4-PORT_INDIVIDUAL: port Ethernet1/37 is operationally individual
2018 Jul 9 14:28:28 sw1 %ETHPORT-5-IF_UP: Interface Ethernet1/37 is up in mode trunk
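When a server comes back up like this, I also look at the port-channel and LACP state on the switch with commands like these (Ethernet1/37 is the member port from the log above):

sw1# show port-channel summary
sw1# show lacp interface ethernet 1/37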
On the server side I see the following errors:
[root@Linux ~]# tail -f /var/log/messages
Jul 9 10:45:47 s_sys@linux kernel: : [ 321.299960] bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
Jul 9 10:46:11 s_sys@linux kernel: : [ 345.300288] bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
On the Linux server side I see the following:
[root@Linux ~]# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 500
Up Delay (ms): 0
Down Delay (ms): 1000
802.3ad info
LACP rate: fast
Min links: 0
Aggregator selection policy (ad_select): stable
System priority: 65535
System MAC address: 6c:3b:e5:b0:7a:40
Active Aggregator Info:
Aggregator ID: 2
Number of ports: 2
Actor Key: 13
Partner Key: 32883
Partner Mac Address: 00:23:04:ee:be:cc
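To see whether LACPDUs are actually arriving from the switch while the "No 802.3ad response" warnings are being logged, a capture on one of the member NICs works (eth0 is an example name; 0x8809 is the Slow Protocols ethertype that LACP uses):

[root@Linux ~]# tcpdump -i eth0 -e -nn ether proto 0x8809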