DPDK usage discussions
* [dpdk-users] Failover not working on X520 NIC with ixgbevf driver
@ 2019-11-27 11:37 Greg O'Rawe
  2019-11-30 15:54 ` Laurent Dumont
  0 siblings, 1 reply; 5+ messages in thread
From: Greg O'Rawe @ 2019-11-27 11:37 UTC (permalink / raw)
  To: users

I have the following setup:

*         Virtual environment with Openstack with Intel X520 NIC

*         Hypervisor using ixgbe driver

*         Virtual machine using ixgbevf driver (version 4.6.1) on Red Hat Linux 7.6 running VPP and DPDK 17.11.4

*         VM interfaces are bonded in active-standby mode on ingress and egress
In the normal state everything is fine and the bond interfaces are operational. However, when one of the physical interfaces on the hypervisor is brought down, failover to the standby does not work.

The second interface in each bond does become primary, but the original primary is still reported as UP by VPP. The device stats reported by VPP jump to near-maximum values, and traffic no longer passes through the bond interfaces:

             Name                Idx   Link  Hardware
BondEthernet0                      5     up   Slave-Idx: 1 2
  Ethernet address fa:16:3e:20:2c:ae
  Ethernet Bonding
    carrier up full duplex speed 1000 mtu 1500
    Mode 1
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0

    tx frames ok                                  8589934243
    tx bytes ok                                 137438924646
    rx frames ok                                  8589849574
    rx bytes ok                                 137433171720
    extended stats:
      rx good packets                             8589849574
      tx good packets                             8589934243
      rx good bytes                             137433171720
      tx good bytes                             137438924646

BondEthernet1                      6     up   Slave-Idx: 3 4
  Ethernet address fa:16:3e:f2:3c:af
  Ethernet Bonding
    carrier up full duplex speed 1000 mtu 1500
    Mode 1
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0

    tx frames ok                                  8589934273
    tx bytes ok                                 137438926918
    rx frames ok                                  8589849579
    rx bytes ok                                 137433172132
    extended stats:
      rx good packets                             8589849579
      tx good packets                             8589934273
      rx good bytes                             137433172132
      tx good bytes                             137438926918

device_0/6/0                       1    slave device_0/6/0
  Ethernet address fa:16:3e:20:2c:ae
  Intel 82599 VF
    carrier up full duplex speed 1000 mtu 1500
    Slave UP
    Slave State StandBy
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0

    tx frames ok                                  4294966950
    tx bytes ok                                  68719448136
    rx frames ok                                  4294882284
    rx bytes ok                                  68713695344

device_0/7/0                       2    slave device_0/7/0
  Ethernet address fa:16:3e:20:2c:ae
  Intel 82599 VF
    carrier up full duplex speed 1000 mtu 1500
    Slave UP
    Slave State Primary
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0

    tx frames ok                                  4294967293
    tx bytes ok                                  68719476510
    rx frames ok                                  4294967290
    rx bytes ok                                  68719476376

device_0/8/0                       3    slave device_0/8/0
  Ethernet address fa:16:3e:f2:3c:af
  Intel 82599 VF
    carrier up full duplex speed 1000 mtu 1500
    Slave UP
    Slave State StandBy
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0

    tx frames ok                                  4294966980
    tx bytes ok                                  68719450408
    rx frames ok                                  4294882289
    rx bytes ok                                  68713695756

device_0/9/0                       4    slave device_0/9/0
  Ethernet address fa:16:3e:f2:3c:af
  Intel 82599 VF
    carrier up full duplex speed 1000 mtu 1500
    Slave UP
    Slave State Primary
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0

    tx frames ok                                  4294967293
    tx bytes ok                                  68719476510
    rx frames ok                                  4294967290
    rx bytes ok                                  68719476376
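
A note on the magnitudes above: the per-slave tx/rx counters sit just below 2**32 (e.g. 4294967293 = 2**32 - 3) and the bond totals just below 2**33, which is the usual signature of unsigned counter underflow: if a device reset zeroes the hardware counters, a delta computed as new - old wraps around. A minimal sketch of the arithmetic (illustrative only, not the actual VPP/DPDK stats code):

```python
UINT32_MAX = 2**32 - 1

def delta32(new: int, old: int) -> int:
    """Delta between two 32-bit hardware counter samples, with wraparound."""
    return (new - old) & UINT32_MAX

# Normal operation: the counter advances between samples.
assert delta32(1500, 1000) == 500

# If a device reset zeroes the hardware counter, the new sample is
# smaller than the old one and the unsigned subtraction wraps to a
# value just below 2**32.
assert delta32(2, 5) == 2**32 - 3   # 4294967293, matching the tx stats above
```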

There are no specific errors reported in /var/log/messages on either the VM or the hypervisor.

Any ideas on this issue? Is there a configuration problem, or possibly a change in a later DPDK version which might be relevant?

Thanks

Greg O'Rawe




This message, including attachments, is CONFIDENTIAL. It may also be privileged or otherwise protected by law. If you received this email by mistake please let us know by reply and then delete it from your system; you should not copy it or disclose its contents to anyone. All messages sent to and from Enea may be monitored to ensure compliance with internal policies and to protect our business. Emails are not secure and cannot be guaranteed to be error free as they can be intercepted, amended, lost or destroyed, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the contents of this message, which arise as a result of email transmission. Anyone who communicates with us by email accepts these risks.


* Re: [dpdk-users] Failover not working on X520 NIC with ixgbevf driver
  2019-11-27 11:37 [dpdk-users] Failover not working on X520 NIC with ixgbevf driver Greg O'Rawe
@ 2019-11-30 15:54 ` Laurent Dumont
  2019-12-03 10:27   ` Greg O'Rawe
  0 siblings, 1 reply; 5+ messages in thread
From: Laurent Dumont @ 2019-11-30 15:54 UTC (permalink / raw)
  To: Greg O'Rawe; +Cc: users

Can you show the VF settings on the hypervisor? "ip link show
$SRIOV_INTERFACE_NAME"?

We saw a similar issue with the X710, where the physical link state wasn't properly propagated from the PF to the VM's VF. That meant failover could not happen, since the VM thought the link was still active. We had to change the "link-state" parameter on the two VFs used by the VM.

That said, that was without VPP, just a VM running DPDK.
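
For reference, the VF "link-state" knob mentioned above is set with iproute2 on the hypervisor; the PF name and VF index below are placeholders, so substitute your own (see `man ip-link`):

```shell
# "auto" mirrors the PF's physical link state to the VF (the default);
# "enable"/"disable" force the VF link up or down regardless of the PF.
# For mode-1 failover the VF must actually observe link loss, so "auto"
# is normally what a bonded VF wants.
ip link set dev ens1f1 vf 14 state auto

# Verify the setting:
ip link show ens1f1 | grep 'vf 14'
```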

On Thu, Nov 28, 2019 at 6:52 PM Greg O'Rawe <greg.orawe@enea.com> wrote:

> [quoted message trimmed]


* Re: [dpdk-users] Failover not working on X520 NIC with ixgbevf driver
  2019-11-30 15:54 ` Laurent Dumont
@ 2019-12-03 10:27   ` Greg O'Rawe
  2019-12-05  0:30     ` Laurent Dumont
  0 siblings, 1 reply; 5+ messages in thread
From: Greg O'Rawe @ 2019-12-03 10:27 UTC (permalink / raw)
  To: Laurent Dumont; +Cc: users

Hi,

Thanks for the reply.

Here are the settings on the hypervisor – the link-state is “auto”:

ip link show ens1f1
172: ens1f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP mode DEFAULT group default qlen 1000
    link/ether 90:e2:ba:49:b2:29 brd ff:ff:ff:ff:ff:ff
    vf 0 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 1 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 2 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 3 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 4 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 5 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 6 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 7 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 8 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 9 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 10 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 11 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 12 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 13 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 14 MAC fa:16:3e:db:90:5a, vlan 412, spoof checking on, link-state auto, trust on, query_rss off
    vf 15 MAC fa:16:3e:bd:72:64, vlan 411, spoof checking on, link-state auto, trust on, query_rss off

Thanks
Greg


From: Laurent Dumont <laurentfdumont@gmail.com>
Sent: 30 November 2019 15:55
To: Greg O'Rawe <greg.orawe@enea.com>
Cc: users@dpdk.org
Subject: Re: [dpdk-users] Failover not working on X520 NIC with ixgbevf driver

[quoted messages trimmed]



* Re: [dpdk-users] Failover not working on X520 NIC with ixgbevf driver
  2019-12-03 10:27   ` Greg O'Rawe
@ 2019-12-05  0:30     ` Laurent Dumont
  2019-12-05 10:12       ` Greg O'Rawe
  0 siblings, 1 reply; 5+ messages in thread
From: Laurent Dumont @ 2019-12-05  0:30 UTC (permalink / raw)
  To: Greg O'Rawe; +Cc: users

These look okay to me. Sorry, I don't think I saw the same behavior :(

On Tue, Dec 3, 2019 at 5:27 AM Greg O'Rawe <greg.orawe@enea.com> wrote:

> [quoted thread trimmed]


* Re: [dpdk-users] Failover not working on X520 NIC with ixgbevf driver
  2019-12-05  0:30     ` Laurent Dumont
@ 2019-12-05 10:12       ` Greg O'Rawe
  0 siblings, 0 replies; 5+ messages in thread
From: Greg O'Rawe @ 2019-12-05 10:12 UTC (permalink / raw)
  To: Laurent Dumont; +Cc: users

OK, thanks for clarifying.

From: Laurent Dumont <laurentfdumont@gmail.com>
Sent: 05 December 2019 00:30
To: Greg O'Rawe <greg.orawe@enea.com>
Cc: users@dpdk.org
Subject: Re: [dpdk-users] Failover not working on X520 NIC with ixgbevf driver

These look okay to me. Sorry, I don't think I saw the same behavior :(

On Tue, Dec 3, 2019 at 5:27 AM Greg O'Rawe <greg.orawe@enea.com<mailto:greg.orawe@enea.com>> wrote:
Hi,

Thanks for the reply.

Here are the settings on the hypervisor – the link-state is “auto”:

ip link show ens1f1
172: ens1f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP mode DEFAULT group default qlen 1000
    link/ether 90:e2:ba:49:b2:29 brd ff:ff:ff:ff:ff:ff
    vf 0 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 1 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 2 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 3 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 4 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 5 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 6 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 7 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 8 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 9 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 10 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 11 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 12 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 13 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
    vf 14 MAC fa:16:3e:db:90:5a, vlan 412, spoof checking on, link-state auto, trust on, query_rss off
    vf 15 MAC fa:16:3e:bd:72:64, vlan 411, spoof checking on, link-state auto, trust on, query_rss off
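When a PF carries many VFs like this, it can help to pull out just the per-VF link-state for a quick audit. A throwaway sketch (the regex is written against the iproute2 output format quoted above, which may differ between iproute2 and driver versions):

```python
import re

# Pull the per-VF link-state out of "ip link show <pf>" output.
# The pattern is written against the iproute2 output quoted above;
# field order can differ between iproute2 and driver versions.
VF_LINE = re.compile(r"vf (?P<idx>\d+) .*link-state (?P<state>\w+)")

def vf_link_states(ip_link_output):
    """Return {vf_index: link_state} for every VF line in the text."""
    states = {}
    for line in ip_link_output.splitlines():
        m = VF_LINE.search(line)
        if m:
            states[int(m.group("idx"))] = m.group("state")
    return states

sample = (
    "    vf 0 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on\n"
    "    vf 14 MAC fa:16:3e:db:90:5a, vlan 412, spoof checking on, link-state auto, trust on\n"
)
print(vf_link_states(sample))  # {0: 'auto', 14: 'auto'}
```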

Thanks
Greg


From: Laurent Dumont <laurentfdumont@gmail.com<mailto:laurentfdumont@gmail.com>>
Sent: 30 November 2019 15:55
To: Greg O'Rawe <greg.orawe@enea.com<mailto:greg.orawe@enea.com>>
Cc: users@dpdk.org<mailto:users@dpdk.org>
Subject: Re: [dpdk-users] Failover not working on X520 NIC with ixgbevf driver

Can you show the VF settings on the hypervisor? "ip link show $SRIOV_INTERFACE_NAME"?

We saw a similar issue with the X710, where the physical link state wasn't properly passed from the PF to the VM's VF. That meant failover could not happen, since the VM thought the link was still active. We had to change the "link-state" parameter on the two VFs used by the VM.
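The per-VF link-state is set on the PF from the hypervisor with iproute2; a sketch of the commands in question (the device name and VF index are placeholders, and the commands require root on the hypervisor, so treat this as a fragment):

```shell
# "auto": the VF link state follows the physical link (what failover needs)
ip link set dev ens1f1 vf 14 state auto

# Or force the VF link up/down regardless of the PF:
ip link set dev ens1f1 vf 14 state enable
ip link set dev ens1f1 vf 14 state disable
```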

That said, this was without VPP, just a VM with DPDK enabled.

On Thu, Nov 28, 2019 at 6:52 PM Greg O'Rawe <greg.orawe@enea.com<mailto:greg.orawe@enea.com>> wrote:
I have the following setup:

*         Virtual environment with Openstack with Intel X520 NIC

*         Hypervisor using ixgbe driver

*         Virtual machine using ixgbevf driver (version 4.6.1) on Red Hat Linux 7.6 running VPP and DPDK 17.11.4

*         VM interfaces are bonded in active-standby mode on ingress and egress
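For context, with this DPDK version such a mode-1 (active-backup) bond is typically created through the DPDK bonding PMD configured in VPP's startup.conf; a minimal sketch (the PCI addresses are placeholders, not taken from this deployment, and the exact vdev syntax should be checked against the DPDK 17.11 bonding PMD documentation):

```
dpdk {
  dev 0000:00:06.0
  dev 0000:00:07.0
  # DPDK bonding PMD; mode=1 is active-backup
  vdev net_bond0,mode=1,slave=0000:00:06.0,slave=0000:00:07.0
}
```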
In the normal state everything is fine and the bond interfaces are operational. However, when one of the physical interfaces on the hypervisor is brought down, failover to the standby does not work.

The second interface in each bond does become primary, but the original primary is still reported as UP by VPP. The device stats reported by VPP jump to values just below 2^32 per slave (which suggests counter underflow) and traffic no longer flows through the bond interfaces:

             Name                Idx   Link  Hardware
BondEthernet0                      5     up   Slave-Idx: 1 2
  Ethernet address fa:16:3e:20:2c:ae
  Ethernet Bonding
    carrier up full duplex speed 1000 mtu 1500
    Mode 1
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0

    tx frames ok                                  8589934243
    tx bytes ok                                 137438924646
    rx frames ok                                  8589849574
    rx bytes ok                                 137433171720
    extended stats:
      rx good packets                             8589849574
      tx good packets                             8589934243
      rx good bytes                             137433171720
      tx good bytes                             137438924646

BondEthernet1                      6     up   Slave-Idx: 3 4
  Ethernet address fa:16:3e:f2:3c:af
  Ethernet Bonding
    carrier up full duplex speed 1000 mtu 1500
    Mode 1
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0

    tx frames ok                                  8589934273
    tx bytes ok                                 137438926918
    rx frames ok                                  8589849579
    rx bytes ok                                 137433172132
    extended stats:
      rx good packets                             8589849579
      tx good packets                             8589934273
      rx good bytes                             137433172132
      tx good bytes                             137438926918

device_0/6/0                       1    slave device_0/6/0
  Ethernet address fa:16:3e:20:2c:ae
  Intel 82599 VF
    carrier up full duplex speed 1000 mtu 1500
    Slave UP
    Slave State StandBy
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0

    tx frames ok                                  4294966950
    tx bytes ok                                  68719448136
    rx frames ok                                  4294882284
    rx bytes ok                                  68713695344

device_0/7/0                       2    slave device_0/7/0
  Ethernet address fa:16:3e:20:2c:ae
  Intel 82599 VF
    carrier up full duplex speed 1000 mtu 1500
    Slave UP
    Slave State Primary
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0

    tx frames ok                                  4294967293
    tx bytes ok                                  68719476510
    rx frames ok                                  4294967290
    rx bytes ok                                  68719476376

device_0/8/0                       3    slave device_0/8/0
  Ethernet address fa:16:3e:f2:3c:af
  Intel 82599 VF
    carrier up full duplex speed 1000 mtu 1500
    Slave UP
    Slave State StandBy
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0

    tx frames ok                                  4294966980
    tx bytes ok                                  68719450408
    rx frames ok                                  4294882289
    rx bytes ok                                  68713695756

device_0/9/0                       4    slave device_0/9/0
  Ethernet address fa:16:3e:f2:3c:af
  Intel 82599 VF
    carrier up full duplex speed 1000 mtu 1500
    Slave UP
    Slave State Primary
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0

    tx frames ok                                  4294967293
    tx bytes ok                                  68719476510
    rx frames ok                                  4294967290
    rx bytes ok                                  68719476376

There are no specific errors reported in /var/log/messages on either the VM or the hypervisor.

Any ideas on this issue? Is there a configuration problem, or possibly a change in a later DPDK version which might be relevant?

Thanks

Greg O'Rawe




This message, including attachments, is CONFIDENTIAL. It may also be privileged or otherwise protected by law. If you received this email by mistake please let us know by reply and then delete it from your system; you should not copy it or disclose its contents to anyone. All messages sent to and from Enea may be monitored to ensure compliance with internal policies and to protect our business. Emails are not secure and cannot be guaranteed to be error free as they can be intercepted, amended, lost or destroyed, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the contents of this message, which arise as a result of email transmission. Anyone who communicates with us by email accepts these risks.



Thread overview: 5+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-11-27 11:37 [dpdk-users] Failover not working on X520 NIC with ixgbevf driver Greg O'Rawe
2019-11-30 15:54 ` Laurent Dumont
2019-12-03 10:27   ` Greg O'Rawe
2019-12-05  0:30     ` Laurent Dumont
2019-12-05 10:12       ` Greg O'Rawe
