DPDK usage discussions
From: Greg O'Rawe <greg.orawe@enea.com>
To: "users@dpdk.org" <users@dpdk.org>
Subject: [dpdk-users] Failover not working on X520 NIC with ixgbevf driver
Date: Wed, 27 Nov 2019 11:37:59 +0000
Message-ID: <AM6PR03MB484088D12FBE07A5A72ECEC1ED440@AM6PR03MB4840.eurprd03.prod.outlook.com>

I have the following setup:

*         Virtual OpenStack environment with an Intel X520 NIC

*         Hypervisor using the ixgbe driver

*         Virtual machine using the ixgbevf driver (version 4.6.1) on Red Hat Linux 7.6, running VPP with DPDK 17.11.4

*         VM interfaces bonded in active-standby mode on ingress and egress (see the sketch below)
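
For reference, at the DPDK level the bonding setup boils down to something like the following. This is a minimal sketch using the DPDK 17.11 bonding API; the port ids, device name, and helper function are placeholders, and I'm assuming VPP's BondEthernet devices map onto the DPDK bonding PMD:

#include <rte_eth_bond.h>

/* Create a mode-1 (active-backup) bond over two VF ports.  Mode 1
 * matches the "Mode 1" shown in the VPP output below. */
static int create_active_backup_bond(uint16_t slave0, uint16_t slave1)
{
    int bond = rte_eth_bond_create("net_bonding0",
                                   BONDING_MODE_ACTIVE_BACKUP,
                                   0 /* socket id */);
    if (bond < 0)
        return bond;

    if (rte_eth_bond_slave_add(bond, slave0) != 0 ||
        rte_eth_bond_slave_add(bond, slave1) != 0)
        return -1;

    /* slave0 carries traffic until its link drops; slave1 is standby. */
    return rte_eth_bond_primary_set(bond, slave0) == 0 ? bond : -1;
}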

In the normal state everything is fine and the bond interfaces are operational. However, when one of the physical interfaces on the hypervisor is brought down, failover to the standby does not work.

The second interface in each bond does become primary, but the original primary is still reported as UP by VPP. The device stats reported by VPP jump to values near the 32-bit counter maximum (see the arithmetic note after the output), and traffic no longer flows through the bond interfaces:

             Name                Idx   Link  Hardware
BondEthernet0                      5     up   Slave-Idx: 1 2
  Ethernet address fa:16:3e:20:2c:ae
  Ethernet Bonding
    carrier up full duplex speed 1000 mtu 1500
    Mode 1
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0

    tx frames ok                                  8589934243
    tx bytes ok                                 137438924646
    rx frames ok                                  8589849574
    rx bytes ok                                 137433171720
    extended stats:
      rx good packets                             8589849574
      tx good packets                             8589934243
      rx good bytes                             137433171720
      tx good bytes                             137438924646

BondEthernet1                      6     up   Slave-Idx: 3 4
  Ethernet address fa:16:3e:f2:3c:af
  Ethernet Bonding
    carrier up full duplex speed 1000 mtu 1500
    Mode 1
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0

    tx frames ok                                  8589934273
    tx bytes ok                                 137438926918
    rx frames ok                                  8589849579
    rx bytes ok                                 137433172132
    extended stats:
      rx good packets                             8589849579
      tx good packets                             8589934273
      rx good bytes                             137433172132
      tx good bytes                             137438926918

device_0/6/0                       1    slave device_0/6/0
  Ethernet address fa:16:3e:20:2c:ae
  Intel 82599 VF
    carrier up full duplex speed 1000 mtu 1500
    Slave UP
    Slave State StandBy
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0

    tx frames ok                                  4294966950
    tx bytes ok                                  68719448136
    rx frames ok                                  4294882284
    rx bytes ok                                  68713695344

device_0/7/0                       2    slave device_0/7/0
  Ethernet address fa:16:3e:20:2c:ae
  Intel 82599 VF
    carrier up full duplex speed 1000 mtu 1500
    Slave UP
    Slave State Primary
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0

    tx frames ok                                  4294967293
    tx bytes ok                                  68719476510
    rx frames ok                                  4294967290
    rx bytes ok                                  68719476376

device_0/8/0                       3    slave device_0/8/0
  Ethernet address fa:16:3e:f2:3c:af
  Intel 82599 VF
    carrier up full duplex speed 1000 mtu 1500
    Slave UP
    Slave State StandBy
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0

    tx frames ok                                  4294966980
    tx bytes ok                                  68719450408
    rx frames ok                                  4294882289
    rx bytes ok                                  68713695756

device_0/9/0                       4    slave device_0/9/0
  Ethernet address fa:16:3e:f2:3c:af
  Intel 82599 VF
    carrier up full duplex speed 1000 mtu 1500
    Slave UP
    Slave State Primary
    rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
    cpu socket 0

    tx frames ok                                  4294967293
    tx bytes ok                                  68719476510
    rx frames ok                                  4294967290
    rx bytes ok                                  68719476376
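
A note on the numbers above, since "near maximum" is vague: 4294967293 is 2^32 - 3, and each bond counter is exactly the sum of its two slaves' counters (4294966950 + 4294967293 = 8589934243). That pattern looks like per-slave counters underflowing and wrapping at 32 bits. Below is a minimal C illustration of that suspicion; the offset logic is my assumption about the cause, not taken from the ixgbevf driver source:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* If a driver computes "packets = hw_counter - offset" and the
     * hardware counter resets (e.g. across a VF reset) while the saved
     * offset does not, the unsigned subtraction wraps at 32 bits. */
    uint32_t hw_counter = 0;  /* counter reset to zero */
    uint32_t offset     = 3;  /* snapshot taken before the reset */

    printf("%u\n", hw_counter - offset);   /* 4294967293 == 2^32 - 3 */

    /* The bond device then sums the two wrapped slave counters: */
    uint64_t bond_tx = (uint64_t)4294966950u + 4294967293u;
    printf("%" PRIu64 "\n", bond_tx);      /* 8589934243, as reported */
    return 0;
}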

No specific errors are reported in /var/log/messages on either the VM or the hypervisor.

Any ideas on this issue? Is there a configuration problem here, or possibly a change in a later DPDK version that might be relevant?
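
In case it helps frame the question: my understanding is that the slave link state ultimately comes from the ethdev link API, roughly as below, so "still reported as UP" would mean the VF keeps reporting link up after the PF port goes down. This is a sketch of my assumption about where the state comes from, not VPP's actual code:

#include <stdio.h>
#include <rte_ethdev.h>

/* Print the link state of one slave port as the PMD reports it. */
static void dump_slave_link(uint16_t port_id)
{
    struct rte_eth_link link;

    rte_eth_link_get_nowait(port_id, &link);
    printf("port %u: link %s, %u Mbps, %s-duplex\n", port_id,
           link.link_status ? "up" : "down",
           link.link_speed,
           link.link_duplex == ETH_LINK_FULL_DUPLEX ? "full" : "half");
}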

Thanks

Greg O'Rawe

Thread overview: 5+ messages
2019-11-27 11:37 Greg O'Rawe [this message]
2019-11-30 15:54 ` Laurent Dumont
2019-12-03 10:27   ` Greg O'Rawe
2019-12-05  0:30     ` Laurent Dumont
2019-12-05 10:12       ` Greg O'Rawe
