DPDK usage discussions
From: Laurent Dumont <laurentfdumont@gmail.com>
To: "Greg O'Rawe" <greg.orawe@enea.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] Failover not working on X520 NIC with ixgbevf driver
Date: Sat, 30 Nov 2019 10:54:53 -0500
Message-ID: <CAOAKi8z7V2Xe0Cx=3MPd_MavGvk957W+=AYJHRTHp5OjFC6Ktw@mail.gmail.com>
In-Reply-To: <AM6PR03MB484088D12FBE07A5A72ECEC1ED440@AM6PR03MB4840.eurprd03.prod.outlook.com>

Can you show the VF settings on the hypervisor, i.e. the output of "ip link
show $SRIOV_INTERFACE_NAME"?
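
For reference, on an ixgbe PF that output includes one line per VF with its
"link-state" setting, roughly like this (interface name and MACs here are
made up):

$ ip link show ens1f0
8: ens1f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 ...
    vf 0 MAC fa:16:3e:00:00:01, spoof checking on, link-state auto
    vf 1 MAC fa:16:3e:00:00:02, spoof checking on, link-state enable

"auto" makes the VF link follow the PF's physical link, "enable" forces the VF
to always report link up regardless of the PF, and "disable" forces it down.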

We saw a similar issue with the X710 where the physical link state wasn't
properly passed from the actual PF to the VM's VF. That meant that failover
could not happen, since the VM thought the link was still active. We had to
change the "link-state" parameter on the two VFs used by the VM.

That said, this was without VPP, just a VM running DPDK.

On Thu, Nov 28, 2019 at 6:52 PM Greg O'Rawe <greg.orawe@enea.com> wrote:

> I have the following setup:
>
> *         Virtual environment with OpenStack and an Intel X520 NIC
>
> *         Hypervisor using the ixgbe driver
>
> *         Virtual machine using the ixgbevf driver (version 4.6.1) on Red Hat
> Linux 7.6, running VPP and DPDK 17.11.4
>
> *         VM interfaces are bonded in active-standby mode on ingress and
> egress (see the configuration sketch below)
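>
> (For reference, this kind of mode-1 bond is typically created through the
> dpdk section of the VPP startup.conf using DPDK's bonding PMD. A minimal
> sketch, with illustrative PCI addresses:)
>
> dpdk {
>   dev 0000:00:06.0
>   dev 0000:00:07.0
>   dev 0000:00:08.0
>   dev 0000:00:09.0
>   vdev eth_bond0,mode=1,slave=0000:00:06.0,slave=0000:00:07.0
>   vdev eth_bond1,mode=1,slave=0000:00:08.0,slave=0000:00:09.0
> }
>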
> In the normal state everything is fine and the bond interfaces are
> operational. However, when one of the physical interfaces on the hypervisor
> is brought down, failover to the standby does not work.
>
> The second interface in each bond does become primary, but the original
> primary is still reported as UP by VPP. The device stats reported by VPP
> change to values around the counter maximum and traffic no longer works
> through the bond interfaces:
>
>              Name                Idx   Link  Hardware
> BondEthernet0                      5     up   Slave-Idx: 1 2
>   Ethernet address fa:16:3e:20:2c:ae
>   Ethernet Bonding
>     carrier up full duplex speed 1000 mtu 1500
>     Mode 1
>     rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
>     cpu socket 0
>
>     tx frames ok                                  8589934243
>     tx bytes ok                                 137438924646
>     rx frames ok                                  8589849574
>     rx bytes ok                                 137433171720
>     extended stats:
>       rx good packets                             8589849574
>       tx good packets                             8589934243
>       rx good bytes                             137433171720
>       tx good bytes                             137438924646
>
> BondEthernet1                      6     up   Slave-Idx: 3 4
>   Ethernet address fa:16:3e:f2:3c:af
>   Ethernet Bonding
>     carrier up full duplex speed 1000 mtu 1500
>     Mode 1
>     rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
>     cpu socket 0
>
>     tx frames ok                                  8589934273
>     tx bytes ok                                 137438926918
>     rx frames ok                                  8589849579
>     rx bytes ok                                 137433172132
>     extended stats:
>       rx good packets                             8589849579
>       tx good packets                             8589934273
>       rx good bytes                             137433172132
>       tx good bytes                             137438926918
>
> device_0/6/0                       1    slave device_0/6/0
>   Ethernet address fa:16:3e:20:2c:ae
>   Intel 82599 VF
>     carrier up full duplex speed 1000 mtu 1500
>     Slave UP
>     Slave State StandBy
>     rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
>     cpu socket 0
>
>     tx frames ok                                  4294966950
>     tx bytes ok                                  68719448136
>     rx frames ok                                  4294882284
>     rx bytes ok                                  68713695344
>
> device_0/7/0                       2    slave device_0/7/0
>   Ethernet address fa:16:3e:20:2c:ae
>   Intel 82599 VF
>     carrier up full duplex speed 1000 mtu 1500
>     Slave UP
>     Slave State Primary
>     rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
>     cpu socket 0
>
>     tx frames ok                                  4294967293
>     tx bytes ok                                  68719476510
>     rx frames ok                                  4294967290
>     rx bytes ok                                  68719476376
>
> device_0/8/0                       3    slave device_0/8/0
>   Ethernet address fa:16:3e:f2:3c:af
>   Intel 82599 VF
>     carrier up full duplex speed 1000 mtu 1500
>     Slave UP
>     Slave State StandBy
>     rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
>     cpu socket 0
>
>     tx frames ok                                  4294966980
>     tx bytes ok                                  68719450408
>     rx frames ok                                  4294882289
>     rx bytes ok                                  68713695756
>
> device_0/9/0                       4    slave device_0/9/0
>   Ethernet address fa:16:3e:f2:3c:af
>   Intel 82599 VF
>     carrier up full duplex speed 1000 mtu 1500
>     Slave UP
>     Slave State Primary
>     rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
>     cpu socket 0
>
>     tx frames ok                                  4294967293
>     tx bytes ok                                  68719476510
>     rx frames ok                                  4294967290
>     rx bytes ok                                  68719476376
>
> There are no specific errors reported in the /var/log/messages files on
> either the VM or the hypervisor.
>
> Any ideas on this issue? Is there a configuration problem, or possibly a
> change in a later DPDK version which might be relevant?
>
> Thanks
>
> Greg O'Rawe
>
>
>
>
> This message, including attachments, is CONFIDENTIAL. It may also be
> privileged or otherwise protected by law. If you received this email by
> mistake please let us know by reply and then delete it from your system;
> you should not copy it or disclose its contents to anyone. All messages
> sent to and from Enea may be monitored to ensure compliance with internal
> policies and to protect our business. Emails are not secure and cannot be
> guaranteed to be error free as they can be intercepted, amended, lost or
> destroyed, or contain viruses. The sender therefore does not accept
> liability for any errors or omissions in the contents of this message,
> which arise as a result of email transmission. Anyone who communicates with
> us by email accepts these risks.
>

Thread overview: 5+ messages
2019-11-27 11:37 Greg O'Rawe
2019-11-30 15:54 ` Laurent Dumont [this message]
2019-12-03 10:27   ` Greg O'Rawe
2019-12-05  0:30     ` Laurent Dumont
2019-12-05 10:12       ` Greg O'Rawe
