From: Laurent Dumont <laurentfdumont@gmail.com>
To: "Greg O'Rawe" <greg.orawe@enea.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] Failover not working on X520 NIC with ixgbevf driver
Date: Wed, 4 Dec 2019 19:30:12 -0500 [thread overview]
Message-ID: <CAOAKi8y6kcn64cq9Zw8v+tkqkQpPjizcDNVrNYM=e=ph5zQf0w@mail.gmail.com> (raw)
In-Reply-To: <VI1PR03MB3677C1E2BEA8A89767A143DEED420@VI1PR03MB3677.eurprd03.prod.outlook.com>
These look okay to me. Sorry, I don't think I saw the same behavior :(
On Tue, Dec 3, 2019 at 5:27 AM Greg O'Rawe <greg.orawe@enea.com> wrote:
> Hi,
>
>
>
> Thanks for the reply.
>
>
>
> Here are the settings on the hypervisor – the link-state is “auto”:
>
>
>
> ip link show ens1f1
>
> 172: ens1f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master ovs-system state UP mode DEFAULT group default qlen 1000
>     link/ether 90:e2:ba:49:b2:29 brd ff:ff:ff:ff:ff:ff
>     vf 0 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
>     vf 1 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
>     vf 2 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
>     vf 3 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
>     vf 4 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
>     vf 5 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
>     vf 6 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
>     vf 7 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
>     vf 8 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
>     vf 9 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
>     vf 10 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
>     vf 11 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
>     vf 12 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
>     vf 13 MAC 00:00:00:00:00:00, spoof checking off, link-state auto, trust on, query_rss off
>     vf 14 MAC fa:16:3e:db:90:5a, vlan 412, spoof checking on, link-state auto, trust on, query_rss off
>     vf 15 MAC fa:16:3e:bd:72:64, vlan 411, spoof checking on, link-state auto, trust on, query_rss off
>
>
>
> Thanks
>
> Greg
>
>
>
>
>
> From: Laurent Dumont <laurentfdumont@gmail.com>
> Sent: 30 November 2019 15:55
> To: Greg O'Rawe <greg.orawe@enea.com>
> Cc: users@dpdk.org
> Subject: Re: [dpdk-users] Failover not working on X520 NIC with ixgbevf driver
>
>
>
> Can you show the VF settings on the hypervisor, i.e. "ip link show
> $SRIOV_INTERFACE_NAME"?
>
>
>
> We saw a similar issue with the X710 where the physical link state wasn't
> properly passed from the actual PF to the VF inside the VM. That meant that
> failover could not happen since the VM thought the link was still active. We
> had to change the "link-state" parameter on the two VFs used by the VM.
>
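> For reference, here is a minimal sketch of how the per-VF link state can be
> changed from the hypervisor with iproute2 (the interface name and VF index
> are placeholders, not necessarily the right ones for this setup):
>
> # force the VF link up regardless of the PF state
> ip link set dev ens1f1 vf 15 state enable
> # force the VF link down
> ip link set dev ens1f1 vf 15 state disable
> # let the VF link follow the physical link of the PF (needed for failover)
> ip link set dev ens1f1 vf 15 state auto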
>
>
> That said, that was without VPP, just a VM with DPDK enabled.
>
>
>
> On Thu, Nov 28, 2019 at 6:52 PM Greg O'Rawe <greg.orawe@enea.com> wrote:
>
> I have the following setup:
>
> * Virtual environment with OpenStack and an Intel X520 NIC
>
> * Hypervisor using ixgbe driver
>
> * Virtual machine using ixgbevf driver (version 4.6.1) on Red Hat
> Linux 7.6 running VPP and DPDK 17.11.4
>
> * VM interfaces are bonded in active-standby mode on ingress and
> egress
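>
> For context, a rough sketch of how such an active-standby bond is typically
> defined for VPP 17.11 via the DPDK bonding PMD in startup.conf (the PCI
> addresses below are placeholders, not the actual ones from this setup):
>
> dpdk {
>   # the two VF devices passed through to the VM
>   dev 0000:00:06.0
>   dev 0000:00:07.0
>   # mode=1 is active-backup; VPP exposes the bond as BondEthernet0
>   vdev eth_bond0,mode=1,slave=0000:00:06.0,slave=0000:00:07.0
> }
>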
> In the normal state everything is fine and the bond interfaces are
> operational. However, when one of the physical interfaces on the hypervisor
> is brought down, failover to the standby does not work.
>
> The second interface in each bond does become primary, but the original
> primary is still reported as UP by VPP. The device stats reported by VPP
> jump to around their maximum values and traffic no longer passes through
> the bond interfaces:
>
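> (The dump below is the kind of per-interface output produced by VPP's
> "show hardware-interfaces" CLI, e.g. via "vppctl show hardware-interfaces";
> the exact command used here is an assumption.)
>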
> Name Idx Link Hardware
> BondEthernet0 5 up Slave-Idx: 1 2
> Ethernet address fa:16:3e:20:2c:ae
> Ethernet Bonding
> carrier up full duplex speed 1000 mtu 1500
> Mode 1
> rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
> cpu socket 0
>
> tx frames ok 8589934243
> tx bytes ok 137438924646
> rx frames ok 8589849574
> rx bytes ok 137433171720
> extended stats:
> rx good packets 8589849574
> tx good packets 8589934243
> rx good bytes 137433171720
> tx good bytes 137438924646
>
> BondEthernet1 6 up Slave-Idx: 3 4
> Ethernet address fa:16:3e:f2:3c:af
> Ethernet Bonding
> carrier up full duplex speed 1000 mtu 1500
> Mode 1
> rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
> cpu socket 0
>
> tx frames ok 8589934273
> tx bytes ok 137438926918
> rx frames ok 8589849579
> rx bytes ok 137433172132
> extended stats:
> rx good packets 8589849579
> tx good packets 8589934273
> rx good bytes 137433172132
> tx good bytes 137438926918
>
> device_0/6/0 1 slave device_0/6/0
> Ethernet address fa:16:3e:20:2c:ae
> Intel 82599 VF
> carrier up full duplex speed 1000 mtu 1500
> Slave UP
> Slave State StandBy
> rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
> cpu socket 0
>
> tx frames ok 4294966950
> tx bytes ok 68719448136
> rx frames ok 4294882284
> rx bytes ok 68713695344
>
> device_0/7/0 2 slave device_0/7/0
> Ethernet address fa:16:3e:20:2c:ae
> Intel 82599 VF
> carrier up full duplex speed 1000 mtu 1500
> Slave UP
> Slave State Primary
> rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
> cpu socket 0
>
> tx frames ok 4294967293
> tx bytes ok 68719476510
> rx frames ok 4294967290
> rx bytes ok 68719476376
>
> device_0/8/0 3 slave device_0/8/0
> Ethernet address fa:16:3e:f2:3c:af
> Intel 82599 VF
> carrier up full duplex speed 1000 mtu 1500
> Slave UP
> Slave State StandBy
> rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
> cpu socket 0
>
> tx frames ok 4294966980
> tx bytes ok 68719450408
> rx frames ok 4294882289
> rx bytes ok 68713695756
>
> device_0/9/0 4 slave device_0/9/0
> Ethernet address fa:16:3e:f2:3c:af
> Intel 82599 VF
> carrier up full duplex speed 1000 mtu 1500
> Slave UP
> Slave State Primary
> rx queues 1, rx desc 1024, tx queues 1, tx desc 4096
> cpu socket 0
>
> tx frames ok 4294967293
> tx bytes ok 68719476510
> rx frames ok 4294967290
> rx bytes ok 68719476376
>
> There are no specific errors reported in the /var/log/messages files on
> either the VM or the hypervisor.
>
> Any ideas on this issue? Is there a configuration problem, or possibly a
> change in a later DPDK version which might be relevant?
>
> Thanks
>
> Greg O'Rawe
>
>
>
>
> This message, including attachments, is CONFIDENTIAL. It may also be
> privileged or otherwise protected by law. If you received this email by
> mistake please let us know by reply and then delete it from your system;
> you should not copy it or disclose its contents to anyone. All messages
> sent to and from Enea may be monitored to ensure compliance with internal
> policies and to protect our business. Emails are not secure and cannot be
> guaranteed to be error free as they can be intercepted, amended, lost or
> destroyed, or contain viruses. The sender therefore does not accept
> liability for any errors or omissions in the contents of this message,
> which arise as a result of email transmission. Anyone who communicates with
> us by email accepts these risks.
>
Thread overview: 5+ messages
2019-11-27 11:37 Greg O'Rawe
2019-11-30 15:54 ` Laurent Dumont
2019-12-03 10:27 ` Greg O'Rawe
2019-12-05 0:30 ` Laurent Dumont [this message]
2019-12-05 10:12 ` Greg O'Rawe