DPDK patches and discussions
From: Ferruh Yigit <ferruh.yigit@intel.com>
To: "Miroslav Kováč" <miroslav.kovac@pantheon.tech>,
	"dev@dpdk.org" <dev@dpdk.org>, "users@dpdk.org" <users@dpdk.org>
Cc: "Juraj Linkeš" <juraj.linkes@pantheon.tech>,
	"Miroslav Mikluš" <miroslav.miklus@pantheon.tech>,
	"Beilei Xing" <beilei.xing@intel.com>,
	"Qi Zhang" <qi.z.zhang@intel.com>
Subject: Re: [dpdk-dev] Intel XXV710 SR-IOV packet loss
Date: Fri, 23 Aug 2019 12:49:36 +0100
Message-ID: <c234f2a0-15f9-f683-4852-89e0b3749a67@intel.com>
In-Reply-To: <51bf2af189834b1d91717a3e8e648885@pantheon.tech>

On 8/20/2019 4:36 PM, Miroslav Kováč wrote:
> Hello,
> 
> 
> We are trying a setup with an Intel 25GbE XXV710 card and SR-IOV. We need SR-IOV to sort packets between the VFs based on VLAN. We are using TRex on one machine to generate packets, and multiple VPP instances (each in a Docker container, each using one VF) on the other. The TRex machine contains the exact same hardware.
> 
> 
> Each VF is configured with one VLAN, spoof checking off, trust on, and a specific MAC address. For example:
> 
> 
> vf 0 MAC ba:dc:0f:fe:ed:00, vlan 1537, spoof checking off, link-state auto, trust on
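> 
> For reference, each VF was configured roughly like this (one such command per VF, using the PF interface name that appears further below):
> 
>     # set the MAC, VLAN filter, spoof checking, and trust flag on VF 0
>     ip link set enp23s0f1 vf 0 mac ba:dc:0f:fe:ed:00 vlan 1537 spoofchk off trust on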
> 
> 
> 
> We are generating packets with the VF destination MACs and the corresponding VLANs. When sending packets to 3 VFs, TRex shows 35 million tx-packets, and the DPDK stats on the TRex machine confirm that 35 million were in fact sent out:
> 
> 
> ##### DPDK Statistics port0 #####
> {
>     "tx_good_bytes": 2142835740,
>     "tx_good_packets": 35713929,
>     "tx_size_64_packets": 35713929,
>     "tx_unicast_packets": 35713929
> }
> 
> 
> rate= '96%'; pktSize=       64; frameLoss%=51.31%; bytesReceived/s=    1112966528.00; totalReceived=   17390102; totalSent=   35713929; frameLoss=   18323827; bytesReceived=    1112966528; targetDuration=1.0
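> 
> For reference, traffic was started from the TRex stateless console along these lines (the profile name here is only a placeholder for our 64-byte VLAN-tagged streams):
> 
>     # start the stream profile at 96% of line rate on port 0
>     start -f stl/vlan_64B.py -m 96% -p 0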
> 
> 
> However, VPP shows only 33 million rx-packets:
> 
> VirtualFunctionEthernet17/a/0     2      up          9000/0/0/0
> rx packets               5718196
> rx bytes               343091760
> rx-miss                  5572089
> 
> VirtualFunctionEthernet17/a/1     2      up          9000/0/0/0
> rx packets               5831396
> rx bytes               349883760
> rx-miss                  5459089
> 
> VirtualFunctionEthernet17/a/2     2      up          9000/0/0/0
> rx packets               5840512
> rx bytes               350430720
> rx-miss                  5449466
> 
> The sum of rx packets and rx-miss is 33,870,748, so about 2 million packets are missing.
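> In detail: the rx packets sum to 5,718,196 + 5,831,396 + 5,840,512 = 17,390,104, the rx-miss counters sum to 5,572,089 + 5,459,089 + 5,449,466 = 16,480,644, and 17,390,104 + 16,480,644 = 33,870,748 against 35,713,929 sent, i.e. 1,843,181 packets unaccounted for.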
> 
> 
> 
> Even when I check the VF stats I see only 33 million coming through (out of which 9.9 million are rx-missed):
> 
> 
> root@protonet:/home/protonet# for f in $(ls /sys/class/net/enp23s0f1/device/sriov/*/stats/rx_packets); do echo "$f: $(cat $f)"; done | grep -v ' 0$'
> 
> /sys/class/net/enp23s0f1/device/sriov/0/stats/rx_packets: 11290290
> /sys/class/net/enp23s0f1/device/sriov/1/stats/rx_packets: 11290485
> /sys/class/net/enp23s0f1/device/sriov/2/stats/rx_packets: 11289978
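> 
> As a cross-check, the per-VF counters can also be read through iproute2, assuming the kernel and i40e driver expose VF statistics there:
> 
>     # with -s, the per-VF lines include RX/TX byte and packet counts
>     ip -s link show enp23s0f1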
> 
> 
> 
> When increasing the number of VFs, the number of rx-packets in VPP actually decreases. With up to 6 or 7 VFs I still receive somewhere around 28-33 million packets, but when I use 8 VFs it suddenly drops to 16 million packets (and there is no rx-miss any more). The same goes for trunk mode:
> 
> 
> VirtualFunctionEthernet17/a/0     2      up          9000/0/0/0
> rx packets               1959110
> rx bytes               117546600
> 
> 
> VirtualFunctionEthernet17/a/1     2      up          9000/0/0/0
> rx packets               1959181
> rx bytes               117550860
> 
> VirtualFunctionEthernet17/a/2     2      up          9000/0/0/0
> rx packets               1956242
> rx bytes               117374520
> ...
> Each VPP instance receives approximately the same number of packets: about 2 million packets * 8 = 16 million packets, out of 35 million sent. Almost 20 million are gone.
> 
> 
> 
> We are using the vfio-pci driver.
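> 
> The VFs were bound roughly like this (the PCI addresses here are inferred from the VPP interface names above, so treat them as illustrative):
> 
>     # bind the VFs to vfio-pci for use by DPDK/VPP
>     modprobe vfio-pci
>     dpdk-devbind.py --bind=vfio-pci 0000:17:0a.0 0000:17:0a.1 0000:17:0a.2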
> 
> 
> The strange thing is that when I use only the PF, with no SR-IOV VFs enabled, and try the same VPP setup, I can see all 35 million packets come across.
> 
> 
> This leads us to believe that there could be something wrong with SR-IOV on the XXV710, but we don't know how to debug it any further. The packets seem to be lost somewhere in the NIC when using SR-IOV, and we don't know of any DPDK or Linux tool that could help us locate them.
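> 
> One thing that can at least be checked is the PF's extended counters (a sketch only; counter names vary across i40e driver versions):
> 
>     # look for drop/miss/error counters on the PF
>     ethtool -S enp23s0f1 | grep -iE 'drop|miss|err'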
> 
> 
> Regards,
> 
> Miroslav Kovac
> 

+i40e maintainers.


