DPDK usage discussions
* [dpdk-users] Intel XXV710 SR-IOV packet loss
@ 2019-08-20 15:36 Miroslav Kováč
  2019-08-23 11:49 ` [dpdk-users] [dpdk-dev] " Ferruh Yigit
  0 siblings, 1 reply; 2+ messages in thread
From: Miroslav Kováč @ 2019-08-20 15:36 UTC (permalink / raw)
  To: dev, users; +Cc: Juraj Linkeš, Miroslav Mikluš

Hello,


We are trying a setup with an Intel 25 GbE XXV710 card and SR-IOV. We need SR-IOV to sort packets between the VFs based on VLAN. We are using TRex on one machine to generate packets and multiple VPP instances (each in a Docker container, using one VF) on another one. The TRex machine contains the exact same hardware.


Each VF is configured with one VLAN, spoof checking off, trust on, and a specific MAC address. For example:


vf 0 MAC ba:dc:0f:fe:ed:00, vlan 1537, spoof checking off, link-state auto, trust on
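For reference, a VF configuration like the one above is normally applied on the PF with iproute2; a sketch, with the interface name and values taken from elsewhere in this mail (not independently verified):

```shell
# Sketch: configure VF 0 on the PF as in the example above.
ip link set enp23s0f1 vf 0 mac ba:dc:0f:fe:ed:00
ip link set enp23s0f1 vf 0 vlan 1537
ip link set enp23s0f1 vf 0 spoofchk off trust on

# Verify the resulting "vf 0 ..." line:
ip link show enp23s0f1
```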



We are generating packets with the VFs' destination MACs and the corresponding VLANs. When sending packets to 3 VFs, TRex shows 35 million tx-packets, and the DPDK stats on the TRex machine confirm that 35 million were in fact sent out:


##### DPDK Statistics port0 #####
{
    "tx_good_bytes": 2142835740,
    "tx_good_packets": 35713929,
    "tx_size_64_packets": 35713929,
    "tx_unicast_packets": 35713929
}


rate= '96%'; pktSize=       64; frameLoss%=51.31%; bytesReceived/s=    1112966528.00; totalReceived=   17390102; totalSent=   35713929; frameLoss=   18323827; bytesReceived=    1112966528; targetDuration=1.0
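As a sanity check, the reported loss figure is consistent with the counters above (a quick arithmetic check, not part of the original setup):

```shell
# Cross-check the TRex counters: frameLoss = totalSent - totalReceived.
sent=35713929
received=17390102
loss=$((sent - received))
echo "frameLoss=$loss"
# frameLoss% = 100 * frameLoss / totalSent
awk -v l="$loss" -v s="$sent" 'BEGIN { printf "frameLoss%%=%.2f%%\n", 100 * l / s }'
```

This reproduces the 18,323,827 lost frames and the 51.31% loss rate shown in the TRex line above.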


However, VPP shows only 33 million rx-packets:

VirtualFunctionEthernet17/a/0     2      up          9000/0/0/0
rx packets               5718196
rx bytes               343091760
rx-miss                  5572089

VirtualFunctionEthernet17/a/1     2      up          9000/0/0/0
rx packets               5831396
rx bytes               349883760
rx-miss                  5459089

VirtualFunctionEthernet17/a/2     2      up          9000/0/0/0
rx packets               5840512
rx bytes               350430720
rx-miss                  5449466

The sum of rx packets and rx-miss is 33,870,748, so about 2 million packets are missing.
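The per-VF counters above do add up to that figure (arithmetic check only):

```shell
# Sum the per-VF "rx packets" and "rx-miss" counters from the VPP output above.
rx=$((5718196 + 5831396 + 5840512))
miss=$((5572089 + 5459089 + 5449466))
echo "rx=$rx rx-miss=$miss total=$((rx + miss))"
```

This prints a total of 33,870,748, i.e. roughly 1.8 million short of the 35,713,929 packets sent.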



Even when I check the VF stats I see only 33 million arriving (out of which 9.9 million are rx-missed):


root@protonet:/home/protonet# for f in $(ls /sys/class/net/enp23s0f1/device/sriov/*/stats/rx_packets); do echo "$f: $(cat $f)"; done | grep -v ' 0$'

/sys/class/net/enp23s0f1/device/sriov/0/stats/rx_packets: 11290290
/sys/class/net/enp23s0f1/device/sriov/1/stats/rx_packets: 11290485
/sys/class/net/enp23s0f1/device/sriov/2/stats/rx_packets: 11289978
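Summing the sysfs counters gives the same shortfall (arithmetic check only):

```shell
# Total rx_packets across the three VFs, per the sysfs readings above.
vf_total=$((11290290 + 11290485 + 11289978))
echo "VF rx_packets total=$vf_total missing=$((35713929 - vf_total))"
```

So the VFs together saw 33,870,753 packets, leaving about 1.8 million unaccounted for between the wire and the VFs.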



When increasing the number of VFs, the number of rx-packets in VPP actually decreases. With up to 6 or 7 VFs I still receive somewhere around 28-33 million packets, but with 8 VFs it suddenly drops to 16 million packets (with no rx-miss any more). The same happens in trunk mode:


VirtualFunctionEthernet17/a/0     2      up          9000/0/0/0
rx packets               1959110
rx bytes               117546600


VirtualFunctionEthernet17/a/1     2      up          9000/0/0/0
rx packets               1959181
rx bytes               117550860

VirtualFunctionEthernet17/a/2     2      up          9000/0/0/0
rx packets               1956242
rx bytes               117374520
.
.
.
Each VPP instance receives approximately the same number of packets, about 2 million packets * 8 = 16 million packets out of 35 million sent. Almost 20 million are gone.



We are using the vfio-pci driver.
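For completeness, the VFs are bound roughly like this, using DPDK's devbind helper; the PCI address below is inferred from the VPP interface name VirtualFunctionEthernet17/a/0 and is illustrative, not confirmed:

```shell
# Sketch: bind one VF to vfio-pci (PCI address 0000:17:0a.0 is an assumption).
modprobe vfio-pci
dpdk-devbind.py --bind=vfio-pci 0000:17:0a.0

# Confirm which driver each network device is now bound to:
dpdk-devbind.py --status-dev net
```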


The strange thing is that when I use only the PF, with no SR-IOV VFs enabled, and try the same VPP setup, I can see all 35 million packets come across.


This leads us to believe that there could be something wrong with SR-IOV on the XXV710, but we don't know how to debug this any further. The packets seem to be lost somewhere in the NIC when using SR-IOV, and we don't know of any DPDK or Linux tool that could help us locate the lost packets.


Regards,

Miroslav Kovac


* Re: [dpdk-users] [dpdk-dev] Intel XXV710 SR-IOV packet loss
  2019-08-20 15:36 [dpdk-users] Intel XXV710 SR-IOV packet loss Miroslav Kováč
@ 2019-08-23 11:49 ` " Ferruh Yigit
  0 siblings, 0 replies; 2+ messages in thread
From: Ferruh Yigit @ 2019-08-23 11:49 UTC (permalink / raw)
  To: Miroslav Kováč, dev, users
  Cc: Juraj Linkeš, Miroslav Mikluš, Beilei Xing, Qi Zhang

On 8/20/2019 4:36 PM, Miroslav Kováč wrote:
> [original message quoted in full; trimmed]

+i40e maintainers.




end of thread, back to index

Thread overview: 2+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-08-20 15:36 [dpdk-users] Intel XXV710 SR-IOV packet loss Miroslav Kováč
2019-08-23 11:49 ` [dpdk-users] [dpdk-dev] " Ferruh Yigit
