DPDK patches and discussions
* [dpdk-dev] [Bug 702] [dpdk-21.05] perf_vm2vm_virtio_net_perf/test_vm2vm_split_ring_iperf_with_tso: vm can't forward big packets
@ 2021-05-12  3:25 bugzilla
From: bugzilla @ 2021-05-12  3:25 UTC (permalink / raw)
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=702

            Bug ID: 702
           Summary: [dpdk-21.05]
                    perf_vm2vm_virtio_net_perf/test_vm2vm_split_ring_iperf_with_tso:
                    vm can't forward big packets
           Product: DPDK
           Version: unspecified
          Hardware: All
                OS: All
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: vhost/virtio
          Assignee: dev@dpdk.org
          Reporter: weix.ling@intel.com
  Target Milestone: ---

Environment

DPDK version: 21.05-rc2 (commit 47a0c2e11712fc5286d6a197d549817ae8f8f50e)
Other software versions: N/A
OS: Ubuntu 20.04.1 LTS/Linux 5.11.6-051106-generic
Compiler: gcc version 9.3.0 (Ubuntu 9.3.0-17ubuntu1~20.04)
Hardware platform: Intel(R) Xeon(R) Platinum 8280M CPU @ 2.70GHz
NIC hardware: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ (rev
01)
NIC driver & firmware: i40e-5.11.6-051106-generic/8.30 0x8000a4ae 1.2926.0


Test Setup
Steps to reproduce

# 1. Bind NIC to DPDK
dpdk-devbind.py --force --bind=vfio-pci 0000:af:00.0 0000:af:00.1

# 2. Build DPDK
CC=gcc meson -Denable_kmods=True -Dlibdir=lib  --default-library=static
x86_64-native-linuxapp-gcc
ninja -C x86_64-native-linuxapp-gcc

# 3. Start vhost testpmd
x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l
28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111
-n 4   --file-prefix=vhost_49626_20210512103940 --no-pci --vdev
'net_vhost0,iface=/root/dpdk/vhost-net0,queues=1' --vdev
'net_vhost1,iface=/root/dpdk/vhost-net1,queues=1'  -- -i --nb-cores=2
--txd=1024 --rxd=1024

testpmd> start

# 4. Start VM0
taskset -c 46,47,48,49,50,51,52,53 /home/QEMU/qemu-4.2.1/bin/qemu-system-x86_64
 -name vm0 -enable-kvm -pidfile /tmp/.vm0.pid -daemonize -monitor
unix:/tmp/vm0_monitor.sock,server,nowait -netdev
user,id=nttsip1,hostfwd=tcp:10.240.183.220:6000-:22 -device
e1000,netdev=nttsip1  -chardev socket,id=char0,path=/root/dpdk/vhost-net0
-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce -device
virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on
-cpu host -smp 8 -m 8192 -object
memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on -numa
node,memdev=mem -mem-prealloc -chardev
socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial
-device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 -vnc :4
-drive file=/home/image/ubuntu2004.img

# 5. Start VM1
taskset -c 102,103,104,105,106,107,108,109
/home/QEMU/qemu-4.2.1/bin/qemu-system-x86_64  -name vm1 -enable-kvm -pidfile
/tmp/.vm1.pid -daemonize -monitor unix:/tmp/vm1_monitor.sock,server,nowait
-device e1000,netdev=nttsip1  -netdev
user,id=nttsip1,hostfwd=tcp:10.240.183.220:6001-:22 -chardev
socket,id=char0,path=/root/dpdk/vhost-net1 -netdev
type=vhost-user,id=netdev0,chardev=char0,vhostforce -device
virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on
-cpu host -smp 8 -m 8192 -object
memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on -numa
node,memdev=mem -mem-prealloc -chardev
socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial
-device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.0 -vnc :5
-drive file=/home/image/ubuntu2004_2.img

# 6. Configure IP addresses in VM0 and VM1
ifconfig ens4 up
ifconfig ens4 1.1.1.2

ifconfig ens4 up
ifconfig ens4 1.1.1.3

# 7. Add static ARP entries in VM0 and VM1
arp -s 1.1.1.3 52:54:00:00:00:02

arp -s 1.1.1.2 52:54:00:00:00:01

# 8. Send ICMP packets from VM0 to VM1
ping 1.1.1.3 -c 4

# 9. Use iperf to measure big-packet throughput between VM0 and VM1

iperf -s -i 1                   # in VM0

iperf -c 1.1.1.2 -i 1 -t 60     # in VM1


Output from the previous commands:
root@vmubuntu2004:~# ping 1.1.1.3 -c 4
PING 1.1.1.3 (1.1.1.3) 56(84) bytes of data.
64 bytes from 1.1.1.3: icmp_seq=1 ttl=64 time=0.049 ms
64 bytes from 1.1.1.3: icmp_seq=2 ttl=64 time=0.062 ms
64 bytes from 1.1.1.3: icmp_seq=3 ttl=64 time=0.061 ms
64 bytes from 1.1.1.3: icmp_seq=4 ttl=64 time=0.061 ms
--- 1.1.1.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3061ms
rtt min/avg/max/mdev = 0.049/0.058/0.062/0.005 ms
root@vmubuntu2004:~#

root@vmubuntu2004:~# iperf -s -i 1
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size:  128 KByte (default)
------------------------------------------------------------

root@vmubuntu2004:~/dpdk# iperf -c 1.1.1.2 -i 1 -t 60

After the client is started, no connection is reported on either side: the
server shows only its listening banner and the client prints no bandwidth,
i.e. the VMs cannot forward the big packets.


Expected Result
With working packet forwarding, iperf reports sustained throughput between the
VMs, for example:

root@vmubuntu2004:~# iperf -s -i 1
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.240.183.217 port 5001 connected with 10.240.183.213 port 40892
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 1.0 sec   112 MBytes   941 Mbits/sec
[  4]  1.0- 2.0 sec   112 MBytes   941 Mbits/sec
[  4]  2.0- 3.0 sec   112 MBytes   941 Mbits/sec
[  4]  3.0- 4.0 sec   112 MBytes   942 Mbits/sec
[  4]  4.0- 5.0 sec   112 MBytes   941 Mbits/sec
[  4]  5.0- 6.0 sec   112 MBytes   942 Mbits/sec

root@vmubuntu2004:~# iperf -c 1.1.1.2 -i 1 -t 60
------------------------------------------------------------
Client connecting to 10.240.183.217, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.240.183.213 port 40892 connected with 10.240.183.217 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   114 MBytes   953 Mbits/sec
[  3]  1.0- 2.0 sec   113 MBytes   947 Mbits/sec
[  3]  2.0- 3.0 sec   112 MBytes   938 Mbits/sec
[  3]  3.0- 4.0 sec   112 MBytes   943 Mbits/sec


Regression
Is this issue a regression: Y

The regression was introduced by the following commit:

commit ca7036b4af3a82d258cca914e71171434b3d0320
Author: David Marchand <david.marchand@redhat.com>
Date: Mon May 3 18:43:44 2021 +0200

vhost: fix offload flags in Rx path

The vhost library currently configures Tx offloading (PKT_TX_*) on any
packet received from a guest virtio device which asks for some offloading.

This is problematic, as Tx offloading is something that the application
must ask for: the application needs to configure devices
to support every used offloads (ip, tcp checksumming, tso..), and the
various l2/l3/l4 lengths must be set following any processing that
happened in the application itself.

On the other hand, the received packets are not marked wrt current
packet l3/l4 checksumming info.

Copy virtio rx processing to fix those offload flags with some
differences:

- accept VIRTIO_NET_HDR_GSO_ECN and VIRTIO_NET_HDR_GSO_UDP,
- ignore anything but the VIRTIO_NET_HDR_F_NEEDS_CSUM flag (to comply
  with the virtio spec).

Some applications might rely on the current behavior, so it is left
untouched by default. A new RTE_VHOST_USER_NET_COMPLIANT_OL_FLAGS flag
is added to enable the new behavior.

The vhost example has been updated for the new behavior: TSO is applied to
any packet marked LRO.

Fixes: 859b480d5afd ("vhost: add guest offload setting")
Cc: stable@dpdk.org

Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
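
For applications that use librte_vhost directly, the commit above adds the
opt-in RTE_VHOST_USER_NET_COMPLIANT_OL_FLAGS flag. The sketch below only
illustrates where that flag would be applied and is not part of the original
report: the flag name is quoted from the commit message, the socket path is
reused from the vhost testpmd command in step 3, rte_vhost_driver_register()
and rte_vhost_driver_start() are the standard librte_vhost socket APIs, and
the helper name register_compliant_vhost_socket() is hypothetical.

/*
 * Hedged sketch: registering a vhost-user socket with the compliant
 * Rx offload-flag behavior described in the commit message above.
 */
#include <rte_vhost.h>

static int
register_compliant_vhost_socket(void)
{
	/* Socket path reused from the testpmd command in step 3. */
	const char *path = "/root/dpdk/vhost-net0";

	/* Opt in to the fixed Rx offload flags; without this flag the
	 * legacy behavior is kept for backward compatibility. */
	uint64_t flags = RTE_VHOST_USER_NET_COMPLIANT_OL_FLAGS;

	if (rte_vhost_driver_register(path, flags) != 0)
		return -1;

	/* A real application would install callbacks and then call
	 * rte_vhost_driver_start(path). */
	return 0;
}

The flag is deliberately opt-in because, as the commit notes, existing
applications may rely on the legacy offload-flag behavior.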

-- 
You are receiving this mail because:
You are the assignee for the bug.
