From: bugzilla@dpdk.org
To: dev@dpdk.org
Date: Wed, 12 May 2021 03:25:23 +0000
Subject: [dpdk-dev] [Bug 702] [dpdk-21.05] perf_vm2vm_virtio_net_perf/test_vm2vm_split_ring_iperf_with_tso: vm can't forward big packets

https://bugs.dpdk.org/show_bug.cgi?id=702

            Bug ID: 702
           Summary: [dpdk-21.05] perf_vm2vm_virtio_net_perf/
                    test_vm2vm_split_ring_iperf_with_tso: vm can't forward
                    big packets
           Product: DPDK
           Version: unspecified
          Hardware: All
                OS: All
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: vhost/virtio
          Assignee: dev@dpdk.org
          Reporter: weix.ling@intel.com
  Target Milestone: ---

Environment

DPDK version: 21.05-rc2 (commit 47a0c2e11712fc5286d6a197d549817ae8f8f50e)
Other software versions: N/A
OS: Ubuntu 20.04.1 LTS / Linux 5.11.6-051106-generic
Compiler: gcc 9.3.0 (Ubuntu 9.3.0-17ubuntu1~20.04)
Hardware platform: Intel(R) Xeon(R) Platinum 8280M CPU @ 2.70GHz
NIC hardware: Intel Corporation Ethernet Controller XL710 for 40GbE QSFP+ (rev 01)
NIC driver & firmware: i40e 5.11.6-051106-generic / firmware 8.30 0x8000a4ae 1.2926.0

Test Setup

Steps to reproduce:

# 1. Bind the NIC ports to DPDK
dpdk-devbind.py --force --bind=vfio-pci 0000:af:00.0 0000:af:00.1

# 2. Build DPDK
CC=gcc meson -Denable_kmods=True -Dlibdir=lib --default-library=static \
  x86_64-native-linuxapp-gcc
ninja -C x86_64-native-linuxapp-gcc

# 3. Start vhost testpmd
x86_64-native-linuxapp-gcc/app/dpdk-testpmd \
  -l 28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111 \
  -n 4 --file-prefix=vhost_49626_20210512103940 --no-pci \
  --vdev 'net_vhost0,iface=/root/dpdk/vhost-net0,queues=1' \
  --vdev 'net_vhost1,iface=/root/dpdk/vhost-net1,queues=1' \
  -- -i --nb-cores=2 --txd=1024 --rxd=1024
start    # in the testpmd prompt
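Note (not in the original report): before launching the guests it may be worth checking that the two vhost-user sockets named by the --vdev arguments in step 3 were actually created on the host:

# Sanity check: both sockets should exist before QEMU connects to them
ls -l /root/dpdk/vhost-net0 /root/dpdk/vhost-net1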
# 4. Start VM0
taskset -c 46,47,48,49,50,51,52,53 /home/QEMU/qemu-4.2.1/bin/qemu-system-x86_64 \
  -name vm0 -enable-kvm -pidfile /tmp/.vm0.pid -daemonize \
  -monitor unix:/tmp/vm0_monitor.sock,server,nowait \
  -netdev user,id=nttsip1,hostfwd=tcp:10.240.183.220:6000-:22 \
  -device e1000,netdev=nttsip1 \
  -chardev socket,id=char0,path=/root/dpdk/vhost-net0 \
  -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
  -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
  -cpu host -smp 8 -m 8192 \
  -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \
  -numa node,memdev=mem -mem-prealloc \
  -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
  -device virtio-serial \
  -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 \
  -vnc :4 -drive file=/home/image/ubuntu2004.img

# 5. Start VM1
taskset -c 102,103,104,105,106,107,108,109 /home/QEMU/qemu-4.2.1/bin/qemu-system-x86_64 \
  -name vm1 -enable-kvm -pidfile /tmp/.vm1.pid -daemonize \
  -monitor unix:/tmp/vm1_monitor.sock,server,nowait \
  -device e1000,netdev=nttsip1 \
  -netdev user,id=nttsip1,hostfwd=tcp:10.240.183.220:6001-:22 \
  -chardev socket,id=char0,path=/root/dpdk/vhost-net1 \
  -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
  -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,mrg_rxbuf=off,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on \
  -cpu host -smp 8 -m 8192 \
  -object memory-backend-file,id=mem,size=8192M,mem-path=/mnt/huge,share=on \
  -numa node,memdev=mem -mem-prealloc \
  -chardev socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 \
  -device virtio-serial \
  -device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.0 \
  -vnc :5 -drive file=/home/image/ubuntu2004_2.img

# 6. Configure IP addresses in VM0 and VM1
ifconfig ens4 up         # in VM0
ifconfig ens4 1.1.1.2    # in VM0
ifconfig ens4 up         # in VM1
ifconfig ens4 1.1.1.3    # in VM1

# 7. Add static ARP entries in VM0 and VM1
arp -s 1.1.1.3 52:54:00:00:00:02    # in VM0
arp -s 1.1.1.2 52:54:00:00:00:01    # in VM1

# 8. Send ICMP packets from VM0 to VM1
ping 1.1.1.3 -c 4

# 9. Use iperf to measure big-packet throughput between VM0 and VM1
iperf -s -i 1                  # in VM0
iperf -c 1.1.1.2 -i 1 -t 60    # in VM1
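Note (not in the original report): since the failing case involves TSO-sized packets, it may help to confirm that the guest virtio-net interfaces actually negotiated segmentation offload before the iperf run; ens4 is the interface name configured in step 6:

# Run in both VM0 and VM1; expect "tcp-segmentation-offload: on"
ethtool -k ens4 | grep tcp-segmentation-offload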
Show the output from the previous commands:

root@vmubuntu2004:~# ping 1.1.1.3 -c 4
PING 1.1.1.3 (1.1.1.3) 56(84) bytes of data.
64 bytes from 1.1.1.3: icmp_seq=1 ttl=64 time=0.049 ms
64 bytes from 1.1.1.3: icmp_seq=2 ttl=64 time=0.062 ms
64 bytes from 1.1.1.3: icmp_seq=3 ttl=64 time=0.061 ms
64 bytes from 1.1.1.3: icmp_seq=4 ttl=64 time=0.061 ms

--- 1.1.1.3 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3061ms
rtt min/avg/max/mdev = 0.049/0.058/0.062/0.005 ms

root@vmubuntu2004:~# iperf -s -i 1
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size:  128 KByte (default)
------------------------------------------------------------
root@vmubuntu2004:~/dpdk# iperf -c 1.1.1.2 -i 1 -t 60

Actual result: the iperf server never reports a connection and the client shows no throughput, i.e. small ICMP packets are forwarded between the VMs but big packets are not.

Expected Result

The iperf session should connect and report throughput, for example:

root@vmubuntu2004:~# iperf -s -i 1
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.240.183.217 port 5001 connected with 10.240.183.213 port 40892
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0- 1.0 sec   112 MBytes   941 Mbits/sec
[  4]  1.0- 2.0 sec   112 MBytes   941 Mbits/sec
[  4]  2.0- 3.0 sec   112 MBytes   941 Mbits/sec
[  4]  3.0- 4.0 sec   112 MBytes   942 Mbits/sec
[  4]  4.0- 5.0 sec   112 MBytes   941 Mbits/sec
[  4]  5.0- 6.0 sec   112 MBytes   942 Mbits/sec

root@vmubuntu2004:~# iperf -c 1.1.1.2 -i 1 -t 60
------------------------------------------------------------
Client connecting to 10.240.183.217, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.240.183.213 port 40892 connected with 10.240.183.217 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0- 1.0 sec   114 MBytes   953 Mbits/sec
[  3]  1.0- 2.0 sec   113 MBytes   947 Mbits/sec
[  3]  2.0- 3.0 sec   112 MBytes   938 Mbits/sec
[  3]  3.0- 4.0 sec   112 MBytes   943 Mbits/sec

Regression

Is this issue a regression: Y

Version the regression was introduced:

commit ca7036b4af3a82d258cca914e71171434b3d0320
Author: David Marchand
Date:   Mon May 3 18:43:44 2021 +0200

    vhost: fix offload flags in Rx path

    The vhost library currently configures Tx offloading (PKT_TX_*) on any
    packet received from a guest virtio device which asks for some
    offloading.

    This is problematic, as Tx offloading is something that the application
    must ask for: the application needs to configure devices to support
    every used offloads (ip, tcp checksumming, tso..), and the various
    l2/l3/l4 lengths must be set following any processing that happened in
    the application itself.

    On the other hand, the received packets are not marked wrt current
    packet l3/l4 checksumming info.

    Copy virtio rx processing to fix those offload flags with some
    differences:
    - accept VIRTIO_NET_HDR_GSO_ECN and VIRTIO_NET_HDR_GSO_UDP,
    - ignore anything but the VIRTIO_NET_HDR_F_NEEDS_CSUM flag (to comply
      with the virtio spec).

    Some applications might rely on the current behavior, so it is left
    untouched by default. A new RTE_VHOST_USER_NET_COMPLIANT_OL_FLAGS flag
    is added to enable the new behavior.

    The vhost example has been updated for the new behavior: TSO is applied
    to any packet marked LRO.

    Fixes: 859b480d5afd ("vhost: add guest offload setting")
    Cc: stable@dpdk.org

    Signed-off-by: David Marchand
    Reviewed-by: Maxime Coquelin
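Note (not in the original report): a sketch of how the bisection result can be double-checked, assuming the DPDK source tree on the host is /root/dpdk (as the vhost socket paths above suggest). Rebuild at the parent of the suspect commit and rerun steps 3-9; the issue should not reproduce there:

cd /root/dpdk
git checkout ca7036b4af3a82d258cca914e71171434b3d0320^
ninja -C x86_64-native-linuxapp-gcc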