From: bugzilla@dpdk.org
To: dev@dpdk.org
Date: Mon, 09 Aug 2021 01:48:26 +0000
Subject: [dpdk-dev] [Bug 782] [dpdk-20.11] packed ring loopback large pkts test can't fwd packets correctly after vhost relaunching

https://bugs.dpdk.org/show_bug.cgi?id=782

            Bug ID: 782
           Summary: [dpdk-20.11] packed ring loopback large pkts test
                    can't fwd packets correctly after vhost relaunching
           Product: DPDK
           Version: 21.11
          Hardware: All
                OS: All
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: vhost/virtio
          Assignee: dev@dpdk.org
          Reporter: weix.ling@intel.com
  Target Milestone: ---

Environment

DPDK version: DPDK-20.11.0 (b1d36cf828771e28eb0130b59dcf606c2a0bc94d)
Other software versions: N/A
OS: Ubuntu 20.04.2 LTS / Linux 5.4.0-42-generic
Compiler: gcc 9.3.0 (Ubuntu 9.3.0-17ubuntu1~20.04)
Hardware platform: Intel(R) Xeon(R) Platinum 8280M CPU @ 2.70GHz
NIC hardware: FVL-40G
NIC firmware & driver:
    driver: i40e
    version: 5.11.16-051116-generic
    firmware-version: 8.30 0x8000a4ae 1.2926.0

Test Setup

Steps to reproduce the issue:

1. Launch vhost:

   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 32-33 -n 4 --no-pci --vdev 'eth_vhost0,iface=vhost-net0,queues=1,client=1' -- -i --nb-cores=1 --rxq=1 --txq=1 --txd=1024 --rxd=1024
   set fwd csum
   start

2. Launch virtio-user, send large pkts and check the loopback performance:

   x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 --file-prefix=testpmd0 --no-pci --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,mrg_rxbuf=1,in_order=0,packed_vq=1,server=1 -- -i --nb-cores=1 --rxq=1 --txq=1 --txd=1024 --rxd=1024
   set fwd csum
   set txpkts 2000,2000,2000,2000
   start tx_first 32
   show port stats all    # gets ~0.7 Mpps

   testpmd> show port stats all

   ######################## NIC statistics for port 0  ########################
   RX-packets: 7131896    RX-missed: 0          RX-bytes:  57055200000
   RX-errors: 0
   RX-nombuf:  0
   TX-packets: 7132472    TX-errors: 0          TX-bytes:  57059776000

   Throughput (since last show)
   Rx-pps:       772402          Rx-bps:  49434132192
   Tx-pps:       772402          Tx-bps:  49433768720
   ############################################################################

3. Quit vhost and relaunch it with the same cmd:

   ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 32-33 -n 4 --no-pci --vdev 'eth_vhost0,iface=vhost-net0,queues=1,client=1' -- -i --nb-cores=1 --rxq=1 --txq=1 --txd=1024 --rxd=1024
   set fwd csum
   start

4. Stop virtio-user, re-send large pkts and check the loopback performance:

   stop
   set fwd csum
   set txpkts 2000,2000,2000,2000
   start tx_first 32
   show port stats all    # gets ~1.9 Mpps; the pkt payload is suspected to be incorrect

   ######################## NIC statistics for port 0  ########################
   RX-packets: 25519650   RX-missed: 0          RX-bytes:  79521868024
   RX-errors: 110
   RX-nombuf:  0
   TX-packets: 25520795   TX-errors: 0          TX-bytes:  79531070864

   Throughput (since last show)
   Rx-pps:      1891019          Rx-bps:  30921946440
   Tx-pps:      1891019          Tx-bps:  30921946440
   ############################################################################

Expected Result

After the vhost relaunch, the loopback throughput is the same as before the relaunch.

Regression

Is this issue a regression: Y
Version the regression was introduced (bad commit):

commit 9af79db20f4cf75
Author: Maxime Coquelin
Date:   Tue Jan 26 11:16:32 2021 +0100

    net/virtio: make server mode blocking

    This patch makes the Vhost-user backend server mode blocking at init,
    waiting for the client connection. The goal is to make the driver more
    reliable, as without waiting for client connection, the Virtio driver
    has to assume the Vhost-user backend will support all the features it
    has advertised, which could lead to undefined behaviour. For example,
    without this patch, if the user enables the packed ring Virtio feature
    but the backend does not support it, the ring initialized by the
    driver will not be compatible with the backend.

    Signed-off-by: Maxime Coquelin
    Reviewed-by: Chenbo Xia
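
For illustration only, below is a minimal, self-contained C sketch of the feature-negotiation hazard the commit message describes. It is not DPDK code: every hyp_* name is invented, and only the packed ring feature bit (VIRTIO_F_RING_PACKED, bit 34 in the Virtio spec) corresponds to a real definition. The point is that the driver should not fix a feature such as packed ring before the vhost-user peer has connected and reported what it actually supports.

/*
 * Hypothetical sketch (not DPDK code) of the hazard described in commit
 * 9af79db20f4cf75: if the driver finalizes its feature set before the
 * vhost-user backend has connected and reported its own features, a
 * feature such as packed ring may be assumed even though the backend
 * cannot support it.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define HYP_F_RING_PACKED (1ULL << 34)  /* VIRTIO_F_RING_PACKED feature bit */

struct hyp_backend {
    bool     connected;   /* has the vhost-user client connected yet?  */
    uint64_t features;    /* features the backend actually supports    */
};

/* Block at init until the backend client has connected, as the patch does. */
static void hyp_wait_for_client(const struct hyp_backend *be)
{
    while (!be->connected)
        ;   /* in real code this would block on the vhost-user socket */
}

/* Negotiate: keep only the feature bits both sides support. */
static uint64_t hyp_negotiate(uint64_t driver_req, const struct hyp_backend *be)
{
    return driver_req & be->features;
}

int main(void)
{
    /* Hypothetical backend that is connected but has no packed ring support. */
    struct hyp_backend be = { .connected = true, .features = 0 };
    uint64_t driver_req = HYP_F_RING_PACKED;

    /* Without blocking at init, the driver would use driver_req directly
     * and build a packed ring the backend cannot parse. */
    hyp_wait_for_client(&be);
    uint64_t negotiated = hyp_negotiate(driver_req, &be);

    printf("packed ring %s\n",
           (negotiated & HYP_F_RING_PACKED) ? "negotiated" : "not negotiated");
    return 0;
}

Compiled with gcc and run, the sketch prints "packed ring not negotiated" because the hypothetical backend does not advertise the bit; skipping the wait and using driver_req directly would instead leave the driver with a ring layout the backend cannot handle, which is the kind of mismatch the commit message warns about.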