From: bugzilla@dpdk.org
To: dev@dpdk.org
Subject: [Bug 1212] [dpdk-21.11.8]pvp_qemu_multi_paths_port_restart:perf_pvp_qemu_vector_rx_mac: performance drop of about 23.5% when sending small packets
Date: Fri, 07 Apr 2023 09:14:56 +0000

https://bugs.dpdk.org/show_bug.cgi?id=1212

Bug ID: 1212
Summary: [dpdk-21.11.8]pvp_qemu_multi_paths_port_restart:perf_pvp_qemu_vector_rx_mac: performance drop of about 23.5% when sending small packets
Product: DPDK
Version: 20.11
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: Normal
Component: vhost/virtio
Assignee: dev@dpdk.org
Reporter: weix.ling@intel.com
Target Milestone: ---

[Environment]

DPDK version: 20.11.8-rc1
Other software versions: QEMU-7.0.0.
OS: Ubuntu 22.04.1 LTS/Linux 5.15.45-051545-generic
Compiler: gcc version 11.3.0 (Ubuntu 11.3.0-1ubuntu1~22.04)
Hardware platform: Intel(R) Xeon(R) Platinum 8280M CPU @ 2.70GHz
NIC hardware: Intel Ethernet Controller XL710 for 40GbE QSFP+ 1583
NIC firmware: i40e-2.22.18/9.20 0x8000d893 1.3353.0

[Test Setup]
Steps to reproduce

1. Bind 1 NIC port to vfio-pci:

dpdk-devbind.py --force --bind=vfio-pci 0000:af:00.0

2. Start vhost-user:

x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 28,29,30 -n 4 -a 0000:af:00.0 \
  --file-prefix=vhost_2352949_20230407162534 --vdev 'net_vhost0,iface=vhost-net,queues=1' \
  -- -i --nb-cores=1 --txd=1024 --rxd=1024

testpmd>set fwd mac
testpmd>start

3. Start VM0 with QEMU-7.0.0:

taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-7.0.0/bin/qemu-system-x86_64 \
  -name vm0 -enable-kvm -pidfile /tmp/.vm0.pid -daemonize \
  -monitor unix:/tmp/vm0_monitor.sock,server,nowait \
  -netdev user,id=nttsip1,hostfwd=tcp:10.239.252.220:6000-:22 -device e1000,netdev=nttsip1 \
  -chardev socket,id=char0,path=./vhost-net \
  -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
  -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
  -cpu host -smp 8 -m 16384 \
  -object memory-backend-file,id=mem,size=16384M,mem-path=/mnt/huge,share=on \
  -numa node,memdev=mem -mem-prealloc \
  -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
  -device virtio-serial -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 \
  -vnc :4 -drive file=/home/image/ubuntu2004.img

4. SSH into VM0 and bind virtio-net to vfio-pci:

echo 0 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
modprobe vfio-pci
echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
dpdk-devbind.py --force --bind=vfio-pci 0000:00:04.0

5. Start testpmd in VM0:

x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -- -i --nb-cores=1 --txd=1024 --rxd=1024

testpmd>set fwd mac
testpmd>start

6. Use pktgen to send packets and record the throughput (an illustrative traffic sketch is shown below).
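
Note: the exact pktgen command line is not captured in this report. As a rough, non-authoritative sketch of the traffic, the Python/Scapy snippet below builds 64-byte frames addressed to the virtio MAC used above; the interface name, IP addresses, and packet count are assumed values, and a real Mpps-level measurement requires a hardware or DPDK-based generator rather than Scapy.

# Illustrative sketch only: 64-byte frames toward the DUT's virtio MAC.
# Interface name, addresses, and count are assumptions, not from this report.
from scapy.all import Ether, IP, UDP, Raw, sendp

dst_mac = "52:54:00:00:00:01"      # virtio-net MAC from the QEMU command in step 3
frame = Ether(dst=dst_mac) / IP(src="1.1.1.1", dst="2.2.2.2") / UDP(sport=1024, dport=1024)
pad = 64 - 4 - len(frame)          # pad so the frame is 64 bytes on the wire incl. 4-byte FCS
frame = frame / Raw(load=b"\x00" * max(pad, 0))

sendp(frame, iface="ens801f0", count=100000, verbose=False)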

Output from the previous commands:

+--------------+----------------------+------------------+------------+----------------+
| FrameSize(B) |         Mode         | Throughput(Mpps) | % linerate |     Cycle      |
+==============+======================+==================+============+================+
| 64           | virtio0.95 vector_rx | 4.314            | 7.247      | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 128          | virtio0.95 vector_rx | 4.244            | 12.563     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 256          | virtio0.95 vector_rx | 4.576            | 25.262     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 512          | virtio0.95 vector_rx | 3.435            | 36.544     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 1024         | virtio0.95 vector_rx | 2.695            | 56.268     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 1280         | virtio0.95 vector_rx | 2.490            | 64.731     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 1518         | virtio0.95 vector_rx | 2.248            | 69.140     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
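
For reference, the % linerate column is consistent with a 40GbE link where every frame also occupies 20 bytes of preamble and inter-frame gap on the wire. A short check (assuming that standard overhead model):

# Recompute % linerate from measured Mpps on a 40 Gbit/s link.
# Assumes 20 bytes of per-frame overhead (7B preamble + 1B SFD + 12B IFG).
LINK_BPS = 40e9

def pct_linerate(frame_size_bytes, mpps):
    max_pps = LINK_BPS / ((frame_size_bytes + 20) * 8)
    return 100.0 * (mpps * 1e6) / max_pps

print(round(pct_linerate(64, 4.314), 2))    # ~7.25, matching the 64B row (7.247)
print(round(pct_linerate(1518, 2.248), 2))  # ~69.1, matching the 1518B row (69.140)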

[Expected Result]
Expected output for the same test:

+--------------+----------------------+------------------+------------+----------------+
| FrameSize(B) |         Mode         | Throughput(Mpps) | % linerate |     Cycle      |
+==============+======================+==================+============+================+
| 64           | virtio0.95 vector_rx | 5.642            | 9.478      | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 128          | virtio0.95 vector_rx | 5.493            | 16.259     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 256          | virtio0.95 vector_rx | 5.004            | 27.620     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 512          | virtio0.95 vector_rx | 3.343            | 35.565     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 1024         | virtio0.95 vector_rx | 2.664            | 55.629     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 1280         | virtio0.95 vector_rx | 2.500            | 64.990     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 1518         | virtio0.95 vector_rx | 2.309            | 71.028     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
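
The 23.5% drop quoted in the summary is consistent with the 64-byte row: throughput fell from the expected 5.642 Mpps to the measured 4.314 Mpps. Using only those two table values:

# Relative throughput drop for 64-byte packets (values taken from the tables above).
expected_mpps = 5.642
measured_mpps = 4.314
drop_pct = 100.0 * (expected_mpps - measured_mpps) / expected_mpps
print(f"{drop_pct:.1f}%")   # 23.5%, the figure quoted in the bug summary
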
[Regression]
Is this issue a regression: (Y/N) Y

Version the regression was introduced: see the bad commit below.

Bad commit id:

commit abfe2cb0b40b3ceeb44df642c9d28a06bdfc9fb4 (HEAD)
Author: Luca Boccassi <bluca@debian.org>
Date:   Mon Nov 28 14:11:25 2022 +0000

    Revert "mempool: fix get objects from mempool with cache"

    As requested by the author

    This reverts commit 26cb4c81b552594292f7c744afb904f01700dfe8.
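
For context, the reverted change touches how objects are taken from a mempool when a per-lcore cache is in use (per the commit title), a path testpmd and vhost hit for every mbuf allocation in each burst, which is why small-packet PVP throughput is sensitive to it. The snippet below is only a simplified, hypothetical model of such a cache-first get path (names and sizes are invented for illustration; it is not the actual DPDK code):

# Hypothetical, simplified model of a per-core object cache in front of a shared
# free-object store; illustrates why get/refill changes affect per-burst cost.
# Not DPDK code; structure and sizes are invented for illustration.
class CachedPool:
    def __init__(self, backing_store, cache_size=512):
        self.backing = backing_store   # stand-in for the shared mempool ring
        self.cache = []                # per-core cache of free objects
        self.cache_size = cache_size

    def get_bulk(self, n):
        # Fast path: serve the whole request from the per-core cache.
        if len(self.cache) >= n:
            objs, self.cache = self.cache[-n:], self.cache[:-n]
            return objs
        # Slow path: refill the cache from the shared store, then serve.
        want = n + self.cache_size - len(self.cache)
        take = min(want, len(self.backing))
        self.cache.extend(self.backing.pop() for _ in range(take))
        if len(self.cache) < n:
            return None                # allocation failure
        objs, self.cache = self.cache[-n:], self.cache[:-n]
        return objs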
          


You are receiving this mail because:
  • You are the assignee for the bug.