* [Bug 1212] [dpdk-21.11.8]pvp_qemu_multi_paths_port_restart:perf_pvp_qemu_vector_rx_mac: performance drop about 23.5% when send small packets
@ 2023-04-07 9:14 bugzilla
2023-04-11 15:53 ` [Bug 1212] [dpdk-20.11.8]pvp_qemu_multi_paths_port_restart:perf_pvp_qemu_vector_rx_mac: " bugzilla
0 siblings, 1 reply; 2+ messages in thread
From: bugzilla @ 2023-04-07 9:14 UTC (permalink / raw)
To: dev
https://bugs.dpdk.org/show_bug.cgi?id=1212
Bug ID: 1212
Summary: [dpdk-21.11.8]pvp_qemu_multi_paths_port_restart:perf_pvp_qemu_vector_rx_mac: performance drop about 23.5% when send small packets
Product: DPDK
Version: 20.11
Hardware: All
OS: All
Status: UNCONFIRMED
Severity: normal
Priority: Normal
Component: vhost/virtio
Assignee: dev@dpdk.org
Reporter: weix.ling@intel.com
Target Milestone: ---
[Environment]
DPDK version: 20.11.8-rc1
Other software versions: QEMU-7.0.0.
OS: Ubuntu 22.04.1 LTS/Linux 5.15.45-051545-generic
Compiler: gcc version 11.3.0 (Ubuntu 11.3.0-1ubuntu1~22.04)
Hardware platform: Intel(R) Xeon(R) Platinum 8280M CPU @ 2.70GHz
NIC hardware: Intel Ethernet Controller XL710 for 40GbE QSFP+ 1583
NIC firmware: i40e-2.22.18/9.20 0x8000d893 1.3353.0
[Test Setup]
Steps to reproduce:
1. Bind one NIC port to vfio-pci:
dpdk-devbind.py --force --bind=vfio-pci 0000:af:00.0
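(Not in the original report.) As a quick sanity check before starting vhost-user, the bind result can be confirmed with the standard devbind status listing; 0000:af:00.0 should appear under the DPDK-compatible driver section:
dpdk-devbind.py --status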
2. Start vhost-user testpmd on the host:
x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 28,29,30 -n 4 -a 0000:af:00.0 \
    --file-prefix=vhost_2352949_20230407162534 \
    --vdev 'net_vhost0,iface=vhost-net,queues=1' \
    -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
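(Not in the original report.) Before launching the VM it can be useful to confirm that the net_vhost0 vdev created its vhost-user socket; the path below assumes testpmd was started from the same directory that the QEMU command later references with path=./vhost-net:
ls -l ./vhost-net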
3. Start VM0 with QEMU 7.0.0:
taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-7.0.0/bin/qemu-system-x86_64 \
    -name vm0 -enable-kvm -pidfile /tmp/.vm0.pid -daemonize \
    -monitor unix:/tmp/vm0_monitor.sock,server,nowait \
    -netdev user,id=nttsip1,hostfwd=tcp:10.239.252.220:6000-:22 \
    -device e1000,netdev=nttsip1 \
    -chardev socket,id=char0,path=./vhost-net \
    -netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce \
    -device virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024 \
    -cpu host -smp 8 -m 16384 \
    -object memory-backend-file,id=mem,size=16384M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem -mem-prealloc \
    -chardev socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 \
    -device virtio-serial \
    -device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 \
    -vnc :4 \
    -drive file=/home/image/ubuntu2004.img
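(Not in the original report.) Step 4 below connects to the guest over SSH; with the hostfwd rule above (tcp:10.239.252.220:6000-:22), the guest sshd is reachable on port 6000 of the host address. The user name is an assumption:
ssh -p 6000 root@10.239.252.220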
4. SSH into VM0 and bind virtio-net to vfio-pci:
echo 0 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
modprobe vfio-pci
echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
dpdk-devbind.py --force --bind=vfio-pci 0000:00:04.0
5. Start testpmd in VM0:
x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -- -i --nb-cores=1 --txd=1024 --rxd=1024
testpmd>set fwd mac
testpmd>start
6. Use pktgen to send packets and record the throughput; a sketch of one possible generator setup follows below.
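The generator command is not shown in the report. A minimal sketch, assuming pktgen-dpdk on a separate tester machine whose port is cabled to 0000:af:00.0; the binary name, core list, and port mapping are illustrative assumptions, 64 is the smallest frame size from the tables below, and the destination MAC is the virtio MAC from step 3:
pktgen -l 1,2,3 -n 4 -- -P -m "[2:3].0"
Pktgen:/> set 0 size 64
Pktgen:/> set 0 dst mac 52:54:00:00:00:01
Pktgen:/> start 0
Pktgen:/> stop 0
The measured rate can be cross-checked on the DUT side with "show port stats all" in both testpmd instances.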
[Actual Result]
+--------------+----------------------+------------------+------------+----------------+
| FrameSize(B) | Mode                 | Throughput(Mpps) | % linerate | Cycle          |
+==============+======================+==================+============+================+
| 64           | virtio0.95 vector_rx | 4.314            | 7.247      | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 128          | virtio0.95 vector_rx | 4.244            | 12.563     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 256          | virtio0.95 vector_rx | 4.576            | 25.262     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 512          | virtio0.95 vector_rx | 3.435            | 36.544     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 1024         | virtio0.95 vector_rx | 2.695            | 56.268     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 1280         | virtio0.95 vector_rx | 2.490            | 64.731     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 1518         | virtio0.95 vector_rx | 2.248            | 69.140     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
[Expected Result]
+--------------+----------------------+------------------+------------+----------------+
| FrameSize(B) | Mode                 | Throughput(Mpps) | % linerate | Cycle          |
+==============+======================+==================+============+================+
| 64           | virtio0.95 vector_rx | 5.642            | 9.478      | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 128          | virtio0.95 vector_rx | 5.493            | 16.259     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 256          | virtio0.95 vector_rx | 5.004            | 27.620     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 512          | virtio0.95 vector_rx | 3.343            | 35.565     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 1024         | virtio0.95 vector_rx | 2.664            | 55.629     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 1280         | virtio0.95 vector_rx | 2.500            | 64.990     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
| 1518         | virtio0.95 vector_rx | 2.309            | 71.028     | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
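For reference, the reported 23.5% figure matches the 64B row of the two tables: (5.642 - 4.314) / 5.642 ≈ 0.235, i.e. roughly a 23.5% drop against the expected throughput. The 128B case shows a similar drop (about 22.7%), while 512B and larger frames stay within a few percent of the expected values.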
[Regression]
Is this issue a regression: Y
Bad commit id:
commit abfe2cb0b40b3ceeb44df642c9d28a06bdfc9fb4 (HEAD)
Author: Luca Boccassi <bluca@debian.org>
Date: Mon Nov 28 14:11:25 2022 +0000
Revert "mempool: fix get objects from mempool with cache"
As requested by the author
This reverts commit 26cb4c81b552594292f7c744afb904f01700dfe8.
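(Not in the original report.) To reproduce the bisection result locally, the suspect revert can be checked out and DPDK rebuilt with the usual meson/ninja flow; the build directory name below simply mirrors the path used in the testpmd commands above:
git checkout abfe2cb0b40b3ceeb44df642c9d28a06bdfc9fb4
meson setup x86_64-native-linuxapp-gcc
ninja -C x86_64-native-linuxapp-gcc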
* [Bug 1212] [dpdk-20.11.8]pvp_qemu_multi_paths_port_restart:perf_pvp_qemu_vector_rx_mac: performance drop about 23.5% when send small packets
2023-04-07 9:14 [Bug 1212] [dpdk-21.11.8]pvp_qemu_multi_paths_port_restart:perf_pvp_qemu_vector_rx_mac: performance drop about 23.5% when send small packets bugzilla
@ 2023-04-11 15:53 ` bugzilla
0 siblings, 0 replies; 2+ messages in thread
From: bugzilla @ 2023-04-11 15:53 UTC (permalink / raw)
To: dev
https://bugs.dpdk.org/show_bug.cgi?id=1212
Luca Boccassi (luca.boccassi@gmail.com) changed:
What       | Removed     | Added
-----------+-------------+----------
Resolution | ---         | FIXED
Status     | UNCONFIRMED | RESOLVED
--- Comment #6 from Luca Boccassi (luca.boccassi@gmail.com) ---
Fixed post-rc1 in 20.11.8