DPDK patches and discussions
* [Bug 1228] [dpdk-21.11.4]pvp_qemu_multi_paths_port_restart:perf_pvp_qemu_vector_rx_mac: performance drop about 23.5% when send small packets
@ 2023-05-11  8:11 bugzilla
From: bugzilla @ 2023-05-11  8:11 UTC
  To: dev


https://bugs.dpdk.org/show_bug.cgi?id=1228

            Bug ID: 1228
           Summary: [dpdk-21.11.4]pvp_qemu_multi_paths_port_restart:
                    perf_pvp_qemu_vector_rx_mac: performance drop about
                    23.5% when send small packets
           Product: DPDK
           Version: 21.11
          Hardware: All
                OS: All
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: vhost/virtio
          Assignee: dev@dpdk.org
          Reporter: weix.ling@intel.com
  Target Milestone: ---

[Environment]
DPDK version: 21.11.4-rc1
Other software versions: QEMU-7.0.0.
OS: Ubuntu 22.04.1 LTS/Linux 5.15.45-051545-generic
Compiler: gcc version 11.3.0 (Ubuntu 11.3.0-1ubuntu1~22.04)
Hardware platform: Intel(R) Xeon(R) Platinum 8280M CPU @ 2.70GHz
NIC hardware: Intel Ethernet Controller XL710 for 40GbE QSFP+ 1583
NIC firmware: i40e-2.22.18/9.20 0x8000d893 1.3353.0

[Test Setup]
Steps to reproduce:

1. Bind one NIC port to vfio-pci:

dpdk-devbind.py --force --bind=vfio-pci 0000:18:00.0

2. Check the NUMA node of the NIC port:

root@dut220:~# cat /sys/bus/pci/devices/0000\:18\:00.0/numa_node
0

3. Check the lcore layout of the server:

root@dut220:~# /root/dpdk/usertools/cpu_layout.py
======================================================================
Core and Socket Information (as reported by '/sys/devices/system/cpu')
======================================================================

cores =  [0, 1, 2, 3, 4, 5, 6, 8, 9, 10, 11, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 24, 25, 26, 27, 28, 29, 30]
sockets =  [0, 1]

        Socket 0          Socket 1
        --------          --------
Core 0  [0, 56]           [28, 84]
Core 1  [1, 57]           [29, 85]
Core 2  [2, 58]           [30, 86]
Core 3  [3, 59]           [31, 87]
Core 4  [4, 60]           [32, 88]
Core 5  [5, 61]           [33, 89]
Core 6  [6, 62]           [34, 90]
Core 8  [7, 63]           [35, 91]
Core 9  [8, 64]           [36, 92]
Core 10 [9, 65]           [37, 93]
Core 11 [10, 66]          [38, 94]
Core 12 [11, 67]          [39, 95]
Core 13 [12, 68]          [40, 96]
Core 14 [13, 69]          [41, 97]
Core 16 [14, 70]          [42, 98]
Core 17 [15, 71]          [43, 99]
Core 18 [16, 72]          [44, 100]
Core 19 [17, 73]          [45, 101]
Core 20 [18, 74]          [46, 102]
Core 21 [19, 75]          [47, 103]
Core 22 [20, 76]          [48, 104]
Core 24 [21, 77]          [49, 105]
Core 25 [22, 78]          [50, 106]
Core 26 [23, 79]          [51, 107]
Core 27 [24, 80]          [52, 108]
Core 28 [25, 81]          [53, 109]
Core 29 [26, 82]          [54, 110]
Core 30 [27, 83]          [55, 111]


4. Start vhost-user testpmd with lcores on the same NUMA node as the NIC port
(e.g. on socket 0):

x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 18,19 -n 4 -a 0000:18:00.0 
--file-prefix=vhost_2352949_20230407162534  --vdev
'net_vhost0,iface=vhost-net,queues=1' -- -i --nb-cores=1 --txd=1024 --rxd=1024

testpmd>set fwd mac
testpmd>start

5. Start VM0 with QEMU 7.0.0, with lcores on a different NUMA node from the NIC
port (e.g. on socket 1):

taskset -c 30,31,32,33,34,35,36,37 /home/QEMU/qemu-7.0.0/bin/qemu-system-x86_64
 -name vm0 -enable-kvm -pidfile /tmp/.vm0.pid -daemonize -monitor
unix:/tmp/vm0_monitor.sock,server,nowait -netdev
user,id=nttsip1,hostfwd=tcp:10.239.252.220:6000-:22 -device
e1000,netdev=nttsip1  -chardev socket,id=char0,path=./vhost-net -netdev
type=vhost-user,id=netdev0,chardev=char0,vhostforce -device
virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=true,mrg_rxbuf=off,rx_queue_size=1024,tx_queue_size=1024
-cpu host -smp 8 -m 16384 -object
memory-backend-file,id=mem,size=16384M,mem-path=/mnt/huge,share=on -numa
node,memdev=mem -mem-prealloc -chardev
socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial
-device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 -vnc :4
-drive file=/home/image/ubuntu2004.img

6. SSH into VM0 and bind the virtio-net device to vfio-pci:

echo 0 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
echo 1024 >
/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
modprobe vfio-pci
echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
dpdk-devbind.py --force --bind=vfio-pci 0000:00:04.0

7. Start testpmd in VM0:

x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0x3 -n 3 -a
0000:00:04.0,vectorized=1 -- -i --nb-cores=1 --txd=1024 --rxd=1024

testpmd>set fwd mac
testpmd>start

8. Use pktgen to send packets and record the throughput; an illustrative session is sketched below.
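
The report does not include the traffic generator commands. As an illustration
only, a Pktgen-DPDK session on the tester could generate the 64 B stream
roughly as follows; the binary path, core list, and PCI address are assumptions,
not taken from the report:

./app/pktgen -l 1,2,3 -n 4 -a 0000:af:00.0 -- -P -m "[2:3].0"

Pktgen:/> set 0 size 64      # 64-byte frames
Pktgen:/> set 0 rate 100     # 100% of line rate
Pktgen:/> start 0            # start transmitting on port 0
Pktgen:/> stop 0             # stop and note the measured Rx/Tx rate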

Actual result:

+--------------+----------------------+------------------+------------+----------------+
| FrameSize(B) |         Mode         | Throughput(Mpps) | % linerate |     Cycle      |
+==============+======================+==================+============+================+
| 64           | virtio0.95 vector_rx | 4.314            | 7.247      | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
Expected result:

+--------------+----------------------+------------------+------------+----------------+
| FrameSize(B) |         Mode         | Throughput(Mpps) | % linerate |     Cycle      |
+==============+======================+==================+============+================+
| 64           | virtio0.95 vector_rx | 5.642            | 9.478      | Before Restart |
+--------------+----------------------+------------------+------------+----------------+
Regression

Is this issue a regression: Y (64 B throughput drops from 5.642 Mpps to 4.314 Mpps, about 23.5%)

The regression was introduced by the following commit:

commit c41493361c87e730459ead9311c68528eb0874aa (HEAD)
Author: Boleslav Stankevich <boleslav.stankevich@oktetlabs.ru>
Date:   Fri Mar 3 14:19:29 2023 +0300

    net/virtio: deduce IP length for TSO checksum

    [ upstream commit d069c80a5d8c0a05033932421851cdb7159de0df ]

    The length of TSO payload could not fit into 16 bits provided by the
    IPv4 total length and IPv6 payload length fields. Thus, deduce it
    from the length of the packet.

    Fixes: 696573046e9e ("net/virtio: support TSO")

    Signed-off-by: Boleslav Stankevich <boleslav.stankevich@oktetlabs.ru>
    Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
    Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
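
For context, this change derives the TSO payload length from the packet length
instead of the 16-bit IPv4 total length / IPv6 payload length field, which
cannot represent large TSO payloads. A minimal illustrative sketch of that idea
(not the actual patch; the function name is hypothetical) is:

#include <stdint.h>
#include <rte_mbuf.h>

/* Sketch only: everything past the L2 + L3 + L4 headers is TSO payload,
 * so compute it from the mbuf packet length rather than reading the
 * 16-bit IP length field, which cannot hold payloads of 64 KiB or more. */
static inline uint32_t
tso_payload_len(const struct rte_mbuf *m)
{
        return rte_pktmbuf_pkt_len(m) - m->l2_len - m->l3_len - m->l4_len;
}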


