DPDK patches and discussions
* [Bug 1043] [dpdk-22.07]vm2vm_virtio_net_perf_cbdma/vm2vm_split_ring_iperf_with_tso_and_cbdma_enable: iperf test no data between 2 VMs
From: bugzilla @ 2022-06-28  9:04 UTC
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=1043

            Bug ID: 1043
           Summary: [dpdk-22.07]vm2vm_virtio_net_perf_cbdma/vm2vm_split_ri
                    ng_iperf_with_tso_and_cbdma_enable: iperf test no data
                    between 2 VMs
           Product: DPDK
           Version: 22.03
          Hardware: All
                OS: All
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: vhost/virtio
          Assignee: dev@dpdk.org
          Reporter: weix.ling@intel.com
  Target Milestone: ---

[Environment]

DPDK version: v22.07-rc2 (non-released build from git)

commit 7cac53f205ebd04d8ebd3ee6a9dd84f698d4ada3 (HEAD -> main, tag: v22.07-rc2,
origin/main, origin/HEAD)
Author: Thomas Monjalon <thomas@monjalon.net>
Date:   Mon Jun 27 04:03:44 2022 +0200

    version: 22.07-rc2

    Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Other software versions: QEMU-7.0.0
OS: Ubuntu 22.04 LTS/Linux 5.15.45-051545-generic
Compiler: gcc version 11.2.0 (Ubuntu 11.2.0-19ubuntu1)
Hardware platform:  Intel(R) Xeon(R) Platinum 8280M CPU @ 2.70GHz
NIC hardware: N/A
NIC firmware: N/A

[Test Setup]
Steps to reproduce:

1. Bind 2 CBDMA channels to vfio-pci:

dpdk-devbind.py --force --bind=vfio-pci 0000:80:04.0 0000:80:04.1
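
To confirm the binding took effect (a minimal check; the exact section
heading in the output varies across DPDK versions), both channels should
show up as bound to vfio-pci:

dpdk-devbind.py --status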

2. Start vhost-testpmd:

x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 28-36 -n 4 -a 0000:80:04.0 -a
0000:80:04.1 --file-prefix=vhost_247798_20220628141037   --vdev
'net_vhost0,iface=vhost-net0,queues=1,tso=1,dmas=[txq0;rxq0]' --vdev
'net_vhost1,iface=vhost-net1,queues=1,tso=1,dmas=[txq0;rxq0]' --iova=va -- -i
--nb-cores=2 --txd=1024 --rxd=1024 --txq=1 --rxq=1
--lcore-dma=[lcore29@0000:80:04.0,lcore30@0000:80:04.1]

testpmd> start
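
To verify the two vhost ports registered correctly before launching the
VMs, port statistics can be inspected at the same prompt (a standard
testpmd sanity check, shown here as a sketch):

testpmd> show port stats all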

3. Start VM0:

taskset -c 20,21,22,23,24,25,26,27 /home/QEMU/qemu-7.0.0/bin/qemu-system-x86_64
 -name vm0 -enable-kvm -pidfile /tmp/.vm0.pid -daemonize -monitor
unix:/tmp/vm0_monitor.sock,server,nowait -netdev
user,id=nttsip1,hostfwd=tcp:10.239.252.220:6000-:22 -device
e1000,netdev=nttsip1  -chardev socket,id=char0,path=/root/dpdk/vhost-net0
-netdev type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=1 -device
virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:01,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on
-cpu host -smp 8 -m 16384 -object
memory-backend-file,id=mem,size=16384M,mem-path=/mnt/huge,share=on -numa
node,memdev=mem -mem-prealloc -chardev
socket,path=/tmp/vm0_qga0.sock,server,nowait,id=vm0_qga0 -device virtio-serial
-device virtserialport,chardev=vm0_qga0,name=org.qemu.guest_agent.0 -vnc :4
-drive file=/home/image/ubuntu2004.img

4. Start VM1:

taskset -c 48,49,50,51,52,53,54,55 /home/QEMU/qemu-7.0.0/bin/qemu-system-x86_64
 -name vm1 -enable-kvm -pidfile /tmp/.vm1.pid -daemonize -monitor
unix:/tmp/vm1_monitor.sock,server,nowait -device e1000,netdev=nttsip1  -netdev
user,id=nttsip1,hostfwd=tcp:10.239.252.220:6001-:22 -chardev
socket,id=char0,path=/root/dpdk/vhost-net1 -netdev
type=vhost-user,id=netdev0,chardev=char0,vhostforce,queues=1 -device
virtio-net-pci,netdev=netdev0,mac=52:54:00:00:00:02,disable-modern=false,mrg_rxbuf=on,csum=on,guest_csum=on,host_tso4=on,guest_tso4=on,guest_ecn=on
-cpu host -smp 8 -m 16384 -object
memory-backend-file,id=mem,size=16384M,mem-path=/mnt/huge,share=on -numa
node,memdev=mem -mem-prealloc -chardev
socket,path=/tmp/vm1_qga0.sock,server,nowait,id=vm1_qga0 -device virtio-serial
-device virtserialport,chardev=vm1_qga0,name=org.qemu.guest_agent.0 -vnc :5
-drive file=/home/image/ubuntu2004_2.img
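
To confirm both guests actually booted and connected to their vhost-user
sockets, the QEMU monitors can be queried; this sketch assumes socat is
available on the host (any UNIX-socket client works), and a "running"
state indicates the guest came up:

echo "info status" | socat - unix-connect:/tmp/vm0_monitor.sock
echo "info status" | socat - unix-connect:/tmp/vm1_monitor.sock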

5. SSH into VM0 and VM1 to configure IP addresses:

[VM0]
ssh root@10.239.252.220 -p 6000
ifconfig ens4 1.1.1.1

[VM1]
ssh root@10.239.252.220 -p 6001
ifconfig ens4 1.1.1.2
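
Before starting iperf, basic reachability between the guests is worth a
check; note that the interface name ens4 is what these guests happened to
enumerate and may differ:

[VM1]
ping -c 3 1.1.1.1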

6. Use iperf to test traffic between the 2 VMs:

[VM0]
iperf -s -i 1

[VM1]
iperf -c 1.1.1.1 -i 1 -t 60 
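
If the transfer shows no data, a capture on the server side helps locate
where traffic stops; this is a diagnostic sketch and assumes tcpdump is
installed in the guest (iperf listens on TCP port 5001 by default):

[VM0]
tcpdump -i ens4 -nn tcp port 5001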


[Actual Result]

There is no data transferred between the two VMs; iperf shows no output.


[Expected Result]

Example of the expected output:

root@virtiovm:~# iperf -c 1.1.1.1 -i 1 -t 60
------------------------------------------------------------
Client connecting to 1.1.1.1, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  1] local 1.1.1.2 port 42240 connected with 1.1.1.1 port 5001
[ ID] Interval       Transfer     Bandwidth
[  1] 0.0000-1.0000 sec  2.45 GBytes  21.0 Gbits/sec
[  1] 1.0000-2.0000 sec  2.42 GBytes  20.8 Gbits/sec
[  1] 2.0000-3.0000 sec  2.39 GBytes  20.6 Gbits/sec
[  1] 3.0000-4.0000 sec  2.41 GBytes  20.7 Gbits/sec
[  1] 4.0000-5.0000 sec  2.33 GBytes  20.0 Gbits/sec
[  1] 5.0000-6.0000 sec  2.40 GBytes  20.6 Gbits/sec
[  1] 6.0000-7.0000 sec  2.39 GBytes  20.6 Gbits/sec
[  1] 7.0000-8.0000 sec  2.41 GBytes  20.7 Gbits/sec
[  1] 8.0000-9.0000 sec  2.35 GBytes  20.2 Gbits/sec
[  1] 9.0000-10.0000 sec  2.40 GBytes  20.6 Gbits/sec
[  1] 10.0000-11.0000 sec  2.37 GBytes  20.4 Gbits/sec
[  1] 11.0000-12.0000 sec  2.41 GBytes  20.7 Gbits/sec
[  1] 12.0000-13.0000 sec  2.42 GBytes  20.7 Gbits/sec
[  1] 13.0000-14.0000 sec  2.41 GBytes  20.7 Gbits/sec
[  1] 14.0000-15.0000 sec  2.40 GBytes  20.6 Gbits/sec
[  1] 15.0000-16.0000 sec  2.40 GBytes  20.6 Gbits/sec
[  1] 16.0000-17.0000 sec  2.54 GBytes  21.8 Gbits/sec
[  1] 17.0000-18.0000 sec  2.41 GBytes  20.7 Gbits/sec
[  1] 18.0000-19.0000 sec  2.39 GBytes  20.6 Gbits/sec
[  1] 19.0000-20.0000 sec  2.40 GBytes  20.6 Gbits/sec
[  1] 20.0000-21.0000 sec  2.40 GBytes  20.6 Gbits/sec
[  1] 21.0000-22.0000 sec  2.42 GBytes  20.8 Gbits/sec
[  1] 22.0000-23.0000 sec  2.51 GBytes  21.6 Gbits/sec
[  1] 23.0000-24.0000 sec  2.40 GBytes  20.6 Gbits/sec
[  1] 24.0000-25.0000 sec  2.41 GBytes  20.7 Gbits/sec
[  1] 25.0000-26.0000 sec  2.43 GBytes  20.9 Gbits/sec
[  1] 26.0000-27.0000 sec  2.41 GBytes  20.7 Gbits/sec
[  1] 27.0000-28.0000 sec  2.40 GBytes  20.6 Gbits/sec
[  1] 28.0000-29.0000 sec  2.41 GBytes  20.7 Gbits/sec
[  1] 29.0000-30.0000 sec  2.38 GBytes  20.4 Gbits/sec
[  1] 30.0000-31.0000 sec  2.38 GBytes  20.5 Gbits/sec
[  1] 31.0000-32.0000 sec  2.35 GBytes  20.2 Gbits/sec
[  1] 32.0000-33.0000 sec  2.39 GBytes  20.5 Gbits/sec
[  1] 33.0000-34.0000 sec  2.39 GBytes  20.5 Gbits/sec
[  1] 34.0000-35.0000 sec  2.41 GBytes  20.7 Gbits/sec
[  1] 35.0000-36.0000 sec  2.33 GBytes  20.0 Gbits/sec
[  1] 36.0000-37.0000 sec  2.41 GBytes  20.7 Gbits/sec
[  1] 37.0000-38.0000 sec  2.40 GBytes  20.6 Gbits/sec
[  1] 38.0000-39.0000 sec  2.41 GBytes  20.7 Gbits/sec
[  1] 39.0000-40.0000 sec  2.39 GBytes  20.5 Gbits/sec
[  1] 40.0000-41.0000 sec  2.45 GBytes  21.0 Gbits/sec
[  1] 41.0000-42.0000 sec  2.40 GBytes  20.6 Gbits/sec
[  1] 42.0000-43.0000 sec  2.40 GBytes  20.6 Gbits/sec
[  1] 43.0000-44.0000 sec  2.40 GBytes  20.7 Gbits/sec
[  1] 44.0000-45.0000 sec  2.41 GBytes  20.7 Gbits/sec
[  1] 45.0000-46.0000 sec  2.40 GBytes  20.6 Gbits/sec
[  1] 46.0000-47.0000 sec  2.38 GBytes  20.5 Gbits/sec
[  1] 47.0000-48.0000 sec  2.45 GBytes  21.0 Gbits/sec
[  1] 48.0000-49.0000 sec  2.42 GBytes  20.8 Gbits/sec
[  1] 49.0000-49.6028 sec  1.49 GBytes  21.2 Gbits/sec
[  1] 0.0000-49.6028 sec   119 GBytes  20.7 Gbits/sec
root@virtiovm:~# 
[Regression]

Is this issue a regression: Y

First bad commit:

commit 3a6ee8dafb21d7a55af59b573195a9dc18732476
Author: Maxime Coquelin <maxime.coquelin@redhat.com>
Date:   Wed Jun 8 14:49:43 2022 +0200

    net/vhost: enable compliant offloading mode

    This patch enables the compliant offloading flags mode by
    default, which prevents the Rx path to set Tx offload flags,
    which is illegal. A new legacy-ol-flags devarg is introduced
    to enable the legacy behaviour.

    Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
    Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
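
Per the commit message, the new legacy-ol-flags devarg restores the
previous behaviour; as an untested workaround sketch (devarg name taken
from the commit message above), the vhost ports could be created with it
enabled:

--vdev 'net_vhost0,iface=vhost-net0,queues=1,tso=1,legacy-ol-flags=1,dmas=[txq0;rxq0]'
--vdev 'net_vhost1,iface=vhost-net1,queues=1,tso=1,legacy-ol-flags=1,dmas=[txq0;rxq0]'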

-- 
You are receiving this mail because:
You are the assignee for the bug.


* [Bug 1043] [dpdk-22.07]vm2vm_virtio_net_perf_cbdma/vm2vm_split_ring_iperf_with_tso_and_cbdma_enable: iperf test no data between 2 VMs
From: bugzilla @ 2022-08-05  7:05 UTC
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=1043

lingwei (weix.ling@intel.com) changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED                 |RESOLVED
         Resolution|---                         |FIXED

--- Comment #2 from lingwei (weix.ling@intel.com) ---
Verified on DPDK 22.07-rc2 with the fix applied locally: test PASSED.

OS: Ubuntu 22.04 LTS/Linux 5.15.45-051545-generic

-- 
You are receiving this mail because:
You are the assignee for the bug.

