DPDK patches and discussions
* [dpdk-dev] [Bug 772] [dpdk-21.08] vswitch_sample_cbdma/vm2vm_fwd_test_with_two_cbdma_channels: forward 8k packet failed when relaunch dpdk-vhost
From: bugzilla @ 2021-08-04  7:01 UTC
  To: dev

https://bugs.dpdk.org/show_bug.cgi?id=772

            Bug ID: 772
           Summary: [dpdk-21.08]
                    vswitch_sample_cbdma/vm2vm_fwd_test_with_two_cbdma_channels:
                    forward 8k packet failed when relaunch dpdk-vhost
           Product: DPDK
           Version: 21.08
          Hardware: All
                OS: All
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: vhost/virtio
          Assignee: dev@dpdk.org
          Reporter: weix.ling@intel.com
  Target Milestone: ---

Environment
DPDK version: 21.08-rc2 (commit 02e077f35dbc9821dfcb32714ad1096a3ee58d08)
OS: Ubuntu 20.04.2 LTS/Linux 5.11.16-051116-generic
Compiler: gcc version 9.3.0 (Ubuntu 9.3.0-17ubuntu1~20.04)
Hardware platform: Intel(R) Xeon(R) Platinum 8280M CPU @ 2.70GHz
NIC hardware: Controller XL710 for 40GbE QSFP+ [8086:1583] FVL-40g
NIC firmware & driver: 
driver: i40e
version: 5.11.16-051116-generic
firmware-version: 8.30 0x8000a4ae 1.2926.0
Test Setup
Steps to reproduce

1. Bind one 40G NIC port and two CBDMA channels to igb_uio:
dpdk-devbind.py --force --bind=igb_uio 0000:af:00.0 0000:80:04.0 0000:80:04.1
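Before launching dpdk-vhost, the binding can be double-checked (a quick
verification step, not part of the original report):

dpdk-devbind.py --status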

2. Start the dpdk-vhost sample application:
x86_64-native-linuxapp-gcc/examples/dpdk-vhost -c 0x30000000 -n 4 \
  -a 0000:af:00.0 -a 0000:80:04.0 -a 0000:80:04.1 -- \
  -p 0x1 --mergeable 1 --vm2vm 1 --dma-type ioat --stats 1 \
  --socket-file ./vhost-net0 --socket-file ./vhost-net1 \
  --dmas [txd0@0000:80:04.0,txd1@0000:80:04.1] --client
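For reference, the vhost sample options used above mean roughly the following
(editor's annotation based on the sample application's documentation, not part
of the original report):

# -p 0x1            portmask: use the single bound physical port
# --mergeable 1     enable mergeable RX buffers
# --vm2vm 1         software-forward traffic between the two vhost devices
# --dma-type ioat   offload enqueue copies to the CBDMA (IOAT) engines
# --dmas [...]      assign one CBDMA channel to the TX path of each vhost device
# --stats 1         print statistics every second
# --client          vhost connects to the sockets as a client, so it can
#                   reconnect to the virtio-user servers after a relaunch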

3. Start virtio-user0:
x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 30,31 -n 4 \
  --file-prefix=testpmd0 --no-pci \
  --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=1,mrg_rxbuf=1,in_order=0,vectorized=1,packed_vq=1,server=1 \
  -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1

4. Start virtio-user1:
x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 32,33 -n 4 \
  --file-prefix=testpmd1 --no-pci \
  --vdev=net_virtio_user0,mac=00:11:22:33:44:11,path=./vhost-net1,queues=1,mrg_rxbuf=1,in_order=0,vectorized=1,server=1 \
  -- -i --rxq=1 --txq=1 --txd=1024 --rxd=1024 --nb-cores=1
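Note (editor's reading of the vdev arguments, not part of the original
report): virtio-user0 is created with packed_vq=1 and therefore uses a packed
virtqueue, while virtio-user1 omits it and uses the default split ring. Both
ports run with server=1, which pairs with dpdk-vhost's --client mode and lets
the sockets survive a vhost relaunch.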

5. Set fwd mode and start tx_first on virtio-user0:
set fwd mac
start tx_first
stop
set eth-peer 0 00:11:22:33:44:11
start
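In this sequence (editor's note on the testpmd commands, not part of the
original report), start tx_first injects an initial burst so the mac
forwarding loop has traffic to bounce, and set eth-peer 0 00:11:22:33:44:11
makes port 0 rewrite the destination MAC of forwarded packets to
virtio-user1's address, so dpdk-vhost's vm2vm path delivers them to the other
port.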

6. Set fwd mode and eth-peer on virtio-user1, then send 8k packets (see the note after the commands):
set fwd mac
set eth-peer 0 00:11:22:33:44:10
stop
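The commands above configure forwarding but do not themselves transmit.
Presumably the original step also set an 8k packet size and started
forwarding, which would match the 8000-byte packets in the output below; a
hedged sketch of what that looks like in testpmd (an assumption, not copied
from the report):

set txpkts 2000,2000,2000,2000
start tx_first
stop

The report then jumps to step 13: steps 7-12, which per the bug title relaunch
dpdk-vhost and re-send the 8k packets, are not included in the report.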

Actual Result

1. After executing step 6, the stats on virtio-user1 are as follows:

testpmd> show port stats all
  ######################## NIC statistics for port 0  ########################
  RX-packets: 62880      RX-missed: 0          RX-bytes:  503040000
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 62912      TX-errors: 0          TX-bytes:  503296000

  Throughput (since last show)
  Rx-pps:       113595          Rx-bps:   7270099464
  Tx-pps:       113595          Tx-bps:   7270099464
  ############################################################################

2. After executing step 13 (after dpdk-vhost has been relaunched; see the note under step 6), the stats on virtio-user1 are as follows:

testpmd> show port stats all
  ######################## NIC statistics for port 0  ########################
  RX-packets: 32         RX-missed: 0          RX-bytes:  256000
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 32         TX-errors: 0          TX-bytes:  256000

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
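The byte counters are consistent with 8000-byte packets (editor's arithmetic,
not part of the original report):

62880 packets * 8000 bytes = 503040000 bytes   (step 6: traffic flowing)
   32 packets * 8000 bytes =    256000 bytes   (step 13: forwarding stalls after the relaunch)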


Expected Result

1. After executing step 6, the expected stats on virtio-user1 are as follows:

testpmd> show port stats all
  ######################## NIC statistics for port 0  ########################
  RX-packets: 62880      RX-missed: 0          RX-bytes:  503040000
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 62912      TX-errors: 0          TX-bytes:  503296000

  Throughput (since last show)
  Rx-pps:       113595          Rx-bps:   7270099464
  Tx-pps:       113595          Tx-bps:   7270099464
  ############################################################################


2. After executing step 13, the expected stats on virtio-user1 are as follows:

testpmd> show port stats all
  ######################## NIC statistics for port 0  ########################
  RX-packets: 62880      RX-missed: 0          RX-bytes:  503040000
  RX-errors: 0
  RX-nombuf:  0
  TX-packets: 62912      TX-errors: 0          TX-bytes:  503296000

  Throughput (since last show)
  Rx-pps:       113595          Rx-bps:   7270099464
  Tx-pps:       113595          Tx-bps:   7270099464
  ############################################################################

-- 
You are receiving this mail because:
You are the assignee for the bug.
