DPDK patches and discussions
* [Bug 1261] [dpdk 23.07] the dpdk-testpmd (based on VF)'s throughput is 0
@ 2023-07-07  7:41 bugzilla
  2023-07-07  8:42 ` bugzilla
From: bugzilla @ 2023-07-07  7:41 UTC (permalink / raw)
  To: dev


https://bugs.dpdk.org/show_bug.cgi?id=1261

            Bug ID: 1261
           Summary: [dpdk 23.07] the dpdk-testpmd (based on VF)'s
                    throughput is 0
           Product: DPDK
           Version: 23.07
          Hardware: x86
                OS: Linux
            Status: UNCONFIRMED
          Severity: major
          Priority: Normal
         Component: testpmd
          Assignee: dev@dpdk.org
          Reporter: yanghliu@redhat.com
  Target Milestone: ---

Description of problem:
Start dpdk-testpmd on top of the VFs and run the MoonGen throughput test; the
dpdk-testpmd throughput is 0.

Version-Release number of selected component (if applicable):
Host: Ethernet Controller 10-Gigabit X540-AT2, kernel 5.14


How reproducible:
100%

Steps to Reproduce:
1. Set up the host kernel options
# grubby --args="iommu=pt intel_iommu=on default_hugepagesz=1G" --update-kernel=`grubby --default-kernel`
# reboot
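
After the reboot, confirm the options actually reached the running kernel and
that the IOMMU came up:

# cat /proc/cmdline
# dmesg | grep -i -e DMAR -e IOMMU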

2. Set up the hugepage counts
# echo 20 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
# echo 20 > /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
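
The allocation can be checked per NUMA node before binding any devices:

# cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages
# grep HugePages_Total /proc/meminfo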

3. Create the VFs and set the VF MAC addresses

# /usr/sbin/ifconfig ens3f0  up
# echo 2 > /sys/bus/pci/devices/0000\:5e\:00.0/sriov_numvfs
# readlink /sys/bus/pci/devices/0000:5e:00.0/virtfn*
# ip link set enp94s0f0 vf 0 mac 88:66:da:5f:dd:02
# dpdk-devbind.py --bind=vfio-pci 0000:5e:10.0
# /usr/sbin/ifconfig ens3f1 up
# echo 2 > /sys/bus/pci/devices/0000\:5e\:00.1/sriov_numvfs
# readlink /sys/bus/pci/devices/0000:5e:00.1/virtfn*
# ip link set enp94s0f1 vf 0 mac 88:66:da:5f:dd:03
# dpdk-devbind.py --bind=vfio-pci 0000:5e:10.1
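
At this point both VFs (0000:5e:10.0 and 0000:5e:10.1) should appear in the
DPDK-compatible driver section of the binding status, with the PFs still on
the kernel driver, and the VF MACs should show on the PF links:

# dpdk-devbind.py --status
# ip link show enp94s0f0
# ip link show enp94s0f1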


4. Start dpdk-testpmd on the host
# /usr/local/bin/dpdk-testpmd -l 2,4,6 -n 4 -- --nb-cores=2 -i --disable-rss --rxd=512 --txd=512 --rxq=1 --txq=1
EAL: Detected CPU lcores: 64
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_ixgbe_vf (8086:1515) device: 0000:5e:10.0 (socket 0)
EAL: Probe PCI driver: net_ixgbe_vf (8086:1515) device: 0000:5e:10.1 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=163456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 88:66:DA:5F:DD:02
Configuring Port 1 (socket 0)
Port 1: 88:66:DA:5F:DD:03
Checking link statuses...
Done
testpmd> set fwd mac retry
Set mac packet forwarding mode with retry
testpmd> vlan set strip on 0
testpmd> vlan set strip on 1
testpmd> start
mac packet forwarding with retry - ports=2 - cores=2 - streams=2 - NUMA support enabled, MP allocation mode: native
TX retry num: 64, delay between TX retries: 1us
Logical Core 4 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
Logical Core 6 (socket 0) forwards packets on 1 streams:
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  mac packet forwarding with retry packets/burst=32
  nb forwarding cores=2 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x1 Tx offloads=0x0
    RX queue: 0
      RX desc=512 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x1
    TX queue: 0
      TX desc=512 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=32
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x1 Tx offloads=0x0
    RX queue: 0
      RX desc=512 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x1
    TX queue: 0
      TX desc=512 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=32
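
Link state and the VF driver can also be double-checked from the testpmd
prompt before injecting traffic:

testpmd> show port info all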



5. Use MoonGen to run the DPDK VF throughput test

# ./build/MoonGen throughput.lua 88:66:da:5f:dd:02 88:66:da:5f:dd:03 


6. Check the throughput on all dpdk-testpmd ports

testpmd> show port stats all 

  ######################## NIC statistics for port 0  ########################
  RX-packets: 511        RX-missed: 0          RX-bytes:  30660
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################

  ######################## NIC statistics for port 1  ########################
  RX-packets: 511        RX-missed: 0          RX-bytes:  31208
  RX-errors: 0
  RX-nombuf:  0         
  TX-packets: 0          TX-errors: 0          TX-bytes:  0

  Throughput (since last show)
  Rx-pps:            0          Rx-bps:            0
  Tx-pps:            0          Tx-bps:            0
  ############################################################################
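
Each port shows exactly 511 packets received (one less than the 512-entry RX
ring) and nothing transmitted, which is consistent with the RX ring filling
once and never being replenished. Extended counters may help narrow down
where the packets stall:

testpmd> show port xstats all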

# ./build/MoonGen throughput.lua 88:66:da:5f:dd:02 88:66:da:5f:dd:03 
...
[Device: id=1] Received 88 packets with 15328 bytes payload (including CRC).
[Device: id=1] Received 0.000000 (StdDev 0.000000) Mpps, 0.000205 (StdDev 0.000728) MBit/s, 0.000228 (StdDev 0.000799) MBit/s wire rate on average.
loop count: 136345435  frames dropped: 8589762317 (100.0000%)
[Device: id=0] Received 74 packets with 12908 bytes payload (including CRC).
[Device: id=0] Received 0.000000 (StdDev 0.000000) Mpps, 0.000172 (StdDev 0.000661) MBit/s, 0.000192 (StdDev 0.000723) MBit/s wire rate on average.
loop count: 132644066  frames dropped: 8356576084 (100.0000%)
Finished final validation of the maximum frame rate 0.00 (millions per second) with a frame size of 64

Final validation failed

7. Repeat the above test with dpdk-21.11-2; dpdk-testpmd forwards packets
successfully.
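
Since dpdk-21.11-2 forwards correctly and 23.07 does not, a bisect between
the two releases would narrow the regression down to a single commit. A rough
sketch (repository URL and tag assume the upstream dpdk tree; rebuild and
rerun the MoonGen test at each step, marking every revision good or bad):

# git clone https://git.dpdk.org/dpdk && cd dpdk
# git bisect start main v21.11
# meson setup build && ninja -C build
# ./build/app/dpdk-testpmd -l 2,4,6 -n 4 -- --nb-cores=2 -i --disable-rss --rxd=512 --txd=512 --rxq=1 --txq=1
# git bisect good    (or: git bisect bad, depending on the result)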




* [Bug 1261] [dpdk 23.07] the dpdk-testpmd (based on VF)'s throughput is 0
  2023-07-07  7:41 [Bug 1261] [dpdk 23.07] the dpdk-testpmd (based on VF)'s throughput is 0 bugzilla
@ 2023-07-07  8:42 ` bugzilla
From: bugzilla @ 2023-07-07  8:42 UTC (permalink / raw)
  To: dev


https://bugs.dpdk.org/show_bug.cgi?id=1261

David Marchand (david.marchand@redhat.com) changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED                 |RESOLVED
         Resolution|---                         |DUPLICATE
                 CC|                            |david.marchand@redhat.com

--- Comment #1 from David Marchand (david.marchand@redhat.com) ---
Thank you for the report.

I logged in to this system and reproduced the issue on the main branch.
The fix from bz #1259 restored packet processing.

*** This bug has been marked as a duplicate of bug 1259 ***



