From: Dikshant Chitkara <dchitkara@Airspan.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: "Varghese, Vipin" <vipin.varghese@intel.com>,
	"users@dpdk.org" <users@dpdk.org>, "dev@dpdk.org" <dev@dpdk.org>,
	Amir Ilan <ailan@Airspan.com>, Veeresh Patil <vpatil@Airspan.com>
Subject: Re: [dpdk-dev] DPDK PDUMP Issue
Date: Tue, 28 Jul 2020 17:08:58 +0000
Message-ID: <81129b4e5e284c4bb8f062b5e7df64c8@Airspan.com>
In-Reply-To: <20200728100350.3a48d364@hermes.lan>

Hi Stephen,

If that were the case, then how did pdump work with testpmd?

See the testpmd and pdump logs below:

Testpmd:

[root@flexran3 x86_64-native-linux-icc]# ./app/testpmd -c 0xf0 -n 4 -- -i --port-topology=chained
EAL: Detected 80 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Probing VFIO support...
EAL: PCI device 0000:41:00.0 on NUMA socket 0
EAL:   probe driver: 8086:37d2 net_i40e
EAL: PCI device 0000:41:00.1 on NUMA socket 0
EAL:   probe driver: 8086:37d2 net_i40e
EAL: PCI device 0000:86:00.0 on NUMA socket 1
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:86:00.1 on NUMA socket 1
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:86:00.2 on NUMA socket 1
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:86:00.3 on NUMA socket 1
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:88:00.0 on NUMA socket 1
EAL:   probe driver: 8086:158b net_i40e
EAL: PCI device 0000:88:00.1 on NUMA socket 1
EAL:   probe driver: 8086:158b net_i40e
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=171456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
Port 0: 3C:FD:FE:CD:34:A4
Checking link statuses...
Done
testpmd>
Port 0: link state change event

testpmd>
testpmd>
testpmd> start tx_first
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 5 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 1) -> TX P=0/Q=0 (socket 1) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=1
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=32
      RX threshold registers: pthresh=8 hthresh=8  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=256 - TX free threshold=32
      TX threshold registers: pthresh=32 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=32
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
Invalid RX queue_id=0
    RX queue: 0
      RX desc=0 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=0 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 32             TX-dropped: 0             TX-total: 32
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 32             TX-dropped: 0             TX-total: 32
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.


PDUMP:

[root@flexran3 x86_64-native-linux-icc]# ./app/dpdk-pdump -d librte_pmd_i40e.so -d librte_pmd_pcap.so -- --pdump 'port=0,queue=*,tx-dev=/home/dchitkara/capture.pcap'
EAL: Detected 80 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket_120725_2d88411f7f7d4
EAL: Probing VFIO support...
EAL: PCI device 0000:41:00.0 on NUMA socket 0
EAL:   probe driver: 8086:37d2 net_i40e
EAL: PCI device 0000:41:00.1 on NUMA socket 0
EAL:   probe driver: 8086:37d2 net_i40e
EAL: PCI device 0000:86:00.0 on NUMA socket 1
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:86:00.1 on NUMA socket 1
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:86:00.2 on NUMA socket 1
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:86:00.3 on NUMA socket 1
EAL:   probe driver: 8086:1572 net_i40e
EAL: PCI device 0000:88:00.0 on NUMA socket 1
EAL:   probe driver: 8086:158b net_i40e
EAL: PCI device 0000:88:00.1 on NUMA socket 1
EAL:   probe driver: 8086:158b net_i40e
Port 1 MAC: 02 70 63 61 70 00
 core (0), capture for (1) tuples
 - port 0 device ((null)) queue 65535
^C

Signal 2 received, preparing to exit...
##### PDUMP DEBUG STATS #####
 -packets dequeued:                     32
 -packets transmitted to vdev:          32
 -packets freed:                        0
[root@flexran3 x86_64-native-linux-icc]
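
For what it's worth, the per-node hugepage allocation actually in use can be checked directly from sysfs; a quick way (assuming 2 MB hugepages; 1 GB pages show up under the hugepages-1048576kB directories instead) is:

  # hugepages reserved on each NUMA node
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
  # per-node HugePages_Total / HugePages_Free counters
  grep HugePages /sys/devices/system/node/node*/meminfo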



-----Original Message-----
From: Stephen Hemminger <stephen@networkplumber.org> 
Sent: 28 July 2020 22:34
To: Dikshant Chitkara <dchitkara@Airspan.com>
Cc: Varghese, Vipin <vipin.varghese@intel.com>; users@dpdk.org; dev@dpdk.org; Amir Ilan <ailan@Airspan.com>; Veeresh Patil <vpatil@Airspan.com>
Subject: Re: [dpdk-dev] DPDK PDUMP Issue

On Tue, 28 Jul 2020 16:41:38 +0000 Dikshant Chitkara <dchitkara@Airspan.com> wrote:

> Hi Stephen,
> Our system has 2 sockets as seen from below :
> 
> [root@flexran3 dchitkara]# lscpu
> Architecture:          x86_64
> CPU op-mode(s):        32-bit, 64-bit
> Byte Order:            Little Endian
> CPU(s):                80
> On-line CPU(s) list:   0-79
> Thread(s) per core:    2
> Core(s) per socket:    20
> Socket(s):             2
> NUMA node(s):          2

Did you configure hugepages on both NUMA nodes?
You might be able to get away with configuring only the node that has the device attached, but probably both need dedicated hugepages.
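
For reference, dedicated hugepages can be reserved on each NUMA node at run time through sysfs, roughly as below (a sketch assuming 2 MB hugepages and an arbitrary count of 1024 pages per node; use the hugepages-1048576kB directories for 1 GB pages and size the count to what the applications actually need):

  # reserve 1024 x 2 MB hugepages on NUMA node 0 and node 1
  echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
  echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages

The reservation can also be made at boot with the hugepages= and default_hugepagesz= kernel parameters, although a boot-time reservation is typically split evenly across the nodes rather than set per node.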

