From: Ariba Ehtesham <ariba@dreambigsemi.com>
To: Stephen Hemminger <stephen@networkplumber.org>
Cc: "reshma.pattan@intel.com" <reshma.pattan@intel.com>,
	"users@dpdk.org" <users@dpdk.org>,
	Haider Ali <haider@dreambigsemi.com>
Subject: Re: Unable to run dpdk-dumpcap application
Date: Mon, 13 Jun 2022 06:43:29 +0000	[thread overview]
Message-ID: <SN4PR22MB320512FC9FE117DE5AB20B48CDAB9@SN4PR22MB3205.namprd22.prod.outlook.com> (raw)
In-Reply-To: <20220610075543.739801d7@hermes.local>


Hi,
I have run dpdk-proc-info with the Mellanox ConnectX-5 and the Intel ixgbe NIC; my steps are:

First:
sudo ./build/app/dpdk-testpmd -l 0-3 -n 4 -a 0000:42:00.0 --proc-type=primary -- -i --rxq=8 --txq=8

Second:
sudo ./build/app/dpdk-proc-info -a 42:00.0 -- -m -p 0x1

Result:
(For Mellanox)
sudo ./build/app/dpdk-proc-info -a 0000:42:00.0 -- -m -p 0x1
----------- MEMORY_SEGMENTS -----------
Segment 0-0: IOVA:0x140000000, len:1073741824, virt:0x140000000, socket_id:0, hugepage_sz:1073741824, nchannel:4, nrank:0 fd:20
Segment 2-0: IOVA:0x11c0000000, len:1073741824, virt:0x11c0000000, socket_id:1, hugepage_sz:1073741824, nchannel:4, nrank:0 fd:21
--------- END_MEMORY_SEGMENTS ---------
------------ MEMORY_ZONES -------------
Zone 0: name:<mlx5_pmd_shared_data>, len:0x40, virt:0x17ffe9cc0, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 1: name:<rte_eth_dev_data>, len:0x36840, virt:0x17ff7f480, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 2: name:<rte_pdump_stats>, len:0x400040, virt:0x17d5b6100, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 3: name:<MP_mb_pool_0>, len:0x182100, virt:0x17cf30e40, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 4: name:<RG_MP_mb_pool_0>, len:0x200180, virt:0x17cd30b80, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 5: name:<MP_mb_pool_0_0>, len:0x18333940, virt:0x1649fd1c0, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 6: name:<MP_mb_pool_1>, len:0x182100, virt:0x11ffdebe80, socket_id:1, flags:0
physical segments used:
  addr: 0x11c0000000 iova: 0x11c0000000 len: 0x40000000 pagesz: 0x40000000
Zone 7: name:<RG_MP_mb_pool_1>, len:0x200180, virt:0x11ffbebc80, socket_id:1, flags:0
physical segments used:
  addr: 0x11c0000000 iova: 0x11c0000000 len: 0x40000000 pagesz: 0x40000000
Zone 8: name:<MP_mb_pool_1_0>, len:0x18333940, virt:0x11e78b82c0, socket_id:1, flags:0
physical segments used:
  addr: 0x11c0000000 iova: 0x11c0000000 len: 0x40000000 pagesz: 0x40000000
Zone 9: name:<rte_mbuf_dyn>, len:0xc0, virt:0x1649cb1c0, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 10: name:<RTE_METRICS>, len:0x15040, virt:0x1649af1c0, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
---------- END_MEMORY_ZONES -----------
------------- TAIL_QUEUES -------------
Tailq 0: qname:<RTE_FIB>, tqh_first:(nil), tqh_last:0x100004488
Tailq 1: qname:<RTE_FIB6>, tqh_first:(nil), tqh_last:0x1000044b8
Tailq 2: qname:<RTE_IPSEC_SAD>, tqh_first:(nil), tqh_last:0x1000044e8
Tailq 3: qname:<RTE_STACK>, tqh_first:(nil), tqh_last:0x100004518
Tailq 4: qname:<RTE_REORDER>, tqh_first:(nil), tqh_last:0x100004548
Tailq 5: qname:<RTE_RIB>, tqh_first:(nil), tqh_last:0x100004578
Tailq 6: qname:<RTE_RIB6>, tqh_first:(nil), tqh_last:0x1000045a8
Tailq 7: qname:<RTE_MEMBER>, tqh_first:(nil), tqh_last:0x1000045d8
Tailq 8: qname:<RTE_LPM>, tqh_first:(nil), tqh_last:0x100004608
Tailq 9: qname:<RTE_LPM6>, tqh_first:(nil), tqh_last:0x100004638
Tailq 10: qname:<RTE_KNI>, tqh_first:(nil), tqh_last:0x100004668
Tailq 11: qname:<RTE_EFD>, tqh_first:(nil), tqh_last:0x100004698
Tailq 12: qname:<RTE_DIST_BURST>, tqh_first:(nil), tqh_last:0x1000046c8
Tailq 13: qname:<RTE_DISTRIBUTOR>, tqh_first:(nil), tqh_last:0x1000046f8
Tailq 14: qname:<RTE_ACL>, tqh_first:(nil), tqh_last:0x100004728
Tailq 15: qname:<RTE_HASH>, tqh_first:(nil), tqh_last:0x100004758
Tailq 16: qname:<RTE_FBK_HASH>, tqh_first:(nil), tqh_last:0x100004788
Tailq 17: qname:<RTE_THASH>, tqh_first:(nil), tqh_last:0x1000047b8
Tailq 18: qname:<RTE_MBUF_DYNFIELD>, tqh_first:(nil), tqh_last:0x1000047e8
Tailq 19: qname:<RTE_MBUF_DYNFLAG>, tqh_first:(nil), tqh_last:0x100004818
Tailq 20: qname:<RTE_MEMPOOL>, tqh_first:0x17d0b2fc0, tqh_last:0x1649fd040
Tailq 21: qname:<RTE_MEMPOOL_CALLBACK>, tqh_first:0x1649c5c40, tqh_last:0x1649c5c40
Tailq 22: qname:<RTE_RING>, tqh_first:0x17cf30d80, tqh_last:0x1649fcf80
Tailq 23: qname:<UIO_RESOURCE_LIST>, tqh_first:(nil), tqh_last:0x1000048d8
Tailq 24: qname:<VFIO_RESOURCE_LIST>, tqh_first:(nil), tqh_last:0x100004908
Tailq 25: qname:<VMBUS_RESOURCE_LIST>, tqh_first:(nil), tqh_last:0x100004938
Tailq 26: qname:<>, tqh_first:(nil), tqh_last:(nil)
Tailq 27: qname:<>, tqh_first:(nil), tqh_last:(nil)
Tailq 28: qname:<>, tqh_first:(nil), tqh_last:(nil)
Tailq 29: qname:<>, tqh_first:(nil), tqh_last:(nil)
Tailq 30: qname:<>, tqh_first:(nil), tqh_last:(nil)
Tailq 31: qname:<>, tqh_first:(nil), tqh_last:(nil)
---------- END_TAIL_QUEUES ------------




(For Intel ixgbe)
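The result below came from repeating the same two steps with the Intel port (0000:04:00.1) substituted. Roughly these commands, though the exact options I used on this port may have differed slightly:

sudo ./build/app/dpdk-testpmd -l 0-3 -n 4 -a 0000:04:00.1 --proc-type=primary -- -i --rxq=8 --txq=8
sudo ./build/app/dpdk-proc-info -a 0000:04:00.1 -- -m -p 0x1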


---------- MEMORY_SEGMENTS -----------
Segment 0-0: IOVA:0x140000000, len:1073741824, virt:0x140000000, socket_id:0, hugepage_sz:1073741824, nchannel:4, nrank:0 fd:20
Segment 2-0: IOVA:0x11c0000000, len:1073741824, virt:0x11c0000000, socket_id:1, hugepage_sz:1073741824, nchannel:4, nrank:0 fd:21
--------- END_MEMORY_SEGMENTS ---------
------------ MEMORY_ZONES -------------
Zone 0: name:<rte_eth_dev_data>, len:0x36840, virt:0x17ffb2500, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 1: name:<rte_mbuf_dyn>, len:0xc0, virt:0x17ffaadc0, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 2: name:<RG_HT_fdir_0000:04:00.1>, len:0x40180, virt:0x17ff64540, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 3: name:<RG_HT_l2_tn_0000:04:00.1>, len:0x580, virt:0x17fce39c0, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 4: name:<rte_pdump_stats>, len:0x400040, virt:0x17f8d1c40, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 5: name:<MP_mb_pool_0>, len:0x182100, virt:0x17f24c980, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 6: name:<RG_MP_mb_pool_0>, len:0x200180, virt:0x17f04c6c0, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 7: name:<MP_mb_pool_0_0>, len:0x18333940, virt:0x166d18d00, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 8: name:<MP_mb_pool_1>, len:0x182100, virt:0x11ffe7df00, socket_id:1, flags:0
physical segments used:
  addr: 0x11c0000000 iova: 0x11c0000000 len: 0x40000000 pagesz: 0x40000000
Zone 9: name:<RG_MP_mb_pool_1>, len:0x200180, virt:0x11ffc7dd00, socket_id:1, flags:0
physical segments used:
  addr: 0x11c0000000 iova: 0x11c0000000 len: 0x40000000 pagesz: 0x40000000
Zone 10: name:<MP_mb_pool_1_0>, len:0x18333940, virt:0x11e794a340, socket_id:1, flags:0
physical segments used:
  addr: 0x11c0000000 iova: 0x11c0000000 len: 0x40000000 pagesz: 0x40000000
Zone 11: name:<eth_p0_q0_tx_ring>, len:0x10000, virt:0x166cd7a00, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 12: name:<eth_p0_q1_tx_ring>, len:0x10000, virt:0x166cc6780, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 13: name:<eth_p0_q2_tx_ring>, len:0x10000, virt:0x166cb5500, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 14: name:<eth_p0_q3_tx_ring>, len:0x10000, virt:0x166ca4280, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 15: name:<eth_p0_q4_tx_ring>, len:0x10000, virt:0x166c93000, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 16: name:<eth_p0_q5_tx_ring>, len:0x10000, virt:0x166c81d80, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 17: name:<eth_p0_q6_tx_ring>, len:0x10000, virt:0x166c70b00, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 18: name:<eth_p0_q7_tx_ring>, len:0x10000, virt:0x166c5f880, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 19: name:<eth_p0_q0_rx_ring>, len:0x10200, virt:0x166c4e180, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 20: name:<eth_p0_q1_rx_ring>, len:0x10200, virt:0x166c3c800, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 21: name:<eth_p0_q2_rx_ring>, len:0x10200, virt:0x166c2ae80, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 22: name:<eth_p0_q3_rx_ring>, len:0x10200, virt:0x166c19500, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 23: name:<eth_p0_q4_rx_ring>, len:0x10200, virt:0x166c07b80, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 24: name:<eth_p0_q5_rx_ring>, len:0x10200, virt:0x166bf6200, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 25: name:<eth_p0_q6_rx_ring>, len:0x10200, virt:0x166be4880, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 26: name:<eth_p0_q7_rx_ring>, len:0x10200, virt:0x166bd2f00, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
Zone 27: name:<RTE_METRICS>, len:0x15040, virt:0x166bbcb40, socket_id:0, flags:0
physical segments used:
  addr: 0x140000000 iova: 0x140000000 len: 0x40000000 pagesz: 0x40000000
---------- END_MEMORY_ZONES -----------
------------- TAIL_QUEUES -------------
Tailq 0: qname:<RTE_FIB>, tqh_first:(nil), tqh_last:0x100004488
Tailq 1: qname:<RTE_FIB6>, tqh_first:(nil), tqh_last:0x1000044b8
Tailq 2: qname:<RTE_IPSEC_SAD>, tqh_first:(nil), tqh_last:0x1000044e8
Tailq 3: qname:<RTE_STACK>, tqh_first:(nil), tqh_last:0x100004518
Tailq 4: qname:<RTE_REORDER>, tqh_first:(nil), tqh_last:0x100004548
Tailq 5: qname:<RTE_RIB>, tqh_first:(nil), tqh_last:0x100004578
Tailq 6: qname:<RTE_RIB6>, tqh_first:(nil), tqh_last:0x1000045a8
Tailq 7: qname:<RTE_MEMBER>, tqh_first:(nil), tqh_last:0x1000045d8
Tailq 8: qname:<RTE_LPM>, tqh_first:(nil), tqh_last:0x100004608
Tailq 9: qname:<RTE_LPM6>, tqh_first:(nil), tqh_last:0x100004638
Tailq 10: qname:<RTE_KNI>, tqh_first:(nil), tqh_last:0x100004668
Tailq 11: qname:<RTE_EFD>, tqh_first:(nil), tqh_last:0x100004698
Tailq 12: qname:<RTE_DIST_BURST>, tqh_first:(nil), tqh_last:0x1000046c8
Tailq 13: qname:<RTE_DISTRIBUTOR>, tqh_first:(nil), tqh_last:0x1000046f8
Tailq 14: qname:<RTE_ACL>, tqh_first:(nil), tqh_last:0x100004728
Tailq 15: qname:<RTE_HASH>, tqh_first:0x17ff64480, tqh_last:0x17fce3900
Tailq 16: qname:<RTE_FBK_HASH>, tqh_first:(nil), tqh_last:0x100004788
Tailq 17: qname:<RTE_THASH>, tqh_first:(nil), tqh_last:0x1000047b8
Tailq 18: qname:<RTE_MBUF_DYNFIELD>, tqh_first:0x17ffaad00, tqh_last:0x17ffaad00
Tailq 19: qname:<RTE_MBUF_DYNFLAG>, tqh_first:(nil), tqh_last:0x100004818
Tailq 20: qname:<RTE_MEMPOOL>, tqh_first:0x17f3ceb00, tqh_last:0x166d18b80
Tailq 21: qname:<RTE_MEMPOOL_CALLBACK>, tqh_first:(nil), tqh_last:0x100004878
Tailq 22: qname:<RTE_RING>, tqh_first:0x17ffa4740, tqh_last:0x166d18ac0
Tailq 23: qname:<UIO_RESOURCE_LIST>, tqh_first:(nil), tqh_last:0x1000048d8
Tailq 24: qname:<VFIO_RESOURCE_LIST>, tqh_first:0x17ffe8dc0, tqh_last:0x17ffe8dc0
Tailq 25: qname:<VMBUS_RESOURCE_LIST>, tqh_first:(nil), tqh_last:0x100004938
Tailq 26: qname:<>, tqh_first:(nil), tqh_last:(nil)
Tailq 27: qname:<>, tqh_first:(nil), tqh_last:(nil)
Tailq 28: qname:<>, tqh_first:(nil), tqh_last:(nil)
Tailq 29: qname:<>, tqh_first:(nil), tqh_last:(nil)
Tailq 30: qname:<>, tqh_first:(nil), tqh_last:(nil)
Tailq 31: qname:<>, tqh_first:(nil), tqh_last:(nil)
---------- END_TAIL_QUEUES ------------
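
So dpdk-proc-info does attach to the running primary for both NICs. The step that still fails is the dpdk-dumpcap secondary from my first mail, i.e. a capture along these lines against the same running testpmd (interface argument and packet count taken from the original report; they may need adjusting for the Mellanox ports):

sudo ./build/app/dpdk-dumpcap -i 1 -c 6 -w /tmp/capture.pcapng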


Regards,
Ariba Ehtesham

________________________________
From: Stephen Hemminger <stephen@networkplumber.org>
Sent: Friday, June 10, 2022 7:55 PM
To: Ariba Ehtesham <ariba@dreambigsemi.com>
Cc: reshma.pattan@intel.com <reshma.pattan@intel.com>; users@dpdk.org <users@dpdk.org>; Haider Ali <haider@dreambigsemi.com>
Subject: Re: Unable to run dpdk-dumpcap application

On Fri, 10 Jun 2022 10:52:31 +0000
Ariba Ehtesham <ariba@dreambigsemi.com> wrote:

> Hi All,
> Can anybody guide me? I am unable to run the dpdk-dumpcap application with dpdk-22.03.
>
> I am performing the following steps:
>
> First:
>
> $ sudo ./build/app/dpdk-testpmd -l 0-3 -n 4  -a 0000:04:00.1 --proc-type=primary -- -i --rxq=8 --txq=8
>
> Second:
>
> $  sudo ./build/app/dpdk-dumpcap  -i 1 -c 6 -w /tmp/capture.pcapng
>
>
> I am getting the following error:
> mlx5_net: Cannot attach mlx5 shared data
> mlx5_net: Unable to init PMD global data: No such file or directory
> mlx5_common: Failed to load driver mlx5_eth
> EAL: Requested device 0000:42:00.0 cannot be used
> mlx5_net: Cannot attach mlx5 shared data
> mlx5_net: Unable to init PMD global data: No such file or directory
> mlx5_common: Failed to load driver mlx5_eth
> EAL: Requested device 0000:42:00.1 cannot be used
> mlx5_net: Cannot attach mlx5 shared data
> mlx5_net: Unable to init PMD global data: No such file or directory
> mlx5_common: Failed to load driver mlx5_eth
> EAL: Requested device 0000:43:00.0 cannot be used
> mlx5_net: Cannot attach mlx5 shared data
> mlx5_net: Unable to init PMD global data: No such file or directory
> mlx5_common: Failed to load driver mlx5_eth
> EAL: Requested device 0000:43:00.1 cannot be used
> Capturing on '0000:04:00.1'
> Packets captured: 0 ^C
> Packets received/dropped on interface '0000:04:00.1': 0/0 (0.0)
> EAL: Error: Invalid memory
>
>
> Regards,
> Ariba Ehtesham
>

Looks like a problem with secondary process support in the mlx5 driver.
Does dpdk-proc-info work for you?

