From: Hanumanth Reddy Pothula <hpothula@marvell.com>
To: "Singh, Aman Deep" <aman.deep.singh@intel.com>,
Yuying Zhang <yuying.zhang@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
"andrew.rybchenko@oktetlabs.ru" <andrew.rybchenko@oktetlabs.ru>,
"thomas@monjalon.net" <thomas@monjalon.net>,
Jerin Jacob Kollanukkaran <jerinj@marvell.com>,
Nithin Kumar Dabilpuram <ndabilpuram@marvell.com>
Subject: RE: [EXT] Re: [PATCH v9 1/1] app/testpmd: support multiple mbuf pools per Rx queue
Date: Mon, 24 Oct 2022 03:32:45 +0000 [thread overview]
Message-ID: <PH0PR18MB47505D5CADE1B2595079A5FDCB2E9@PH0PR18MB4750.namprd18.prod.outlook.com> (raw)
In-Reply-To: <83a6bd07-e9e4-85a0-55c6-e39a7b62869e@intel.com>
> -----Original Message-----
> From: Singh, Aman Deep <aman.deep.singh@intel.com>
> Sent: Friday, October 21, 2022 9:28 PM
> To: Hanumanth Reddy Pothula <hpothula@marvell.com>; Yuying Zhang
> <yuying.zhang@intel.com>
> Cc: dev@dpdk.org; andrew.rybchenko@oktetlabs.ru; thomas@monjalon.net;
> Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Nithin Kumar Dabilpuram
> <ndabilpuram@marvell.com>
> Subject: [EXT] Re: [PATCH v9 1/1] app/testpmd: support mulitiple mbuf pools per
> Rx queue
>
> External Email
>
> ----------------------------------------------------------------------
>
>
> On 10/17/2022 2:18 PM, Hanumanth Pothula wrote:
> > Some HW supports choosing a memory pool based on the packet's size. The
> > pool sort capability allows the PMD/NIC to choose a memory pool based on
> > the packet's length.
> >
> > When multiple mempool support is enabled, populate the mempool array
> > accordingly. Also, print the name of the pool on which each packet is received.
> >
> > Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
> > ---
> > app/test-pmd/testpmd.c | 40 ++++++++++++++++++++++++++++------------
> > app/test-pmd/testpmd.h | 3 +++
> > app/test-pmd/util.c | 4 ++--
> > 3 files changed, 33 insertions(+), 14 deletions(-)
> >
> > diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
> > index 5b0f0838dc..1549551640 100644
> > --- a/app/test-pmd/testpmd.c
> > +++ b/app/test-pmd/testpmd.c
> > @@ -2647,10 +2647,16 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
> > struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp)
> > {
> > union rte_eth_rxseg rx_useg[MAX_SEGS_BUFFER_SPLIT] = {};
> > + struct rte_mempool *rx_mempool[MAX_MEMPOOL] = {};
> > unsigned int i, mp_n;
> > int ret;
> >
> > - if (rx_pkt_nb_segs <= 1 ||
> > + /* For multiple mempools per Rx queue support,
> > + * rx_pkt_nb_segs greater than 1 and
> > + * Rx offload flag, RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT won't be set.
> > + * @see rte_eth_rxconf::rx_mempools
>
> I have a basic question about the feature: do we need rx_pkt_nb_segs > 1 for
> the feature to work? My understanding is that if multiple mempools are defined,
> the driver will place packets according to their size, even without splitting them.
> Just for my understanding, thanks :)
>
Thanks, Aman, for the review.
Yes, rx_pkt_nb_segs > 1 is not required for the multi-mempool feature.
rx_pkt_nb_segs is the number of segments; the code should instead use mbuf_data_size_n, the total number of mbuf mempools. I will take care of this and upload a new patch set.
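For illustration only, here is a rough sketch of that direction, not the actual follow-up patch: the multi-mempool path is keyed off mbuf_data_size_n instead of rx_pkt_nb_segs, reusing the names already visible in this diff (mbuf_pool_find(), mbuf_data_size_n, MAX_MEMPOOL, rx_mempool[], mpx):

        /* Sketch only: when buffer split is not requested, fill
         * rte_eth_rxconf::rx_mempools from the mbuf_data_size_n configured
         * pools; rx_pkt_nb_segs is not involved on this path.
         */
        if ((rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0 &&
            mbuf_data_size_n > 1) {
                for (i = 0; i < mbuf_data_size_n && i < MAX_MEMPOOL; i++) {
                        mpx = mbuf_pool_find(socket_id, i);
                        rx_mempool[i] = mpx ? mpx : mp;
                }
                rx_conf->rx_mempools = rx_mempool;
                rx_conf->rx_nmempool = i;
                ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
                                             socket_id, rx_conf, NULL);
        }
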
> > + */
> > + if (rx_pkt_nb_segs <= 1 &&
> > (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) == 0) {
> > rx_conf->rx_seg = NULL;
> > rx_conf->rx_nseg = 0;
> > @@ -2668,20 +2674,30 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
> > */
> > mp_n = (i >= mbuf_data_size_n) ? mbuf_data_size_n - 1 : i;
> > mpx = mbuf_pool_find(socket_id, mp_n);
> > - /* Handle zero as mbuf data buffer size. */
> > - rx_seg->offset = i < rx_pkt_nb_offs ?
> > - rx_pkt_seg_offsets[i] : 0;
> > - rx_seg->mp = mpx ? mpx : mp;
> > - if (rx_pkt_hdr_protos[i] != 0 && rx_pkt_seg_lengths[i] == 0) {
> > - rx_seg->proto_hdr = rx_pkt_hdr_protos[i];
> > +
> > + if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
> > + /* Handle zero as mbuf data buffer size. */
> > + rx_seg->offset = i < rx_pkt_nb_offs ?
> > + rx_pkt_seg_offsets[i] : 0;
> > + rx_seg->mp = mpx ? mpx : mp;
> > + if (rx_pkt_hdr_protos[i] != 0 && rx_pkt_seg_lengths[i] == 0) {
> > + rx_seg->proto_hdr = rx_pkt_hdr_protos[i];
> > + } else {
> > + rx_seg->length = rx_pkt_seg_lengths[i] ?
> > + rx_pkt_seg_lengths[i] :
> > + mbuf_data_size[mp_n];
> > + }
> > } else {
> > - rx_seg->length = rx_pkt_seg_lengths[i] ?
> > - rx_pkt_seg_lengths[i] :
> > - mbuf_data_size[mp_n];
> > + rx_mempool[i] = mpx ? mpx : mp;
> > }
> > }
> > - rx_conf->rx_nseg = rx_pkt_nb_segs;
> > - rx_conf->rx_seg = rx_useg;
> > + if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
> > + rx_conf->rx_nseg = rx_pkt_nb_segs;
> > + rx_conf->rx_seg = rx_useg;
> > + } else {
> > + rx_conf->rx_mempools = rx_mempool;
> > + rx_conf->rx_nmempool = rx_pkt_nb_segs;
> > + }
> > ret = rte_eth_rx_queue_setup(port_id, rx_queue_id, nb_rx_desc,
> > socket_id, rx_conf, NULL);
> > rx_conf->rx_seg = NULL;
> > diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
> > index e65be323b8..14be10dcef 100644
> > --- a/app/test-pmd/testpmd.h
> > +++ b/app/test-pmd/testpmd.h
> > @@ -80,6 +80,9 @@ extern uint8_t cl_quit;
> >
> > #define MIN_TOTAL_NUM_MBUFS 1024
> >
> > +/* Maximum number of pools supported per Rx queue */
> > +#define MAX_MEMPOOL 8
>
> Should we set it to MAX_SEGS_BUFFER_SPLIT to avoid a mismatch?
>
> > +
> > typedef uint8_t lcoreid_t;
> > typedef uint16_t portid_t;
> > typedef uint16_t queueid_t;
> > diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
> > index fd98e8b51d..f9df5f69ef 100644
> > --- a/app/test-pmd/util.c
> > +++ b/app/test-pmd/util.c
> > @@ -150,8 +150,8 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
> > print_ether_addr(" - dst=", &eth_hdr->dst_addr,
> > print_buf, buf_size, &cur_len);
> > MKDUMPSTR(print_buf, buf_size, cur_len,
> > - " - type=0x%04x - length=%u - nb_segs=%d",
> > - eth_type, (unsigned int) mb->pkt_len,
> > + " - pool=%s - type=0x%04x - length=%u -
> nb_segs=%d",
> > + mb->pool->name, eth_type, (unsigned int) mb-
> >pkt_len,
> > (int)mb->nb_segs);
> > ol_flags = mb->ol_flags;
> > if (ol_flags & RTE_MBUF_F_RX_RSS_HASH) {
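
For completeness, a minimal standalone sketch (not part of this patch) of how an application could hand several pools to one Rx queue through rte_eth_rxconf::rx_mempools, the ethdev interface this testpmd change exercises. It assumes a DPDK release that provides rte_eth_rxconf::rx_mempools and rte_eth_dev_info::max_rx_mempools (22.11); the pool names, sizes and counts below are made up for illustration:

        #include <errno.h>
        #include <rte_common.h>
        #include <rte_ethdev.h>
        #include <rte_mbuf.h>

        /* Create two pools with different data-room sizes and attach both to
         * one Rx queue; a PMD supporting the feature picks a pool per packet
         * based on the received length.
         */
        static int
        setup_multi_pool_rxq(uint16_t port_id, uint16_t qid, int socket)
        {
                struct rte_mempool *pools[2];
                struct rte_eth_dev_info info;
                struct rte_eth_rxconf rxconf;
                int ret;

                pools[0] = rte_pktmbuf_pool_create("pool_small", 8192, 256, 0,
                                                   RTE_MBUF_DEFAULT_BUF_SIZE,
                                                   socket);
                pools[1] = rte_pktmbuf_pool_create("pool_large", 4096, 256, 0,
                                                   9000 + RTE_PKTMBUF_HEADROOM,
                                                   socket);
                if (pools[0] == NULL || pools[1] == NULL)
                        return -ENOMEM;

                ret = rte_eth_dev_info_get(port_id, &info);
                if (ret != 0)
                        return ret;
                if (info.max_rx_mempools < RTE_DIM(pools))
                        return -ENOTSUP; /* driver cannot take multiple pools */

                rxconf = info.default_rxconf;
                rxconf.rx_mempools = pools;
                rxconf.rx_nmempool = RTE_DIM(pools);

                /* The single-pool argument is NULL when rx_mempools is used. */
                return rte_eth_rx_queue_setup(port_id, qid, 1024, socket,
                                              &rxconf, NULL);
        }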