DPDK patches and discussions
From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
To: Hanumanth Pothula <hpothula@marvell.com>,
	Thomas Monjalon <thomas@monjalon.net>,
	Ferruh Yigit <ferruh.yigit@xilinx.com>
Cc: dev@dpdk.org, xuan.ding@intel.com, wenxuanx.wu@intel.com,
	xiaoyun.li@intel.com, stephen@networkplumber.org,
	yuanx.wang@intel.com, mdr@ashroe.eu, yuying.zhang@intel.com,
	qi.z.zhang@intel.com, viacheslavo@nvidia.com, jerinj@marvell.com,
	ndabilpuram@marvell.com,
	"Morten Brørup" <mb@smartsharesystems.com>
Subject: Re: [PATCH v4 1/3] ethdev: Add support for mulitiple mbuf pools per Rx queue
Date: Wed, 28 Sep 2022 12:43:29 +0300	[thread overview]
Message-ID: <961d9652-88e8-2a4d-8c92-8a9440834a49@oktetlabs.ru> (raw)
In-Reply-To: <20220915070732.182542-1-hpothula@marvell.com>

"Add support for" -> "add support for" or just "support" if
line is long

On 9/15/22 10:07, Hanumanth Pothula wrote:
> This patch adds support for multiple mempool capability.

"This patch adds" -> "Add"

> Some of the HW has support for choosing memory pools based on the
> packet's size. Thiscapability allows PMD to choose a memory pool

Thiscapability -> The capability

> based on the packet's length.
> 
> This is often useful for saving the memory where the application
> can create a different pool to steer the specific size of the
> packet, thus enabling effective use of memory.
> 
> For example, let's say HW has a capability of three pools,
>   - pool-1 size is 2K
>   - pool-2 size is > 2K and < 4K
>   - pool-3 size is > 4K
> Here,
>          pool-1 can accommodate packets with sizes < 2K
>          pool-2 can accommodate packets with sizes > 2K and < 4K
>          pool-3 can accommodate packets with sizes > 4K
> 
> With multiple mempool capability enabled in SW, an application may
> create three pools of different sizes and send them to PMD. Allowing
> PMD to program HW based on the packet lengths. So that packets with
> less than 2K are received on pool-1, packets with lengths between 2K
> and 4K are received on pool-2 and finally packets greater than 4K
> are received on pool-3.
> 
> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>

Please, advertise the new feature in release notes.

[snip]

> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 1979dc0850..8618d6b01d 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -1634,6 +1634,45 @@ rte_eth_dev_is_removed(uint16_t port_id)
>   	return ret;
>   }
>   
> +static int
> +rte_eth_rx_queue_check_mempool(const struct rte_eth_rx_mempool *rx_mempool,
> +			       uint16_t n_pool, uint32_t *mbp_buf_size,
> +			       const struct rte_eth_dev_info *dev_info)
> +{
> +	uint16_t pool_idx;
> +
> +	if (n_pool > dev_info->max_pools) {
> +		RTE_ETHDEV_LOG(ERR,
> +			       "Invalid capabilities, max pools supported %u\n",

"Invalid capabilities" sounds misleading. Consider something
like:

"Too many Rx mempools %u vs maximum %u\n", n_pool, dev_info->max_pools

> +			       dev_info->max_pools);
> +		return -EINVAL;
> +	}
> +
> +	for (pool_idx = 0; pool_idx < n_pool; pool_idx++) {
> +		struct rte_mempool *mpl = rx_mempool[pool_idx].mp;
> +
> +		if (mpl == NULL) {
> +			RTE_ETHDEV_LOG(ERR, "null mempool pointer\n");

"null Rx mempool pointer\n"

> +			return -EINVAL;
> +		}
> +
> +		*mbp_buf_size = rte_pktmbuf_data_room_size(mpl);
> +		if (*mbp_buf_size < dev_info->min_rx_bufsize +
> +		    RTE_PKTMBUF_HEADROOM) {
> +			RTE_ETHDEV_LOG(ERR,
> +				       "%s mbuf_data_room_size %u < %u (RTE_PKTMBUF_HEADROOM=%u + min_rx_bufsize(dev)=%u)\n",
> +					mpl->name, *mbp_buf_size,
> +					RTE_PKTMBUF_HEADROOM + dev_info->min_rx_bufsize,
> +					RTE_PKTMBUF_HEADROOM,
> +					dev_info->min_rx_bufsize);
> +			return -EINVAL;
> +		}
> +

Please, remove extra empty line

> +	}

If Rx scatter is disabled, at least one mempool must be
sufficient for up to MTU packets.

> +
> +	return 0;
> +}
> +
>   static int
>   rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split *rx_seg,
>   			     uint16_t n_seg, uint32_t *mbp_buf_size,
> @@ -1733,7 +1772,8 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>   
>   	if (mp != NULL) {
>   		/* Single pool configuration check. */
> -		if (rx_conf != NULL && rx_conf->rx_nseg != 0) {
> +		if (rx_conf != NULL &&
> +		    (rx_conf->rx_nseg != 0 ||  rx_conf->rx_npool)) {

rx_conf->rx_npool != 0 (as DPDK coding style says)

If mp is not NULL, it should be checked that neither buffer
split nor multiple mempool offloads are enabled.
Moreover, I think there is a bug in the buffer split check
which requires a separate pre-patch: the check for rx_nseg
being 0 is in fact not required, since the offload flag must
be used.

>   			RTE_ETHDEV_LOG(ERR,
>   				       "Ambiguous segment configuration\n");

segment -> Rx mempools

>   			return -EINVAL;
> @@ -1763,30 +1803,42 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,

>   				       dev_info.min_rx_bufsize);
>   			return -EINVAL;
>   		}
> -	} else {
> -		const struct rte_eth_rxseg_split *rx_seg;
> -		uint16_t n_seg;
> +	} else if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT ||
> +		  rx_conf->offloads & RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL) {

May be:
((rx_conf->offloads & (RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT |
                       RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL)) != 0)
(note the parentheses around the mask, since != binds tighter
than & in C)
However, I'd split these branches to have clearer checks.
If we do not support both buffer split and multi-mempool
simultaneously, it must be checked. Just double-check that
the other offload is not requested.

>   
> -		/* Extended multi-segment configuration check. */
> -		if (rx_conf == NULL || rx_conf->rx_seg == NULL || rx_conf->rx_nseg == 0) {
> +		/* Extended multi-segment/pool configuration check. */
> +		if (rx_conf == NULL ||
> +		    (rx_conf->rx_seg == NULL && rx_conf->rx_mempool == NULL) ||
> +		    (rx_conf->rx_nseg == 0 && rx_conf->rx_npool == 0)) {

IMHO such generalized checks are wrong. We must check for
the corresponding offload flag first.

>   			RTE_ETHDEV_LOG(ERR,
>   				       "Memory pool is null and no extended configuration provided\n");
>   			return -EINVAL;
>   		}
>   
> -		rx_seg = (const struct rte_eth_rxseg_split *)rx_conf->rx_seg;
> -		n_seg = rx_conf->rx_nseg;
> -
>   		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
> +			const struct rte_eth_rxseg_split *rx_seg =
> +				(const struct rte_eth_rxseg_split *)rx_conf->rx_seg;
> +			uint16_t n_seg = rx_conf->rx_nseg;
>   			ret = rte_eth_rx_queue_check_split(rx_seg, n_seg,
>   							   &mbp_buf_size,
>   							   &dev_info);
> -			if (ret != 0)
> +			if (ret)

Integers must be checked vs 0 explicitly in DPDK coding style.
Also the change looks unrelated.

>   				return ret;
> -		} else {
> -			RTE_ETHDEV_LOG(ERR, "No Rx segmentation offload configured\n");
> -			return -EINVAL;
>   		}
> +		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL) {
> +			const struct rte_eth_rx_mempool *rx_mempool =
> +				(const struct rte_eth_rx_mempool *)rx_conf->rx_mempool;
> +			ret = rte_eth_rx_queue_check_mempool(rx_mempool,
> +							     rx_conf->rx_npool,
> +							     &mbp_buf_size,
> +							     &dev_info);
> +			if (ret)
> +				return ret;
> +
> +		}
> +	} else {
> +		RTE_ETHDEV_LOG(ERR, "No Rx offload is configured\n");

The log message is misleading. Consider:
"Missing Rx mempool configuration\n"

> +		return -EINVAL;
>   	}
>   
>   	/* Use default specified by driver, if nb_rx_desc is zero */
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index b62ac5bb6f..17deec2cbd 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -1035,6 +1035,11 @@ union rte_eth_rxseg {
>   	/* The other features settings should be added here. */
>   };
>   
> +/* A common structure used to describe mbuf pools per Rx queue */
> +struct rte_eth_rx_mempool {
> +	struct rte_mempool *mp;
> +};

Why do we need it? Can't we just use below:
    struct rte_mempool *rx_mempools;

> +
>   /**
>    * A structure used to configure an Rx ring of an Ethernet port.
>    */
> @@ -1067,6 +1072,23 @@ struct rte_eth_rxconf {
>   	 */
>   	union rte_eth_rxseg *rx_seg;
>   
> +	/**
> +	 * Points to an array of mempools.
> +	 *

It should be highlighted that drivers should look at it
if and only if the corresponding offload is enabled for the
Rx queue.

> +	 * This provides support for  multiple mbuf pools per Rx queue.
> +	 *
> +	 * This is often useful for saving the memory where the application can
> +	 * create a different pools to steer the specific size of the packet, thus
> +	 * enabling effective use of memory.
> +	 *
> +	 * Note that on Rx scatter enable, a packet may be delivered using a chain
> +	 * of mbufs obtained from single mempool or multiple mempools based on
> +	 * the NIC implementation.
> +	 *

Remove extra empty line above.

> +	 */
> +	struct rte_eth_rx_mempool *rx_mempool;
> +	uint16_t rx_npool; /** < number of mempools */
> +
>   	uint64_t reserved_64s[2]; /**< Reserved for future fields */
>   	void *reserved_ptrs[2];   /**< Reserved for future fields */
>   };

[snip]

> @@ -1615,6 +1638,7 @@ struct rte_eth_dev_info {
>   	/** Configured number of Rx/Tx queues */
>   	uint16_t nb_rx_queues; /**< Number of Rx queues. */
>   	uint16_t nb_tx_queues; /**< Number of Tx queues. */
> +	uint16_t max_pools;

Description of the new member is missing. Please, add it.

>   	/** Rx parameter recommendations */
>   	struct rte_eth_dev_portconf default_rxportconf;
>   	/** Tx parameter recommendations */


