Date: Wed, 28 Sep 2022 12:43:29 +0300
Subject: Re: [PATCH v4 1/3] ethdev: Add support for mulitiple mbuf pools per Rx queue
From: Andrew Rybchenko
Organization: OKTET Labs
To: Hanumanth Pothula, Thomas Monjalon, Ferruh Yigit
Cc: dev@dpdk.org, xuan.ding@intel.com, wenxuanx.wu@intel.com,
 xiaoyun.li@intel.com, stephen@networkplumber.org, yuanx.wang@intel.com,
 mdr@ashroe.eu, yuying.zhang@intel.com, qi.z.zhang@intel.com,
 viacheslavo@nvidia.com, jerinj@marvell.com, ndabilpuram@marvell.com,
 Morten Brørup
Message-ID: <961d9652-88e8-2a4d-8c92-8a9440834a49@oktetlabs.ru>
In-Reply-To: <20220915070732.182542-1-hpothula@marvell.com>
References: <20220902070047.2812906-1-hpothula@marvell.com>
 <20220915070732.182542-1-hpothula@marvell.com>

"Add support for" -> "add support for" or just "support" if line is long

On 9/15/22 10:07, Hanumanth Pothula wrote:
> This patch adds support for multiple mempool capability.

"This patch adds" -> "Add"

> Some of the HW has support for choosing memory pools based on the
> packet's size. Thiscapability allows PMD to choose a memory pool

Thiscapability -> The capability

> based on the packet's length.
>
> This is often useful for saving the memory where the application
> can create a different pool to steer the specific size of the
> packet, thus enabling effective use of memory.
>
> For example, let's say HW has a capability of three pools,
> - pool-1 size is 2K
> - pool-2 size is > 2K and < 4K
> - pool-3 size is > 4K
> Here,
> pool-1 can accommodate packets with sizes < 2K
> pool-2 can accommodate packets with sizes > 2K and < 4K
> pool-3 can accommodate packets with sizes > 4K
>
> With multiple mempool capability enabled in SW, an application may
> create three pools of different sizes and send them to PMD. Allowing
> PMD to program HW based on the packet lengths. So that packets with
> less than 2K are received on pool-1, packets with lengths between 2K
> and 4K are received on pool-2 and finally packets greater than 4K
> are received on pool-3.
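BTW, to make sure we are on the same page, the intended usage as I
understand it would be roughly as below. It is an untested sketch which
uses the names introduced by this very patch (so it may change per the
comments below), and the pool sizes and counts are arbitrary:

#include <errno.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

/* Set up one Rx queue fed from three pools sized for up to 2K, 4K and
 * jumbo packets. The mb_pool argument of rte_eth_rx_queue_setup() is
 * NULL since the pools are passed via rx_conf instead. It is assumed
 * that the PMD consumes the pools array during setup.
 */
static int
setup_multi_pool_rxq(uint16_t port_id, uint16_t qid)
{
    struct rte_mempool *p2k, *p4k, *p9k;

    p2k = rte_pktmbuf_pool_create("rx_2k", 8192, 256, 0,
            2048 + RTE_PKTMBUF_HEADROOM, rte_socket_id());
    p4k = rte_pktmbuf_pool_create("rx_4k", 4096, 256, 0,
            4096 + RTE_PKTMBUF_HEADROOM, rte_socket_id());
    p9k = rte_pktmbuf_pool_create("rx_9k", 1024, 256, 0,
            9216 + RTE_PKTMBUF_HEADROOM, rte_socket_id());
    if (p2k == NULL || p4k == NULL || p9k == NULL)
        return -ENOMEM;

    struct rte_eth_rx_mempool pools[] = {
        { .mp = p2k }, { .mp = p4k }, { .mp = p9k },
    };
    struct rte_eth_rxconf rxconf = {
        .offloads = RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL,
        .rx_mempool = pools,
        .rx_npool = RTE_DIM(pools),
    };

    return rte_eth_rx_queue_setup(port_id, qid, 512, rte_socket_id(),
            &rxconf, NULL);
}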
>
> Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>

Please, advertise the new feature in release notes.

[snip]

> diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
> index 1979dc0850..8618d6b01d 100644
> --- a/lib/ethdev/rte_ethdev.c
> +++ b/lib/ethdev/rte_ethdev.c
> @@ -1634,6 +1634,45 @@ rte_eth_dev_is_removed(uint16_t port_id)
>  	return ret;
>  }
>  
> +static int
> +rte_eth_rx_queue_check_mempool(const struct rte_eth_rx_mempool *rx_mempool,
> +			uint16_t n_pool, uint32_t *mbp_buf_size,
> +			const struct rte_eth_dev_info *dev_info)
> +{
> +	uint16_t pool_idx;
> +
> +	if (n_pool > dev_info->max_pools) {
> +		RTE_ETHDEV_LOG(ERR,
> +			       "Invalid capabilities, max pools supported %u\n",

"Invalid capabilities" sounds misleading. Consider something like:

    "Too many Rx mempools %u vs maximum %u\n",
    n_pool, dev_info->max_pools

> +			       dev_info->max_pools);
> +		return -EINVAL;
> +	}
> +
> +	for (pool_idx = 0; pool_idx < n_pool; pool_idx++) {
> +		struct rte_mempool *mpl = rx_mempool[pool_idx].mp;
> +
> +		if (mpl == NULL) {
> +			RTE_ETHDEV_LOG(ERR, "null mempool pointer\n");

"null Rx mempool pointer\n"

> +			return -EINVAL;
> +		}
> +
> +		*mbp_buf_size = rte_pktmbuf_data_room_size(mpl);
> +		if (*mbp_buf_size < dev_info->min_rx_bufsize +
> +		    RTE_PKTMBUF_HEADROOM) {
> +			RTE_ETHDEV_LOG(ERR,
> +				       "%s mbuf_data_room_size %u < %u (RTE_PKTMBUF_HEADROOM=%u + min_rx_bufsize(dev)=%u)\n",
> +				       mpl->name, *mbp_buf_size,
> +				       RTE_PKTMBUF_HEADROOM + dev_info->min_rx_bufsize,
> +				       RTE_PKTMBUF_HEADROOM,
> +				       dev_info->min_rx_bufsize);
> +			return -EINVAL;
> +		}
> +

Please, remove extra empty line

> +	}

If Rx scatter is disabled, at least one mempool must be sufficient
for up to MTU packets.

> +
> +	return 0;
> +}
> +
>  static int
>  rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split *rx_seg,
>  			     uint16_t n_seg, uint32_t *mbp_buf_size,
> @@ -1733,7 +1772,8 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>  
>  	if (mp != NULL) {
>  		/* Single pool configuration check. */
> -		if (rx_conf != NULL && rx_conf->rx_nseg != 0) {
> +		if (rx_conf != NULL &&
> +		    (rx_conf->rx_nseg != 0 || rx_conf->rx_npool)) {

rx_conf->rx_npool != 0 (as DPDK coding style says)

If mp is not NULL, it should be checked that neither buffer split nor
multiple mempool offloads are enabled. Moreover, I think it is a bug in
buffer split which requires a separate pre-patch. In fact, the check
that rx_nseg is 0 is not required since the offload flag must be used.

>  			RTE_ETHDEV_LOG(ERR,
>  				       "Ambiguous segment configuration\n");

segment -> Rx mempools

>  			return -EINVAL;
> @@ -1763,30 +1803,42 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
>  				       dev_info.min_rx_bufsize);
>  			return -EINVAL;
>  		}
> -	} else {
> -		const struct rte_eth_rxseg_split *rx_seg;
> -		uint16_t n_seg;
> +	} else if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT ||
> +		   rx_conf->offloads & RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL) {

Maybe:

    (rx_conf->offloads &
     (RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT | RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL)) != 0

However, I'd split these branches to have more clear checks. If we do
not support both buffer split and multi-mempool simultaneously - it
must be checked. Just double check that another offload is not
requested.
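I.e. something along these lines inside rte_eth_rx_queue_setup()
(untested, just to illustrate the kind of split I mean; the error
message wording is mine):

    uint64_t offloads = rx_conf == NULL ? 0 : rx_conf->offloads;

    /* The two extended configurations are mutually exclusive */
    if ((offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0 &&
        (offloads & RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL) != 0) {
        RTE_ETHDEV_LOG(ERR,
            "Buffer split and multiple Rx mempools cannot be used together\n");
        return -EINVAL;
    }
    if ((offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) != 0) {
        /* validate rx_seg/rx_nseg only */
    } else if ((offloads & RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL) != 0) {
        /* validate rx_mempool/rx_npool only */
    }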
>
> -		/* Extended multi-segment configuration check. */
> -		if (rx_conf == NULL || rx_conf->rx_seg == NULL || rx_conf->rx_nseg == 0) {
> +		/* Extended multi-segment/pool configuration check. */
> +		if (rx_conf == NULL ||
> +		    (rx_conf->rx_seg == NULL && rx_conf->rx_mempool == NULL) ||
> +		    (rx_conf->rx_nseg == 0 && rx_conf->rx_npool == 0)) {

IMHO such generalized checks are wrong. We must check for the
corresponding offload flag first.

>  			RTE_ETHDEV_LOG(ERR,
>  				       "Memory pool is null and no extended configuration provided\n");
>  			return -EINVAL;
>  		}
>  
> -		rx_seg = (const struct rte_eth_rxseg_split *)rx_conf->rx_seg;
> -		n_seg = rx_conf->rx_nseg;
> -
>  		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
> +			const struct rte_eth_rxseg_split *rx_seg =
> +				(const struct rte_eth_rxseg_split *)rx_conf->rx_seg;
> +			uint16_t n_seg = rx_conf->rx_nseg;
>  			ret = rte_eth_rx_queue_check_split(rx_seg, n_seg,
>  							   &mbp_buf_size,
>  							   &dev_info);
> -			if (ret != 0)
> +			if (ret)

Integers must be checked vs 0 explicitly in DPDK coding style. Also
the change looks unrelated.

>  				return ret;
> -		} else {
> -			RTE_ETHDEV_LOG(ERR, "No Rx segmentation offload configured\n");
> -			return -EINVAL;
>  		}
> +		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_MUL_MEMPOOL) {
> +			const struct rte_eth_rx_mempool *rx_mempool =
> +				(const struct rte_eth_rx_mempool *)rx_conf->rx_mempool;
> +			ret = rte_eth_rx_queue_check_mempool(rx_mempool,
> +							     rx_conf->rx_npool,
> +							     &mbp_buf_size,
> +							     &dev_info);
> +			if (ret)
> +				return ret;
> +
> +		}
> +	} else {
> +		RTE_ETHDEV_LOG(ERR, "No Rx offload is configured\n");

The log message is misleading. Consider:

    "Missing Rx mempool configuration\n"

> +		return -EINVAL;
>  	}
>  
>  	/* Use default specified by driver, if nb_rx_desc is zero */
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index b62ac5bb6f..17deec2cbd 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -1035,6 +1035,11 @@ union rte_eth_rxseg {
>  	/* The other features settings should be added here. */
>  };
>  
> +/* A common structure used to describe mbuf pools per Rx queue */
> +struct rte_eth_rx_mempool {
> +	struct rte_mempool *mp;
> +};

Why do we need it? Can we use below just

    struct rte_mempool *rx_mempools;

> +
>  /**
>   * A structure used to configure an Rx ring of an Ethernet port.
>   */
> @@ -1067,6 +1072,23 @@ struct rte_eth_rxconf {
>  	 */
>  	union rte_eth_rxseg *rx_seg;
>  
> +	/**
> +	 * Points to an array of mempools.
> +	 *

It should be highlighted that drivers should take a look at it if and
only if the corresponding offload is enabled for the Rx queue.

> +	 * This provides support for multiple mbuf pools per Rx queue.
> +	 *
> +	 * This is often useful for saving the memory where the application can
> +	 * create a different pools to steer the specific size of the packet, thus
> +	 * enabling effective use of memory.
> +	 *
> +	 * Note that on Rx scatter enable, a packet may be delivered using a chain
> +	 * of mbufs obtained from single mempool or multiple mempools based on
> +	 * the NIC implementation.
> +	 *

Remove extra empty line above.

> +	 */
> +	struct rte_eth_rx_mempool *rx_mempool;
> +	uint16_t rx_npool; /** < number of mempools */
> +
>  	uint64_t reserved_64s[2]; /**< Reserved for future fields */
>  	void *reserved_ptrs[2]; /**< Reserved for future fields */
>  };

[snip]

> @@ -1615,6 +1638,7 @@ struct rte_eth_dev_info {
>  	/** Configured number of Rx/Tx queues */
>  	uint16_t nb_rx_queues; /**< Number of Rx queues. */
>  	uint16_t nb_tx_queues; /**< Number of Tx queues. */
> +	uint16_t max_pools;

Description of the new member is missing. Please, add it.
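E.g. (the wording is only a suggestion):

    /** Maximum number of Rx mempools supported per Rx queue. */
    uint16_t max_pools;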
>  	/** Rx parameter recommendations */
>  	struct rte_eth_dev_portconf default_rxportconf;
>  	/** Tx parameter recommendations */
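To sum up my comments on the rte_eth_rxconf changes: I'd expect the new
fields to look more like the following, without the extra wrapper
structure (a sketch only, naming is of course open for discussion):

    /**
     * Array of pointers to mempools to pull Rx buffers from when the
     * multiple mempool offload is enabled. Drivers must use it if and
     * only if the offload is requested for the queue.
     */
    struct rte_mempool **rx_mempools;
    uint16_t rx_nmempool; /**< Number of mempools in the array */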