From mboxrd@z Thu Jan 1 00:00:00 1970
From: Hanumanth Pothula
To: Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko
CC: dev@dpdk.org, Hanumanth Pothula
Subject: [PATCH v1 1/1] ethdev: introduce pool sort capability
Date: Fri, 12 Aug 2022 16:16:48 +0530
Message-ID: <20220812104648.1019978-1-hpothula@marvell.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions

Presently, the 'Buffer Split' feature supports sending multiple
segments of the received packet to the PMD, which programs the HW to
receive the packet in segments from different pools.

This patch extends the feature to support the pool sort capability.
Some HW has support for choosing memory pools based on the packet's
size. The pool sort capability allows the PMD to choose a memory pool
based on the packet's length. This is often useful for saving memory,
where the application can create different pools to steer packets of
specific sizes, thus enabling effective use of memory.

For example, say the HW has a capability of three pools:
- pool-1 size is 2K
- pool-2 size is > 2K and < 4K
- pool-3 size is > 4K

Here,
- pool-1 can accommodate packets with sizes < 2K
- pool-2 can accommodate packets with sizes > 2K and < 4K
- pool-3 can accommodate packets with sizes > 4K

With the pool sort capability enabled in SW, an application may create
three pools of different sizes and pass them to the PMD, allowing the
PMD to program the HW based on the packet lengths.
Packets with lengths less than 2K are then received on pool-1, packets
with lengths between 2K and 4K are received on pool-2, and packets with
lengths greater than 4K are received on pool-3.

The following two capabilities are added to the rte_eth_rxseg_capa
structure:
1. pool_sort --> indicates that the pool sort capability is supported
   by the HW.
2. max_npool --> maximum number of pools supported by the HW.

Defined a new structure, rte_eth_rxseg_sort, to be used only when the
pool sort capability is present. If required, this may be extended
further to support more configurations.

Signed-off-by: Hanumanth Pothula
Change-Id: I5a2485a7919616902c468c767b5c01834d4a2c27
---
 lib/ethdev/rte_ethdev.c | 81 ++++++++++++++++++++++++++++++++++++++---
 lib/ethdev/rte_ethdev.h | 46 +++++++++++++++++++++--
 2 files changed, 119 insertions(+), 8 deletions(-)

diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 1979dc0850..e21a651787 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -1634,6 +1634,54 @@ rte_eth_dev_is_removed(uint16_t port_id)
 	return ret;
 }
 
+static int
+rte_eth_rx_queue_check_sort(const struct rte_eth_rxseg_sort *rx_seg,
+			    uint16_t n_seg, uint32_t *mbp_buf_size,
+			    const struct rte_eth_dev_info *dev_info)
+{
+	const struct rte_eth_rxseg_capa *seg_capa = &dev_info->rx_seg_capa;
+	uint16_t seg_idx;
+
+	if (!seg_capa->multi_pools || n_seg > seg_capa->max_npool) {
+		RTE_ETHDEV_LOG(ERR,
+			       "Invalid capabilities, multi_pools:%d, %u different-length segments exceed supported %u\n",
+			       seg_capa->multi_pools, n_seg, seg_capa->max_npool);
+		return -EINVAL;
+	}
+
+	for (seg_idx = 0; seg_idx < n_seg; seg_idx++) {
+		struct rte_mempool *mpl = rx_seg[seg_idx].mp;
+		uint32_t length = rx_seg[seg_idx].length;
+
+		if (mpl == NULL) {
+			RTE_ETHDEV_LOG(ERR, "null mempool pointer\n");
+			return -EINVAL;
+		}
+
+		if (mpl->private_data_size <
+		    sizeof(struct rte_pktmbuf_pool_private)) {
+			RTE_ETHDEV_LOG(ERR,
+				       "%s private_data_size %u < %u\n",
+				       mpl->name, mpl->private_data_size,
+				       (unsigned int)sizeof
+				       (struct rte_pktmbuf_pool_private));
+			return -ENOSPC;
+		}
+
+		*mbp_buf_size = rte_pktmbuf_data_room_size(mpl);
+		length = length != 0 ? length : (*mbp_buf_size - RTE_PKTMBUF_HEADROOM);
+		if (*mbp_buf_size < length + RTE_PKTMBUF_HEADROOM) {
+			RTE_ETHDEV_LOG(ERR,
+				       "%s mbuf_data_room_size %u < %u\n",
+				       mpl->name, *mbp_buf_size,
+				       length);
+			return -EINVAL;
+		}
+	}
+
+	return 0;
+}
+
 static int
 rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split *rx_seg,
 			     uint16_t n_seg, uint32_t *mbp_buf_size,
@@ -1693,7 +1741,11 @@ rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split *rx_seg,
 		}
 		offset += seg_idx != 0 ? 0 : RTE_PKTMBUF_HEADROOM;
 		*mbp_buf_size = rte_pktmbuf_data_room_size(mpl);
-		length = length != 0 ? length : *mbp_buf_size;
+		/* On segment length == 0, update the segment's length with
+		 * the pool's data room size minus the headroom, to make sure
+		 * enough space remains for the headroom.
+		 */
+		length = length != 0 ? length : (*mbp_buf_size - RTE_PKTMBUF_HEADROOM);
 		if (*mbp_buf_size < length + offset) {
 			RTE_ETHDEV_LOG(ERR,
 				       "%s mbuf_data_room_size %u < %u (segment length=%u + segment offset=%u)\n",
@@ -1765,6 +1817,7 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 		}
 	} else {
 		const struct rte_eth_rxseg_split *rx_seg;
+		const struct rte_eth_rxseg_sort *rx_sort;
 		uint16_t n_seg;
 
 		/* Extended multi-segment configuration check.
		 */
@@ -1774,13 +1827,31 @@ rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id,
 			return -EINVAL;
 		}
 
-		rx_seg = (const struct rte_eth_rxseg_split *)rx_conf->rx_seg;
 		n_seg = rx_conf->rx_nseg;
 
 		if (rx_conf->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) {
-			ret = rte_eth_rx_queue_check_split(rx_seg, n_seg,
-							   &mbp_buf_size,
-							   &dev_info);
+			/* Make sure at least one of the conditions below becomes true */
+			ret = -1;
+
+			/* Check that both the device and the application support the buffer-split capability */
+			if (dev_info.rx_seg_capa.mode_flag == RTE_ETH_RXSEG_MODE_SPLIT &&
+			    rx_conf->rx_seg->mode_flag == RTE_ETH_RXSEG_MODE_SPLIT) {
+				rx_seg = (const struct rte_eth_rxseg_split *)
+					 &(rx_conf->rx_seg->split);
+				ret = rte_eth_rx_queue_check_split(rx_seg, n_seg,
+								   &mbp_buf_size,
+								   &dev_info);
+			}
+
+			/* Check that both the device and the application support the pool-sort capability */
+			if (dev_info.rx_seg_capa.mode_flag == RTE_ETH_RXSEG_MODE_SORT &&
+			    rx_conf->rx_seg->mode_flag == RTE_ETH_RXSEG_MODE_SORT) {
+				rx_sort = (const struct rte_eth_rxseg_sort *)
+					  &(rx_conf->rx_seg->sort);
+				ret = rte_eth_rx_queue_check_sort(rx_sort, n_seg,
+								  &mbp_buf_size,
+								  &dev_info);
+			}
+
 			if (ret != 0)
 				return ret;
 		} else {
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index de9e970d4d..9ff8ba8085 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1204,16 +1204,53 @@ struct rte_eth_rxseg_split {
 	uint32_t reserved; /**< Reserved field. */
 };
 
+/**
+ * The pool sort capability allows the PMD to choose a memory pool based on
+ * the packet's length. That is, the PMD programs the HW to receive packets
+ * from different pools based on the packet's length.
+ *
+ * This is often useful for saving memory, where the application can create
+ * different pools to steer packets of specific sizes, thus enabling
+ * effective use of memory.
+ */
+struct rte_eth_rxseg_sort {
+	struct rte_mempool *mp; /**< Memory pool to allocate packets from. */
+	uint16_t length; /**< Packet data length.
	 */
+	uint32_t reserved; /**< Reserved field. */
+};
+
+enum rte_eth_rxseg_mode {
+	/**
+	 * Buffer split mode: the PMD splits the received packets into
+	 * multiple segments.
+	 * @see struct rte_eth_rxseg_split
+	 */
+	RTE_ETH_RXSEG_MODE_SPLIT = RTE_BIT64(0),
+	/**
+	 * Pool sort mode: the PMD chooses a memory pool based on the
+	 * packet's length.
+	 * @see struct rte_eth_rxseg_sort
+	 */
+	RTE_ETH_RXSEG_MODE_SORT = RTE_BIT64(1),
+};
+
 /**
  * @warning
  * @b EXPERIMENTAL: this structure may change without prior notice.
  *
  * A common structure used to describe Rx packet segment properties.
  */
-union rte_eth_rxseg {
+struct rte_eth_rxseg {
+	/**
+	 * A PMD may support more than one rxseg mode. This allows the
+	 * application to choose which mode to enable.
+	 */
+	enum rte_eth_rxseg_mode mode_flag;
+
 	/* The settings for buffer split offload. */
 	struct rte_eth_rxseg_split split;
-	/* The other features settings should be added here. */
+
+	/* The settings for packet sort offload. */
+	struct rte_eth_rxseg_sort sort;
 };
 
 /**
@@ -1246,7 +1283,7 @@ struct rte_eth_rxconf {
 	 * The supported capabilities of receiving segmentation is reported
 	 * in rte_eth_dev_info.rx_seg_capa field.
 	 */
-	union rte_eth_rxseg *rx_seg;
+	struct rte_eth_rxseg *rx_seg;
 
 	uint64_t reserved_64s[2]; /**< Reserved for future fields */
 	void *reserved_ptrs[2];   /**< Reserved for future fields */
@@ -1831,6 +1868,9 @@ struct rte_eth_rxseg_capa {
 	uint32_t offset_allowed:1; /**< Supports buffer offsets. */
 	uint32_t offset_align_log2:4; /**< Required offset alignment. */
 	uint16_t max_nseg; /**< Maximum amount of segments to split. */
+	/** Maximum number of pools the PMD can sort into, based on packet/segment lengths. */
+	uint16_t max_npool;
+	enum rte_eth_rxseg_mode mode_flag; /**< Supported rxseg modes. */
 	uint16_t reserved; /**< Reserved field. */
 };
-- 
2.25.1