From mboxrd@z Thu Jan 1 00:00:00 1970
From: Spike Du
To: , , , , Shahaf Shuler
CC: , , , ,
Subject: [PATCH v4 5/7] net/mlx5: support Rx queue based fill threshold
Date: Fri, 3 Jun 2022 15:48:19 +0300
Message-ID: <20220603124821.1148119-6-spiked@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20220603124821.1148119-1-spiked@nvidia.com>
References: <20220524152041.737154-1-spiked@nvidia.com>
 <20220603124821.1148119-1-spiked@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: dev@dpdk.org
Precedence: list
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

Add the mlx5-specific fill threshold configuration and query handlers.
In the mlx5 PMD, the fill threshold is also called LWM (limit watermark).
When the Rx queue fullness reaches the LWM limit, the driver catches
an HW event and invokes the user callback.
The query handler finds the next Rx queue with a pending LWM event,
if any, starting from the given Rx queue index.

Signed-off-by: Spike Du
---
 doc/guides/nics/mlx5.rst               |  12 +++
 doc/guides/rel_notes/release_22_07.rst |   1 +
 drivers/common/mlx5/mlx5_prm.h         |   1 +
 drivers/net/mlx5/mlx5.c                |   2 +
 drivers/net/mlx5/mlx5_rx.c             | 156 +++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_rx.h             |   5 ++
 6 files changed, 177 insertions(+)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index d83c56d..ea393fb 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -93,6 +93,7 @@ Features
 - Connection tracking.
 - Sub-Function representors.
 - Sub-Function.
+- Rx queue fill threshold configuration.
 
 
 Limitations
@@ -520,6 +521,9 @@ Limitations
 
   - The NIC egress flow rules on representor port are not supported.
 
+- Fill threshold:
+
+  - Doesn't support shared Rx queue and Hairpin Rx queue.
 
 Statistics
 ----------
@@ -1680,3 +1684,11 @@ The procedure below is an example of using a ConnectX-5 adapter card (pf0) with
 #. For each VF PCIe, using the following command to bind the driver::
 
    $ echo "0000:82:00.2" >> /sys/bus/pci/drivers/mlx5_core/bind
+
+Fill threshold introduction
+---------------------------
+
+Fill threshold is a per Rx queue attribute; it is configured as
+a percentage of the Rx queue size.
+When the Rx queue fullness is above the threshold, an event is sent to the PMD.
+
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index 0ed4f92..62a8874 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -89,6 +89,7 @@ New Features
   * Added support for promiscuous mode on Windows.
   * Added support for MTU on Windows.
   * Added matching and RSS on IPsec ESP.
+  * Added Rx queue fill threshold support.
 
 * **Updated Marvell cnxk crypto driver.**
 
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 630b2c5..3b5e605 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -3293,6 +3293,7 @@ struct mlx5_aso_wqe {
 
 enum {
 	MLX5_EVENT_TYPE_OBJECT_CHANGE = 0x27,
+	MLX5_EVENT_TYPE_SRQ_LIMIT_REACHED = 0x14,
 };
 
 enum {
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index e04a666..a4a39ab 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2071,6 +2071,8 @@ struct mlx5_dev_ctx_shared *
 	.dev_supported_ptypes_get = mlx5_dev_supported_ptypes_get,
 	.vlan_filter_set = mlx5_vlan_filter_set,
 	.rx_queue_setup = mlx5_rx_queue_setup,
+	.rx_queue_fill_thresh_set = mlx5_rx_queue_lwm_set,
+	.rx_queue_fill_thresh_query = mlx5_rx_queue_lwm_query,
 	.rx_hairpin_queue_setup = mlx5_rx_hairpin_queue_setup,
 	.tx_queue_setup = mlx5_tx_queue_setup,
 	.tx_hairpin_queue_setup = mlx5_tx_hairpin_queue_setup,
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index aacb43e..4099496 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -19,12 +19,14 @@
 #include
 #include
 #include
+#include
 
 #include "mlx5_autoconf.h"
 #include "mlx5_defs.h"
 #include "mlx5.h"
 #include "mlx5_utils.h"
 #include "mlx5_rxtx.h"
+#include "mlx5_devx.h"
 #include "mlx5_rx.h"
 
@@ -128,6 +130,17 @@
 	return RTE_ETH_RX_DESC_AVAIL;
 }
 
+/* Get rxq lwm percentage according to lwm number. */
+static uint8_t
+mlx5_rxq_lwm_to_percentage(struct mlx5_rxq_priv *rxq)
+{
+	struct mlx5_rxq_data *rxq_data = &rxq->ctrl->rxq;
+	uint32_t wqe_cnt = 1 << (rxq_data->elts_n - rxq_data->sges_n);
+
+	/* ethdev LWM describes fullness, mlx5 LWM describes emptiness. */
+	return rxq->lwm ? (100 - rxq->lwm * 100 / wqe_cnt) : 0;
+}
+
 /**
  * DPDK callback to get the RX queue information.
  *
@@ -150,6 +163,7 @@
 {
 	struct mlx5_rxq_ctrl *rxq_ctrl = mlx5_rxq_ctrl_get(dev, rx_queue_id);
 	struct mlx5_rxq_data *rxq = mlx5_rxq_data_get(dev, rx_queue_id);
+	struct mlx5_rxq_priv *rxq_priv = mlx5_rxq_get(dev, rx_queue_id);
 
 	if (!rxq)
 		return;
@@ -169,6 +183,8 @@
 	qinfo->nb_desc = mlx5_rxq_mprq_enabled(rxq) ?
 		RTE_BIT32(rxq->elts_n) * RTE_BIT32(rxq->log_strd_num) :
 		RTE_BIT32(rxq->elts_n);
+	qinfo->fill_thresh = rxq_priv ?
+		mlx5_rxq_lwm_to_percentage(rxq_priv) : 0;
 }
 
 /**
@@ -1188,6 +1204,34 @@ int mlx5_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc)
 	return -ENOTSUP;
 }
 
+int
+mlx5_rx_queue_lwm_query(struct rte_eth_dev *dev,
+			uint16_t *queue_id, uint8_t *lwm)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	unsigned int rxq_id, found = 0, n;
+	struct mlx5_rxq_priv *rxq;
+
+	if (!queue_id)
+		return -EINVAL;
+	/* Query all the Rx queues of the port in a circular way. */
+	for (rxq_id = *queue_id, n = 0; n < priv->rxqs_n; n++) {
+		rxq = mlx5_rxq_get(dev, rxq_id);
+		if (rxq && rxq->lwm_event_pending) {
+			pthread_mutex_lock(&priv->sh->lwm_config_lock);
+			rxq->lwm_event_pending = 0;
+			pthread_mutex_unlock(&priv->sh->lwm_config_lock);
+			*queue_id = rxq_id;
+			found = 1;
+			if (lwm)
+				*lwm = mlx5_rxq_lwm_to_percentage(rxq);
+			break;
+		}
+		rxq_id = (rxq_id + 1) % priv->rxqs_n;
+	}
+	return found;
+}
+
 /**
  * Rte interrupt handler for LWM event.
  * It first checks if the event arrives, if so process the callback for
@@ -1220,3 +1264,115 @@ int mlx5_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc)
 	}
 	rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_RX_FILL_THRESH, NULL);
 }
+
+/**
+ * DPDK callback to arm an Rx queue LWM(limit watermark) event.
+ * When the Rx queue fullness reaches the LWM limit, the driver catches
+ * an HW event and invokes the user event callback.
+ * After the last event handling, the user needs to call this API again
+ * to arm an additional event.
+ *
+ * @param dev
+ *   Pointer to the device structure.
+ * @param[in] rx_queue_id
+ *   Rx queue identifier.
+ * @param[in] lwm
+ *   The LWM value, defined as a percentage of the Rx queue size.
+ *   [1-99] to set a new LWM (update the old value).
+ *   0 to unarm the event.
+ *
+ * @return
+ *   0 : operation success.
+ *   Otherwise:
+ *   - ENOMEM - not enough memory to create LWM event channel.
+ *   - EINVAL - the input Rxq is not created by devx.
+ *   - E2BIG  - lwm is bigger than 99.
+ */
+int
+mlx5_rx_queue_lwm_set(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+		      uint8_t lwm)
+{
+	struct mlx5_priv *priv = dev->data->dev_private;
+	uint16_t port_id = PORT_ID(priv);
+	struct mlx5_rxq_priv *rxq = mlx5_rxq_get(dev, rx_queue_id);
+	uint16_t event_nums[1] = {MLX5_EVENT_TYPE_SRQ_LIMIT_REACHED};
+	struct mlx5_rxq_data *rxq_data;
+	uint32_t wqe_cnt;
+	uint64_t cookie;
+	int ret = 0;
+
+	if (!rxq) {
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	rxq_data = &rxq->ctrl->rxq;
+	/* Ensure the Rq is created by devx. */
+	if (priv->obj_ops.rxq_obj_new != devx_obj_ops.rxq_obj_new) {
+		rte_errno = EINVAL;
+		return -rte_errno;
+	}
+	if (lwm > 99) {
+		DRV_LOG(WARNING, "Too big LWM configuration.");
+		rte_errno = E2BIG;
+		return -rte_errno;
+	}
+	/* Start configuring LWM. */
+	pthread_mutex_lock(&priv->sh->lwm_config_lock);
+	if (rxq->lwm == 0 && lwm == 0) {
+		/* Both old/new values are 0, do nothing. */
+		ret = 0;
+		goto end;
+	}
+	wqe_cnt = 1 << (rxq_data->elts_n - rxq_data->sges_n);
+	if (lwm) {
+		if (!priv->sh->devx_channel_lwm) {
+			ret = mlx5_lwm_setup(priv);
+			if (ret) {
+				DRV_LOG(WARNING,
+					"Failed to create shared_lwm.");
+				rte_errno = ENOMEM;
+				ret = -rte_errno;
+				goto end;
+			}
+		}
+		if (!rxq->lwm_devx_subscribed) {
+			cookie = ((uint32_t)
+				  (port_id << LWM_COOKIE_PORTID_OFFSET)) |
+				 (rx_queue_id << LWM_COOKIE_RXQID_OFFSET);
+			ret = mlx5_os_devx_subscribe_devx_event
+				(priv->sh->devx_channel_lwm,
+				 rxq->devx_rq.rq->obj,
+				 sizeof(event_nums),
+				 event_nums,
+				 cookie);
+			if (ret) {
+				rte_errno = rte_errno ? rte_errno : EINVAL;
+				ret = -rte_errno;
+				goto end;
+			}
+			rxq->lwm_devx_subscribed = 1;
+		}
+	}
+	/* The ethdev LWM describes fullness, mlx5 lwm describes emptiness. */
+	if (lwm)
+		lwm = 100 - lwm;
+	/* Save LWM to rxq and send modify_rq devx command. */
+	rxq->lwm = lwm * wqe_cnt / 100;
+	/* Prevent integer division loss when converting percentage to LWM number. */
+	if (lwm && (lwm * wqe_cnt % 100)) {
+		rxq->lwm = ((uint32_t)(rxq->lwm + 1) >= wqe_cnt) ?
+			rxq->lwm : (rxq->lwm + 1);
+	}
+	if (lwm && !rxq->lwm) {
+		/* With mprq, wqe_cnt may be < 100. */
+		DRV_LOG(WARNING, "Too small LWM configuration.");
+		rte_errno = EINVAL;
+		ret = -rte_errno;
+		goto end;
+	}
+	ret = mlx5_devx_modify_rq(rxq, MLX5_RXQ_MOD_RDY2RDY);
+end:
+	pthread_mutex_unlock(&priv->sh->lwm_config_lock);
+	return ret;
+}
+
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 068dff5..e078aaf 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -177,6 +177,7 @@ struct mlx5_rxq_priv {
 	uint32_t hairpin_status; /* Hairpin binding status. */
 	uint32_t lwm:16;
 	uint32_t lwm_event_pending:1;
+	uint32_t lwm_devx_subscribed:1;
 };
 
 /* External RX queue descriptor.
 */
@@ -297,6 +298,10 @@ int mlx5_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 			   struct rte_eth_burst_mode *mode);
 int mlx5_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc);
 void mlx5_dev_interrupt_handler_lwm(void *args);
+int mlx5_rx_queue_lwm_set(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+			  uint8_t lwm);
+int mlx5_rx_queue_lwm_query(struct rte_eth_dev *dev, uint16_t *rx_queue_id,
+			    uint8_t *lwm);
 
 /* Vectorized version of mlx5_rx.c */
 int mlx5_rxq_check_vec_support(struct mlx5_rxq_data *rxq_data);
-- 
1.8.3.1
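
The fullness/emptiness conversion in mlx5_rx_queue_lwm_set() above is the
subtle part of this patch, so here is a small standalone sketch of the same
arithmetic. It is illustration only and not part of the patch:
fill_thresh_to_lwm() and lwm_to_fill_thresh() are hypothetical helper names,
and the sketch assumes the same wqe_cnt = 1 << (elts_n - sges_n) queue sizing
the driver uses. At the application level, the threshold is armed through the
ethdev fill threshold API added earlier in this series, and the application
re-arms it after handling each RTE_ETH_EVENT_RX_FILL_THRESH event.

/*
 * Standalone sketch, not part of the patch: it mirrors the arithmetic of
 * mlx5_rx_queue_lwm_set() and mlx5_rxq_lwm_to_percentage(). The helper
 * names fill_thresh_to_lwm() and lwm_to_fill_thresh() are hypothetical.
 */
#include <stdint.h>
#include <stdio.h>

/*
 * Convert an ethdev fill threshold (fullness, 1-99%) into an mlx5 LWM
 * descriptor count (emptiness), rounding up as the driver does so that
 * a nonzero percentage is not silently lost to integer division.
 */
static uint32_t
fill_thresh_to_lwm(uint8_t thresh_pct, uint32_t wqe_cnt)
{
	uint8_t lwm_pct;
	uint32_t lwm;

	if (thresh_pct == 0)
		return 0; /* 0 means the event is unarmed. */
	/* ethdev describes fullness, mlx5 LWM describes emptiness. */
	lwm_pct = 100 - thresh_pct;
	lwm = lwm_pct * wqe_cnt / 100;
	/* Round up, but never reach the full queue size. */
	if ((lwm_pct * wqe_cnt % 100) && (lwm + 1 < wqe_cnt))
		lwm += 1;
	return lwm; /* 0 here means the queue is too small (e.g. MPRQ). */
}

/* Reverse conversion, as used when reporting fill_thresh in rxq_info_get. */
static uint8_t
lwm_to_fill_thresh(uint32_t lwm, uint32_t wqe_cnt)
{
	return lwm ? (uint8_t)(100 - lwm * 100 / wqe_cnt) : 0;
}

int
main(void)
{
	uint32_t wqe_cnt = 1u << 10;  /* e.g. elts_n - sges_n == 10 */
	unsigned int thresh = 70;     /* Raise an event at 70% fullness. */
	uint32_t lwm = fill_thresh_to_lwm((uint8_t)thresh, wqe_cnt);

	printf("wqe_cnt=%u thresh=%u%% -> lwm=%u (reported back as %u%%)\n",
	       wqe_cnt, thresh, lwm,
	       (unsigned int)lwm_to_fill_thresh(lwm, wqe_cnt));
	return 0;
}

With wqe_cnt = 1024 and a 70% threshold, the sketch arms the LWM at 308
descriptors of emptiness and reports it back as 70% fullness, matching the
rounding behavior of the driver code above.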