From: Spike Du
Subject: [PATCH v3 4/7] net/mlx5: add LWM event handling support
Date: Tue, 24 May 2022 18:20:38 +0300
Message-ID: <20220524152041.737154-5-spiked@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20220524152041.737154-1-spiked@nvidia.com>
References: <20220522055900.417282-1-spiked@nvidia.com>
 <20220524152041.737154-1-spiked@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

When an RQ's WQE level reaches the LWM (limit watermark), the kernel driver
raises an event to SW. Use a devx event_channel to catch it and notify the
user. Allocate this channel per shared device. Each event carries a cookie
that identifies the specific port and Rx queue.

Signed-off-by: Spike Du
---
 drivers/net/mlx5/mlx5.c      | 66 ++++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5.h      |  7 ++++
 drivers/net/mlx5/mlx5_devx.c | 47 +++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_rx.c   | 33 ++++++++++++++++++
 drivers/net/mlx5/mlx5_rx.h   |  7 ++++
 5 files changed, 160 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index f0988712df..e04a66625e 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -9,6 +9,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -22,6 +23,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -1524,6 +1526,69 @@ mlx5_alloc_shared_dev_ctx(const struct mlx5_dev_spawn_data *spawn,
 	return NULL;
 }
+/**
+ * Create LWM event_channel and interrupt handle for shared device
+ * context. All rxqs sharing the device context share the event_channel.
+ * A callback is registered in interrupt thread to receive the LWM event.
+ *
+ * @param[in] priv
+ *   Pointer to mlx5_priv instance.
+ *
+ * @return
+ *   0 on success, negative with rte_errno set.
+ */
+int
+mlx5_lwm_setup(struct mlx5_priv *priv)
+{
+	int fd_lwm;
+
+	pthread_mutex_init(&priv->sh->lwm_config_lock, NULL);
+	priv->sh->devx_channel_lwm = mlx5_os_devx_create_event_channel
+		(priv->sh->cdev->ctx,
+		 MLX5DV_DEVX_CREATE_EVENT_CHANNEL_FLAGS_OMIT_EV_DATA);
+	if (!priv->sh->devx_channel_lwm)
+		goto err;
+	fd_lwm = mlx5_os_get_devx_channel_fd(priv->sh->devx_channel_lwm);
+	priv->sh->intr_handle_lwm = mlx5_os_interrupt_handler_create
+		(RTE_INTR_INSTANCE_F_SHARED, true,
+		 fd_lwm, mlx5_dev_interrupt_handler_lwm, priv);
+	if (!priv->sh->intr_handle_lwm)
+		goto err;
+	return 0;
+err:
+	if (priv->sh->devx_channel_lwm) {
+		mlx5_os_devx_destroy_event_channel
+			(priv->sh->devx_channel_lwm);
+		priv->sh->devx_channel_lwm = NULL;
+	}
+	pthread_mutex_destroy(&priv->sh->lwm_config_lock);
+	return -rte_errno;
+}
+
+/**
+ * Destroy LWM event_channel and interrupt handle for shared device
+ * context before freeing this context. The interrupt handler is also
+ * unregistered.
+ *
+ * @param[in] sh
+ *   Pointer to shared device context.
+ */
+void
+mlx5_lwm_unset(struct mlx5_dev_ctx_shared *sh)
+{
+	if (sh->intr_handle_lwm) {
+		mlx5_os_interrupt_handler_destroy(sh->intr_handle_lwm,
+			mlx5_dev_interrupt_handler_lwm, (void *)-1);
+		sh->intr_handle_lwm = NULL;
+	}
+	if (sh->devx_channel_lwm) {
+		mlx5_os_devx_destroy_event_channel
+			(sh->devx_channel_lwm);
+		sh->devx_channel_lwm = NULL;
+	}
+	pthread_mutex_destroy(&sh->lwm_config_lock);
+}
+
 /**
  * Free shared IB device context. Decrement counter and if zero free
  * all allocated resources and close handles.
@@ -1601,6 +1666,7 @@ mlx5_free_shared_dev_ctx(struct mlx5_dev_ctx_shared *sh)
 	claim_zero(mlx5_devx_cmd_destroy(sh->td));
 	MLX5_ASSERT(sh->geneve_tlv_option_resource == NULL);
 	pthread_mutex_destroy(&sh->txpp.mutex);
+	mlx5_lwm_unset(sh);
 	mlx5_free(sh);
 	return;
 exit:
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 7ebb2cc961..a76f2fed3d 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1268,6 +1268,9 @@ struct mlx5_dev_ctx_shared {
 	struct mlx5_lb_ctx self_lb; /* QP to enable self loopback for Devx. */
 	unsigned int flow_max_priority;
 	enum modify_reg flow_mreg_c[MLX5_MREG_C_NUM];
+	void *devx_channel_lwm;
+	struct rte_intr_handle *intr_handle_lwm;
+	pthread_mutex_t lwm_config_lock;
 	/* Availability of mreg_c's. */
 	struct mlx5_dev_shared_port port[]; /* per device port data array. */
 };
@@ -1405,6 +1408,7 @@ enum mlx5_txq_modify_type {
 };
 
 struct mlx5_rxq_priv;
+struct mlx5_priv;
 
 /* HW objects operations structure. */
 struct mlx5_obj_ops {
@@ -1413,6 +1417,7 @@ struct mlx5_obj_ops {
 	int (*rxq_event_get)(struct mlx5_rxq_obj *rxq_obj);
 	int (*rxq_obj_modify)(struct mlx5_rxq_priv *rxq, uint8_t type);
 	void (*rxq_obj_release)(struct mlx5_rxq_priv *rxq);
+	int (*rxq_event_get_lwm)(struct mlx5_priv *priv, int *rxq_idx, int *port_id);
 	int (*ind_table_new)(struct rte_eth_dev *dev, const unsigned int log_n,
 			     struct mlx5_ind_table_obj *ind_tbl);
 	int (*ind_table_modify)(struct rte_eth_dev *dev,
@@ -1603,6 +1608,8 @@ int mlx5_net_remove(struct mlx5_common_device *cdev);
 bool mlx5_is_hpf(struct rte_eth_dev *dev);
 bool mlx5_is_sf_repr(struct rte_eth_dev *dev);
 void mlx5_age_event_prepare(struct mlx5_dev_ctx_shared *sh);
+int mlx5_lwm_setup(struct mlx5_priv *priv);
+void mlx5_lwm_unset(struct mlx5_dev_ctx_shared *sh);
 
 /* Macro to iterate over all valid ports for mlx5 driver. */
 #define MLX5_ETH_FOREACH_DEV(port_id, dev) \
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index c918a50ae9..6886ae1f22 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -232,6 +232,52 @@ mlx5_rx_devx_get_event(struct mlx5_rxq_obj *rxq_obj)
 #endif /* HAVE_IBV_DEVX_EVENT */
 }
+/**
+ * Get LWM event for shared context, return the correct port/rxq for this event.
+ *
+ * @param priv
+ *   Mlx5_priv object.
+ * @param rxq_idx [out]
+ *   Which rxq gets this event.
+ * @param port_id [out]
+ *   Which port gets this event.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_rx_devx_get_event_lwm(struct mlx5_priv *priv, int *rxq_idx, int *port_id)
+{
+#ifdef HAVE_IBV_DEVX_EVENT
+	union {
+		struct mlx5dv_devx_async_event_hdr event_resp;
+		uint8_t buf[sizeof(struct mlx5dv_devx_async_event_hdr) + 128];
+	} out;
+	int ret;
+
+	memset(&out, 0, sizeof(out));
+	ret = mlx5_glue->devx_get_event(priv->sh->devx_channel_lwm,
+					&out.event_resp,
+					sizeof(out.buf));
+	if (ret < 0) {
+		rte_errno = errno;
+		DRV_LOG(WARNING, "%s err\n", __func__);
+		return -rte_errno;
+	}
+	*port_id = (((uint32_t)out.event_resp.cookie) >>
+		    LWM_COOKIE_PORTID_OFFSET) & LWM_COOKIE_PORTID_MASK;
+	*rxq_idx = (((uint32_t)out.event_resp.cookie) >>
+		    LWM_COOKIE_RXQID_OFFSET) & LWM_COOKIE_RXQID_MASK;
+	return 0;
+#else
+	(void)priv;
+	(void)rxq_idx;
+	(void)port_id;
+	rte_errno = ENOTSUP;
+	return -rte_errno;
+#endif /* HAVE_IBV_DEVX_EVENT */
+}
+
 /**
  * Create a RQ object using DevX.
  *
@@ -1421,6 +1467,7 @@ struct mlx5_obj_ops devx_obj_ops = {
 	.rxq_event_get = mlx5_rx_devx_get_event,
 	.rxq_obj_modify = mlx5_devx_modify_rq,
 	.rxq_obj_release = mlx5_rxq_devx_obj_release,
+	.rxq_event_get_lwm = mlx5_rx_devx_get_event_lwm,
 	.ind_table_new = mlx5_devx_ind_table_new,
 	.ind_table_modify = mlx5_devx_ind_table_modify,
 	.ind_table_destroy = mlx5_devx_ind_table_destroy,
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index e5eea0ad94..7d556c2b45 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -1187,3 +1187,36 @@ mlx5_check_vec_rx_support(struct rte_eth_dev *dev __rte_unused)
 {
 	return -ENOTSUP;
 }
+
+/**
+ * Rte interrupt handler for LWM event.
+ * It first checks if the event arrives; if so, it processes the callback for
+ * RTE_ETH_EVENT_RX_LWM.
+ *
+ * @param args
+ *   Generic pointer to mlx5_priv.
+ */
+void
+mlx5_dev_interrupt_handler_lwm(void *args)
+{
+	struct mlx5_priv *priv = args;
+	struct mlx5_rxq_priv *rxq;
+	struct rte_eth_dev *dev;
+	int ret, rxq_idx = 0, port_id = 0;
+
+	ret = priv->obj_ops.rxq_event_get_lwm(priv, &rxq_idx, &port_id);
+	if (unlikely(ret < 0)) {
+		DRV_LOG(WARNING, "Cannot get LWM event context.");
+		return;
+	}
+	DRV_LOG(INFO, "%s get LWM event, port_id:%d rxq_id:%d.", __func__,
+		port_id, rxq_idx);
+	dev = &rte_eth_devices[port_id];
+	rxq = mlx5_rxq_get(dev, rxq_idx);
+	if (rxq) {
+		pthread_mutex_lock(&priv->sh->lwm_config_lock);
+		rxq->lwm_event_pending = 1;
+		pthread_mutex_unlock(&priv->sh->lwm_config_lock);
+	}
+	rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_RX_LWM, NULL);
+}
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 25a5f2c1fa..068dff5863 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -176,6 +176,7 @@ struct mlx5_rxq_priv {
 	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
 	uint32_t hairpin_status; /* Hairpin binding status. */
 	uint32_t lwm:16;
+	uint32_t lwm_event_pending:1;
 };
 
 /* External RX queue descriptor. */
@@ -295,6 +296,7 @@ void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 int mlx5_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 			   struct rte_eth_burst_mode *mode);
 int mlx5_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc);
+void mlx5_dev_interrupt_handler_lwm(void *args);
 
 /* Vectorized version of mlx5_rx.c */
 int mlx5_rxq_check_vec_support(struct mlx5_rxq_data *rxq_data);
@@ -675,4 +677,9 @@ mlx5_is_external_rxq(struct rte_eth_dev *dev, uint16_t queue_idx)
 	return !!__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED);
 }
 
+#define LWM_COOKIE_RXQID_OFFSET 0
+#define LWM_COOKIE_RXQID_MASK 0xffff
+#define LWM_COOKIE_PORTID_OFFSET 16
+#define LWM_COOKIE_PORTID_MASK 0xffff
+
 #endif /* RTE_PMD_MLX5_RX_H_ */
-- 
2.27.0
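
The LWM_COOKIE_* macros above define how the 32-bit DevX event cookie carries
its context: the Rx queue index sits in bits 0-15 and the port id in bits
16-31, which is what mlx5_rx_devx_get_event_lwm() extracts. A small sketch of
that layout follows; the unpack side mirrors the patch, while the pack helper
is an assumption for illustration only (the subscription call that actually
sets the cookie is not part of this patch).

/*
 * Sketch of the event cookie layout, based on the LWM_COOKIE_* macros added
 * to mlx5_rx.h. lwm_cookie_pack() is a hypothetical helper; only the unpack
 * direction appears in this patch.
 */
#include <stdint.h>

#define LWM_COOKIE_RXQID_OFFSET 0
#define LWM_COOKIE_RXQID_MASK 0xffff
#define LWM_COOKIE_PORTID_OFFSET 16
#define LWM_COOKIE_PORTID_MASK 0xffff

static inline uint32_t
lwm_cookie_pack(uint16_t port_id, uint16_t rxq_idx)
{
	/* Assumed encoding: port id in the high half, rxq index in the low half. */
	return ((uint32_t)port_id << LWM_COOKIE_PORTID_OFFSET) |
	       ((uint32_t)rxq_idx << LWM_COOKIE_RXQID_OFFSET);
}

static inline void
lwm_cookie_unpack(uint32_t cookie, int *port_id, int *rxq_idx)
{
	/* Same extraction as mlx5_rx_devx_get_event_lwm(). */
	*port_id = (cookie >> LWM_COOKIE_PORTID_OFFSET) & LWM_COOKIE_PORTID_MASK;
	*rxq_idx = (cookie >> LWM_COOKIE_RXQID_OFFSET) & LWM_COOKIE_RXQID_MASK;
}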
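On the application side, the interrupt handler ends with
rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_RX_LWM, NULL), so the
notification arrives through the standard ethdev callback mechanism. Below is
a minimal sketch, assuming the RTE_ETH_EVENT_RX_LWM event type introduced
earlier in this series; the callback body and helper names are illustrative
only, not part of the patch.

#include <stdio.h>
#include <rte_common.h>
#include <rte_ethdev.h>

/* Illustrative callback: runs in the interrupt thread when the PMD reports
 * that an Rx queue has crossed its LWM. The queue index is not passed here,
 * so the application keeps its own per-port bookkeeping if it needs it. */
static int
lwm_event_cb(uint16_t port_id, enum rte_eth_event_type event,
	     void *cb_arg, void *ret_param)
{
	RTE_SET_USED(event);
	RTE_SET_USED(cb_arg);
	RTE_SET_USED(ret_param);
	printf("port %u: Rx queue fill level crossed the LWM\n", port_id);
	return 0;
}

/* Register the callback once per port, e.g. after rte_eth_dev_start(). */
static int
lwm_register(uint16_t port_id)
{
	return rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_RX_LWM,
					     lwm_event_cb, NULL);
}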