From: Spike Du
Subject: [RFC 3/6] net/mlx5: add LWM event handling support
Date: Fri, 1 Apr 2022 06:22:29 +0300
Message-ID: <20220401032232.1267376-4-spiked@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20220401032232.1267376-1-spiked@nvidia.com>
References: <20220401032232.1267376-1-spiked@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
List-Id: DPDK patches and discussions

When the LWM (limit watermark) is met on an RQ WQE, the kernel driver
raises an event to software. Use a DevX event_channel to catch this event
and notify the user. Allocate this channel once per shared device. Each
event carries a cookie that identifies the specific port and Rx queue.

Signed-off-by: Spike Du
---
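Note (illustration only, not part of the patch): the LWM_COOKIE_* macros added
in mlx5_rx.h lay out the 32-bit event cookie with the Rx queue index in bits
0-15 and the port ID in bits 16-31, and mlx5_rx_devx_get_event_lwm() decodes it
that way. The standalone sketch below shows that packing and decoding with the
same offsets/masks; lwm_cookie_pack() is a hypothetical helper used here for
the example only, since the code that sets the cookie when subscribing to the
event is not part of this patch.

/* Illustration only: pack/decode the LWM event cookie per the
 * LWM_COOKIE_* layout (rxq index in bits 0-15, port ID in bits 16-31).
 */
#include <stdint.h>
#include <stdio.h>

#define LWM_COOKIE_RXQID_OFFSET 0
#define LWM_COOKIE_RXQID_MASK 0xffff
#define LWM_COOKIE_PORTID_OFFSET 16
#define LWM_COOKIE_PORTID_MASK 0xffff

/* Hypothetical helper: build a cookie from port and queue IDs. */
static uint32_t
lwm_cookie_pack(uint16_t port_id, uint16_t rxq_idx)
{
        return ((uint32_t)port_id << LWM_COOKIE_PORTID_OFFSET) |
               ((uint32_t)rxq_idx << LWM_COOKIE_RXQID_OFFSET);
}

int
main(void)
{
        uint32_t cookie = lwm_cookie_pack(3, 7);
        /* Same decoding expressions as mlx5_rx_devx_get_event_lwm(). */
        int port_id = (cookie >> LWM_COOKIE_PORTID_OFFSET) &
                      LWM_COOKIE_PORTID_MASK;
        int rxq_idx = (cookie >> LWM_COOKIE_RXQID_OFFSET) &
                      LWM_COOKIE_RXQID_MASK;

        printf("port_id=%d rxq_idx=%d\n", port_id, rxq_idx);
        return 0;
}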
 drivers/net/mlx5/mlx5.c      | 61 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5.h      |  7 +++++
 drivers/net/mlx5/mlx5_devx.c | 47 ++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_rx.c   | 29 +++++++++++++++++++++
 drivers/net/mlx5/mlx5_rx.h   |  7 +++++
 5 files changed, 151 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 72b1e35..334223e 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -22,6 +23,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -1521,6 +1523,64 @@ struct mlx5_dev_ctx_shared *
 }
 
 /**
+ * Create LWM event_channel and interrupt handle for shared device
+ * context. All rxqs sharing the device context share the event_channel.
+ * A callback is registered in interrupt thread to receive the LWM event.
+ *
+ * @param[in] priv
+ *   Pointer to mlx5_priv instance.
+ *
+ * @return
+ *   0 on success, negative with rte_errno set.
+ */
+int
+mlx5_lwm_setup(struct mlx5_priv *priv)
+{
+        int fd_lwm;
+
+        pthread_mutex_init(&priv->sh->lwm_config_lock, NULL);
+        priv->sh->devx_channel_lwm = mlx5_os_devx_create_event_channel
+                (priv->sh->cdev->ctx,
+                 MLX5DV_DEVX_CREATE_EVENT_CHANNEL_FLAGS_OMIT_EV_DATA);
+        if (!priv->sh->devx_channel_lwm)
+                goto err;
+        fd_lwm = mlx5_os_get_devx_channel_fd(priv->sh->devx_channel_lwm);
+        priv->sh->intr_handle_lwm = mlx5_os_interrupt_handler_create
+                (RTE_INTR_INSTANCE_F_SHARED, true,
+                 fd_lwm, mlx5_dev_interrupt_handler_lwm, priv);
+        if (!priv->sh->intr_handle_lwm)
+                goto err;
+        return 0;
+err:
+        mlx5_lwm_unset(priv->sh);
+        return -rte_errno;
+}
+
+/**
+ * Destroy LWM event_channel and interrupt handle for shared device
+ * context before freeing this context. The interrupt handler is also
+ * unregistered.
+ *
+ * @param[in] sh
+ *   Pointer to shared device context.
+ */
+void
+mlx5_lwm_unset(struct mlx5_dev_ctx_shared *sh)
+{
+        if (sh->intr_handle_lwm) {
+                mlx5_os_interrupt_handler_destroy(sh->intr_handle_lwm,
+                        mlx5_dev_interrupt_handler_lwm, (void *)-1);
+                sh->intr_handle_lwm = NULL;
+        }
+        if (sh->devx_channel_lwm) {
+                mlx5_os_devx_destroy_event_channel
+                        (sh->devx_channel_lwm);
+                sh->devx_channel_lwm = NULL;
+        }
+        pthread_mutex_destroy(&sh->lwm_config_lock);
+}
+
+/**
  * Free shared IB device context. Decrement counter and if zero free
  * all allocated resources and close handles.
  *
@@ -1597,6 +1657,7 @@ struct mlx5_dev_ctx_shared *
         claim_zero(mlx5_devx_cmd_destroy(sh->td));
         MLX5_ASSERT(sh->geneve_tlv_option_resource == NULL);
         pthread_mutex_destroy(&sh->txpp.mutex);
+        mlx5_lwm_unset(sh);
         mlx5_free(sh);
         return;
 exit:
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 4821ff0..515ff33 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1264,6 +1264,9 @@ struct mlx5_dev_ctx_shared {
         struct mlx5_lb_ctx self_lb; /* QP to enable self loopback for Devx. */
         unsigned int flow_max_priority;
         enum modify_reg flow_mreg_c[MLX5_MREG_C_NUM];
+        void *devx_channel_lwm;
+        struct rte_intr_handle *intr_handle_lwm;
+        pthread_mutex_t lwm_config_lock;
         /* Availability of mreg_c's. */
         struct mlx5_dev_shared_port port[]; /* per device port data array. */
 };
@@ -1401,6 +1404,7 @@ enum mlx5_txq_modify_type {
 };
 
 struct mlx5_rxq_priv;
+struct mlx5_priv;
 
 /* HW objects operations structure. */
 struct mlx5_obj_ops {
@@ -1409,6 +1413,7 @@ struct mlx5_obj_ops {
         int (*rxq_event_get)(struct mlx5_rxq_obj *rxq_obj);
         int (*rxq_obj_modify)(struct mlx5_rxq_priv *rxq, uint8_t type);
         void (*rxq_obj_release)(struct mlx5_rxq_priv *rxq);
+        int (*rxq_event_get_lwm)(struct mlx5_priv *priv, int *rxq_idx, int *port_id);
         int (*ind_table_new)(struct rte_eth_dev *dev, const unsigned int log_n,
                              struct mlx5_ind_table_obj *ind_tbl);
         int (*ind_table_modify)(struct rte_eth_dev *dev,
@@ -1599,6 +1604,8 @@ int mlx5_udp_tunnel_port_add(struct rte_eth_dev *dev,
 bool mlx5_is_hpf(struct rte_eth_dev *dev);
 bool mlx5_is_sf_repr(struct rte_eth_dev *dev);
 void mlx5_age_event_prepare(struct mlx5_dev_ctx_shared *sh);
+int mlx5_lwm_setup(struct mlx5_priv *priv);
+void mlx5_lwm_unset(struct mlx5_dev_ctx_shared *sh);
 
 /* Macro to iterate over all valid ports for mlx5 driver. */
 #define MLX5_ETH_FOREACH_DEV(port_id, dev) \
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index d6de882..e1e5d2d 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -230,6 +230,52 @@
 }
 
 /**
+ * Get LWM event for shared context, return the correct port/rxq for this event.
+ *
+ * @param priv
+ *   Mlx5_priv object.
+ * @param rxq_idx [out]
+ *   Which rxq gets this event.
+ * @param port_id [out]
+ *   Which port gets this event.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_rx_devx_get_event_lwm(struct mlx5_priv *priv, int *rxq_idx, int *port_id)
+{
+#ifdef HAVE_IBV_DEVX_EVENT
+        union {
+                struct mlx5dv_devx_async_event_hdr event_resp;
+                uint8_t buf[sizeof(struct mlx5dv_devx_async_event_hdr) + 128];
+        } out;
+        int ret;
+
+        memset(&out, 0, sizeof(out));
+        ret = mlx5_glue->devx_get_event(priv->sh->devx_channel_lwm,
+                                        &out.event_resp,
+                                        sizeof(out.buf));
+        if (ret < 0) {
+                rte_errno = errno;
+                DRV_LOG(WARNING, "%s err\n", __func__);
+                return -rte_errno;
+        }
+        *port_id = (((uint32_t)out.event_resp.cookie) >>
+                    LWM_COOKIE_PORTID_OFFSET) & LWM_COOKIE_PORTID_MASK;
+        *rxq_idx = (((uint32_t)out.event_resp.cookie) >>
+                    LWM_COOKIE_RXQID_OFFSET) & LWM_COOKIE_RXQID_MASK;
+        return 0;
+#else
+        (void)priv;
+        (void)rxq_idx;
+        (void)port_id;
+        rte_errno = ENOTSUP;
+        return -rte_errno;
+#endif /* HAVE_IBV_DEVX_EVENT */
+}
+
+/**
  * Create a RQ object using DevX.
  *
  * @param rxq
@@ -1413,6 +1459,7 @@ struct mlx5_obj_ops devx_obj_ops = {
         .rxq_event_get = mlx5_rx_devx_get_event,
         .rxq_obj_modify = mlx5_devx_modify_rq,
         .rxq_obj_release = mlx5_rxq_devx_obj_release,
+        .rxq_event_get_lwm = mlx5_rx_devx_get_event_lwm,
         .ind_table_new = mlx5_devx_ind_table_new,
         .ind_table_modify = mlx5_devx_ind_table_modify,
         .ind_table_destroy = mlx5_devx_ind_table_destroy,
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index e5eea0a..f72364e 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -1187,3 +1187,32 @@ int mlx5_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc)
 {
         return -ENOTSUP;
 }
+
+/**
+ * RTE interrupt handler for LWM event.
+ * It first checks if the event arrives; if so, it invokes the callback
+ * registered for the LWM event in the rxq.
+ *
+ * @param args
+ *   Generic pointer to mlx5_priv.
+ */
+void
+mlx5_dev_interrupt_handler_lwm(void *args)
+{
+        struct mlx5_priv *priv = args;
+        struct mlx5_rxq_priv *rxq;
+        struct rte_eth_dev *dev;
+        int ret, rxq_idx = 0, port_id = 0;
+
+        ret = priv->obj_ops.rxq_event_get_lwm(priv, &rxq_idx, &port_id);
+        if (unlikely(ret < 0)) {
+                DRV_LOG(WARNING, "Cannot get LWM event context.");
+                return;
+        }
+        DRV_LOG(INFO, "%s get LWM event, port_id:%d rxq_id:%d.", __func__,
+                port_id, rxq_idx);
+        dev = &rte_eth_devices[port_id];
+        rxq = mlx5_rxq_get(dev, rxq_idx);
+        if (rxq && rxq->lwm_event_rxq_limit_reached)
+                rxq->lwm_event_rxq_limit_reached(port_id, rxq_idx);
+}
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 98d7cae..bf3c5e1 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -175,6 +175,7 @@ struct mlx5_rxq_priv {
         struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
         uint32_t hairpin_status; /* Hairpin binding status. */
         uint32_t lwm:16;
+        void (*lwm_event_rxq_limit_reached)(uint16_t port_id, uint16_t rxq_id);
 };
 
 /* External RX queue descriptor. */
@@ -294,6 +295,7 @@ void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 int mlx5_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
                            struct rte_eth_burst_mode *mode);
 int mlx5_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc);
+void mlx5_dev_interrupt_handler_lwm(void *args);
 
 /* Vectorized version of mlx5_rx.c */
 int mlx5_rxq_check_vec_support(struct mlx5_rxq_data *rxq_data);
@@ -674,4 +676,9 @@ uint16_t mlx5_rx_burst_mprq_vec(void *dpdk_rxq, struct rte_mbuf **pkts,
         return !!__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED);
 }
 
+#define LWM_COOKIE_RXQID_OFFSET 0
+#define LWM_COOKIE_RXQID_MASK 0xffff
+#define LWM_COOKIE_PORTID_OFFSET 16
+#define LWM_COOKIE_PORTID_MASK 0xffff
+
 #endif /* RTE_PMD_MLX5_RX_H_ */
-- 
1.8.3.1