From mboxrd@z Thu Jan 1 00:00:00 1970
From: Spike Du
Subject: [PATCH v5 4/7] net/mlx5: add LWM event handling support
Date: Tue, 7 Jun 2022 15:59:39 +0300
Message-ID: <20220607125942.241379-5-spiked@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20220607125942.241379-1-spiked@nvidia.com>
References: <20220603124821.1148119-1-spiked@nvidia.com>
 <20220607125942.241379-1-spiked@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
List-Id: DPDK patches and discussions

When the number of available WQEs in an RQ drops to the LWM (limit
watermark), the kernel driver raises an event to software. Use a DevX
event_channel to catch this event and notify the user. The channel is
allocated once per shared device context, so all Rx queues sharing the
device context share it. Each event carries a cookie that identifies the
port and Rx queue the event belongs to.

Signed-off-by: Spike Du
---
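Note (illustrative only, not part of this patch): a minimal sketch of how
an application might consume this event, assuming the
RTE_ETH_EVENT_RX_AVAIL_THRESH event and the rte_eth_rx_avail_thresh_set()/
rte_eth_rx_avail_thresh_query() API introduced earlier in this series; the
function names and the 30% threshold below are examples only.

  #include <stdio.h>
  #include <rte_common.h>
  #include <rte_ethdev.h>

  /* Invoked from the host interrupt thread when an Rx queue signals that
   * its number of available descriptors dropped below the threshold.
   */
  static int
  lwm_event_cb(uint16_t port_id, enum rte_eth_event_type event,
               void *cb_arg, void *ret_param)
  {
          uint16_t rxq_id = 0;
          uint8_t thresh = 0;

          RTE_SET_USED(event);
          RTE_SET_USED(cb_arg);
          RTE_SET_USED(ret_param);
          /* Ask the PMD which queue has a pending threshold event. */
          if (rte_eth_rx_avail_thresh_query(port_id, &rxq_id, &thresh) > 0)
                  printf("port %u rxq %u fell below %u%% free descriptors\n",
                         port_id, rxq_id, thresh);
          return 0;
  }

  static int
  lwm_arm(uint16_t port_id, uint16_t rxq_id)
  {
          int ret;

          ret = rte_eth_dev_callback_register(port_id,
                                              RTE_ETH_EVENT_RX_AVAIL_THRESH,
                                              lwm_event_cb, NULL);
          if (ret != 0)
                  return ret;
          /* Raise the event when free Rx descriptors drop below 30%. */
          return rte_eth_rx_avail_thresh_set(port_id, rxq_id, 30);
  }
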
 drivers/net/mlx5/mlx5.c      | 66 ++++++++++++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5.h      |  7 +++++
 drivers/net/mlx5/mlx5_devx.c | 47 +++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_rx.c   | 33 ++++++++++++++++++++++
 drivers/net/mlx5/mlx5_rx.h   |  7 +++++
 5 files changed, 160 insertions(+)

diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index f098871..e04a666 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -22,6 +23,7 @@
 #include
 #include
 #include
+#include
 #include
 #include
@@ -1525,6 +1527,69 @@ struct mlx5_dev_ctx_shared *
 }
 
 /**
+ * Create LWM event_channel and interrupt handle for shared device
+ * context. All rxqs sharing the device context share the event_channel.
+ * A callback is registered in interrupt thread to receive the LWM event.
+ *
+ * @param[in] priv
+ *   Pointer to mlx5_priv instance.
+ *
+ * @return
+ *   0 on success, negative with rte_errno set.
+ */
+int
+mlx5_lwm_setup(struct mlx5_priv *priv)
+{
+	int fd_lwm;
+
+	pthread_mutex_init(&priv->sh->lwm_config_lock, NULL);
+	priv->sh->devx_channel_lwm = mlx5_os_devx_create_event_channel
+		(priv->sh->cdev->ctx,
+		 MLX5DV_DEVX_CREATE_EVENT_CHANNEL_FLAGS_OMIT_EV_DATA);
+	if (!priv->sh->devx_channel_lwm)
+		goto err;
+	fd_lwm = mlx5_os_get_devx_channel_fd(priv->sh->devx_channel_lwm);
+	priv->sh->intr_handle_lwm = mlx5_os_interrupt_handler_create
+		(RTE_INTR_INSTANCE_F_SHARED, true,
+		 fd_lwm, mlx5_dev_interrupt_handler_lwm, priv);
+	if (!priv->sh->intr_handle_lwm)
+		goto err;
+	return 0;
+err:
+	if (priv->sh->devx_channel_lwm) {
+		mlx5_os_devx_destroy_event_channel
+			(priv->sh->devx_channel_lwm);
+		priv->sh->devx_channel_lwm = NULL;
+	}
+	pthread_mutex_destroy(&priv->sh->lwm_config_lock);
+	return -rte_errno;
+}
+
+/**
+ * Destroy LWM event_channel and interrupt handle for shared device
+ * context before freeing this context. The interrupt handler is also
+ * unregistered.
+ *
+ * @param[in] sh
+ *   Pointer to shared device context.
+ */
+void
+mlx5_lwm_unset(struct mlx5_dev_ctx_shared *sh)
+{
+	if (sh->intr_handle_lwm) {
+		mlx5_os_interrupt_handler_destroy(sh->intr_handle_lwm,
+			mlx5_dev_interrupt_handler_lwm, (void *)-1);
+		sh->intr_handle_lwm = NULL;
+	}
+	if (sh->devx_channel_lwm) {
+		mlx5_os_devx_destroy_event_channel
+			(sh->devx_channel_lwm);
+		sh->devx_channel_lwm = NULL;
+	}
+	pthread_mutex_destroy(&sh->lwm_config_lock);
+}
+
+/**
  * Free shared IB device context. Decrement counter and if zero free
  * all allocated resources and close handles.
  *
@@ -1601,6 +1666,7 @@ struct mlx5_dev_ctx_shared *
 	claim_zero(mlx5_devx_cmd_destroy(sh->td));
 	MLX5_ASSERT(sh->geneve_tlv_option_resource == NULL);
 	pthread_mutex_destroy(&sh->txpp.mutex);
+	mlx5_lwm_unset(sh);
 	mlx5_free(sh);
 	return;
 exit:
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 7ebb2cc..a76f2fe 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1268,6 +1268,9 @@ struct mlx5_dev_ctx_shared {
 	struct mlx5_lb_ctx self_lb; /* QP to enable self loopback for Devx. */
 	unsigned int flow_max_priority;
 	enum modify_reg flow_mreg_c[MLX5_MREG_C_NUM];
+	void *devx_channel_lwm;
+	struct rte_intr_handle *intr_handle_lwm;
+	pthread_mutex_t lwm_config_lock;
 	/* Availability of mreg_c's. */
 	struct mlx5_dev_shared_port port[]; /* per device port data array. */
 };
@@ -1405,6 +1408,7 @@ enum mlx5_txq_modify_type {
 };
 
 struct mlx5_rxq_priv;
+struct mlx5_priv;
 
 /* HW objects operations structure. */
 struct mlx5_obj_ops {
@@ -1413,6 +1417,7 @@ struct mlx5_obj_ops {
 	int (*rxq_event_get)(struct mlx5_rxq_obj *rxq_obj);
 	int (*rxq_obj_modify)(struct mlx5_rxq_priv *rxq, uint8_t type);
 	void (*rxq_obj_release)(struct mlx5_rxq_priv *rxq);
+	int (*rxq_event_get_lwm)(struct mlx5_priv *priv, int *rxq_idx, int *port_id);
 	int (*ind_table_new)(struct rte_eth_dev *dev, const unsigned int log_n,
 			     struct mlx5_ind_table_obj *ind_tbl);
 	int (*ind_table_modify)(struct rte_eth_dev *dev,
@@ -1603,6 +1608,8 @@ int mlx5_udp_tunnel_port_add(struct rte_eth_dev *dev,
 bool mlx5_is_hpf(struct rte_eth_dev *dev);
 bool mlx5_is_sf_repr(struct rte_eth_dev *dev);
 void mlx5_age_event_prepare(struct mlx5_dev_ctx_shared *sh);
+int mlx5_lwm_setup(struct mlx5_priv *priv);
+void mlx5_lwm_unset(struct mlx5_dev_ctx_shared *sh);
 
 /* Macro to iterate over all valid ports for mlx5 driver. */
 #define MLX5_ETH_FOREACH_DEV(port_id, dev) \
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index c918a50..6886ae1 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -233,6 +233,52 @@
 }
 
 /**
+ * Get LWM event for shared context, return the correct port/rxq for this event.
+ *
+ * @param priv
+ *   Mlx5_priv object.
+ * @param rxq_idx [out]
+ *   Which rxq gets this event.
+ * @param port_id [out]
+ *   Which port gets this event.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_rx_devx_get_event_lwm(struct mlx5_priv *priv, int *rxq_idx, int *port_id)
+{
+#ifdef HAVE_IBV_DEVX_EVENT
+	union {
+		struct mlx5dv_devx_async_event_hdr event_resp;
+		uint8_t buf[sizeof(struct mlx5dv_devx_async_event_hdr) + 128];
+	} out;
+	int ret;
+
+	memset(&out, 0, sizeof(out));
+	ret = mlx5_glue->devx_get_event(priv->sh->devx_channel_lwm,
+					&out.event_resp,
+					sizeof(out.buf));
+	if (ret < 0) {
+		rte_errno = errno;
+		DRV_LOG(WARNING, "%s err\n", __func__);
+		return -rte_errno;
+	}
+	*port_id = (((uint32_t)out.event_resp.cookie) >>
+		    LWM_COOKIE_PORTID_OFFSET) & LWM_COOKIE_PORTID_MASK;
+	*rxq_idx = (((uint32_t)out.event_resp.cookie) >>
+		    LWM_COOKIE_RXQID_OFFSET) & LWM_COOKIE_RXQID_MASK;
+	return 0;
+#else
+	(void)priv;
+	(void)rxq_idx;
+	(void)port_id;
+	rte_errno = ENOTSUP;
+	return -rte_errno;
+#endif /* HAVE_IBV_DEVX_EVENT */
+}
+
+/**
  * Create a RQ object using DevX.
  *
  * @param rxq
@@ -1421,6 +1467,7 @@ struct mlx5_obj_ops devx_obj_ops = {
 	.rxq_event_get = mlx5_rx_devx_get_event,
 	.rxq_obj_modify = mlx5_devx_modify_rq,
 	.rxq_obj_release = mlx5_rxq_devx_obj_release,
+	.rxq_event_get_lwm = mlx5_rx_devx_get_event_lwm,
 	.ind_table_new = mlx5_devx_ind_table_new,
 	.ind_table_modify = mlx5_devx_ind_table_modify,
 	.ind_table_destroy = mlx5_devx_ind_table_destroy,
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index e5eea0a..197d708 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -1187,3 +1187,36 @@ int mlx5_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc)
 {
 	return -ENOTSUP;
 }
+
+/**
+ * Rte interrupt handler for LWM event.
+ * It first checks if the event has arrived, and if so processes the callback
+ * for RTE_ETH_EVENT_RX_AVAIL_THRESH.
+ *
+ * @param args
+ *   Generic pointer to mlx5_priv.
+ */
+void
+mlx5_dev_interrupt_handler_lwm(void *args)
+{
+	struct mlx5_priv *priv = args;
+	struct mlx5_rxq_priv *rxq;
+	struct rte_eth_dev *dev;
+	int ret, rxq_idx = 0, port_id = 0;
+
+	ret = priv->obj_ops.rxq_event_get_lwm(priv, &rxq_idx, &port_id);
+	if (unlikely(ret < 0)) {
+		DRV_LOG(WARNING, "Cannot get LWM event context.");
+		return;
+	}
+	DRV_LOG(INFO, "%s get LWM event, port_id:%d rxq_id:%d.", __func__,
+		port_id, rxq_idx);
+	dev = &rte_eth_devices[port_id];
+	rxq = mlx5_rxq_get(dev, rxq_idx);
+	if (rxq) {
+		pthread_mutex_lock(&priv->sh->lwm_config_lock);
+		rxq->lwm_event_pending = 1;
+		pthread_mutex_unlock(&priv->sh->lwm_config_lock);
+	}
+	rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_RX_AVAIL_THRESH, NULL);
+}
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 25a5f2c..068dff5 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -176,6 +176,7 @@ struct mlx5_rxq_priv {
 	struct rte_eth_hairpin_conf hairpin_conf; /* Hairpin configuration. */
 	uint32_t hairpin_status; /* Hairpin binding status. */
 	uint32_t lwm:16;
+	uint32_t lwm_event_pending:1;
 };
 
 /* External RX queue descriptor. */
@@ -295,6 +296,7 @@ void mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id,
 int mlx5_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 			   struct rte_eth_burst_mode *mode);
 int mlx5_get_monitor_addr(void *rx_queue, struct rte_power_monitor_cond *pmc);
+void mlx5_dev_interrupt_handler_lwm(void *args);
 
 /* Vectorized version of mlx5_rx.c */
 int mlx5_rxq_check_vec_support(struct mlx5_rxq_data *rxq_data);
@@ -675,4 +677,9 @@ uint16_t mlx5_rx_burst_mprq_vec(void *dpdk_rxq, struct rte_mbuf **pkts,
 	return !!__atomic_load_n(&rxq->refcnt, __ATOMIC_RELAXED);
 }
 
+#define LWM_COOKIE_RXQID_OFFSET 0
+#define LWM_COOKIE_RXQID_MASK 0xffff
+#define LWM_COOKIE_PORTID_OFFSET 16
+#define LWM_COOKIE_PORTID_MASK 0xffff
+
 #endif /* RTE_PMD_MLX5_RX_H_ */
-- 
1.8.3.1