From mboxrd@z Thu Jan 1 00:00:00 1970
From: Bing Zhao
To: , ,
CC: , , ,
Date: Thu, 13 May 2021 14:13:29 +0300
Message-ID: <20210513111329.40040-1-bingz@nvidia.com>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20210512143607.3982046-1-bingz@nvidia.com>
References: <20210512143607.3982046-1-bingz@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH v2] net/mlx5: fix loopback for DV queue
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org
Sender: "dev"

In the past, all the queues and other hardware objects were created through the Verbs interface. Currently, most object creation has been migrated to the DevX interface by default, including queues. Only when DV is disabled by a device argument, or when E-Switch is enabled, are all or some of the objects created through the Verbs interface.

When the DevX interface is used to create queues, the kernel driver behaves differently from Verbs: Tx loopback cannot work properly even if the Tx and Rx queues are both configured with the loopback attribute.
To fix this and support self loopback for Tx, a dummy Verbs queue pair needs to be created to trigger the kernel to enable the global loopback capability. This is only required when a TIR is created for Rx and loopback is needed. Only a CQ and a QP are needed in this case; no WQ (RQ) needs to be created. This requirement comes from Bugzilla 645; more details can be found via the Bugzilla link.

Bugzilla ID: 645
Fixes: 6deb19e1b2d2 ("net/mlx5: separate Rx queue object creations")
Cc: stable@dpdk.org

Signed-off-by: Bing Zhao
Acked-by: Viacheslav Ovsiienko
---
 drivers/net/mlx5/linux/mlx5_verbs.c | 119 ++++++++++++++++++++++++++++
 drivers/net/mlx5/linux/mlx5_verbs.h |   2 +
 drivers/net/mlx5/mlx5.h             |   9 +++
 drivers/net/mlx5/mlx5_trigger.c     |   9 +++
 4 files changed, 139 insertions(+)

diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index 0b0759f33f..2ca94b5712 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -1055,6 +1055,125 @@ mlx5_txq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx)
 	return -rte_errno;
 }
 
+/*
+ * Create the dummy QP with minimal resources for loopback.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_rxq_ibv_obj_dummy_lb_create(struct rte_eth_dev *dev)
+{
+#if defined(HAVE_IBV_DEVICE_TUNNEL_SUPPORT) && defined(HAVE_IBV_FLOW_DV_SUPPORT)
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_dev_ctx_shared *sh = priv->sh;
+	struct ibv_context *ctx = sh->ctx;
+	struct mlx5dv_qp_init_attr qp_init_attr = {0};
+	struct {
+		struct ibv_cq_init_attr_ex ibv;
+		struct mlx5dv_cq_init_attr mlx5;
+	} cq_attr = {{0}};
+
+	if (dev->data->dev_conf.lpbk_mode) {
+		/* Allow packet sent from NIC loop back w/o source MAC check. */
+		qp_init_attr.comp_mask |=
+			MLX5DV_QP_INIT_ATTR_MASK_QP_CREATE_FLAGS;
+		qp_init_attr.create_flags |=
+			MLX5DV_QP_CREATE_TIR_ALLOW_SELF_LOOPBACK_UC;
+	} else {
+		return 0;
+	}
+	/* Only need to check refcnt, 0 after "sh" is allocated. */
+	if (!!(__atomic_fetch_add(&sh->self_lb.refcnt, 1, __ATOMIC_RELAXED))) {
+		MLX5_ASSERT(sh->self_lb.ibv_cq && sh->self_lb.qp);
+		priv->lb_used = 1;
+		return 0;
+	}
+	cq_attr.ibv = (struct ibv_cq_init_attr_ex){
+		.cqe = 1,
+		.channel = NULL,
+		.comp_mask = 0,
+	};
+	cq_attr.mlx5 = (struct mlx5dv_cq_init_attr){
+		.comp_mask = 0,
+	};
+	/* Only CQ is needed, no WQ(RQ) is required in this case. */
+	sh->self_lb.ibv_cq = mlx5_glue->cq_ex_to_cq(mlx5_glue->dv_create_cq(ctx,
+							&cq_attr.ibv,
+							&cq_attr.mlx5));
+	if (!sh->self_lb.ibv_cq) {
+		DRV_LOG(ERR, "Port %u cannot allocate CQ for loopback.",
+			dev->data->port_id);
+		rte_errno = errno;
+		goto error;
+	}
+	sh->self_lb.qp = mlx5_glue->dv_create_qp(ctx,
+				&(struct ibv_qp_init_attr_ex){
+					.qp_type = IBV_QPT_RAW_PACKET,
+					.comp_mask = IBV_QP_INIT_ATTR_PD,
+					.pd = sh->pd,
+					.send_cq = sh->self_lb.ibv_cq,
+					.recv_cq = sh->self_lb.ibv_cq,
+					.cap.max_recv_wr = 1,
+				},
+				&qp_init_attr);
+	if (!sh->self_lb.qp) {
+		DRV_LOG(DEBUG, "Port %u cannot allocate QP for loopback.",
+			dev->data->port_id);
+		rte_errno = errno;
+		goto error;
+	}
+	priv->lb_used = 1;
+	return 0;
+error:
+	if (sh->self_lb.ibv_cq) {
+		claim_zero(mlx5_glue->destroy_cq(sh->self_lb.ibv_cq));
+		sh->self_lb.ibv_cq = NULL;
+	}
+	(void)__atomic_sub_fetch(&sh->self_lb.refcnt, 1, __ATOMIC_RELAXED);
+	return -rte_errno;
+#else
+	RTE_SET_USED(dev);
+	return 0;
+#endif
+}
+
+/*
+ * Release the dummy queue resources for loopback.
+ *
+ * @param dev
+ *   Pointer to Ethernet device.
+ */
+void
+mlx5_rxq_ibv_obj_dummy_lb_release(struct rte_eth_dev *dev)
+{
+#if defined(HAVE_IBV_DEVICE_TUNNEL_SUPPORT) && defined(HAVE_IBV_FLOW_DV_SUPPORT)
+	struct mlx5_priv *priv = dev->data->dev_private;
+	struct mlx5_dev_ctx_shared *sh = priv->sh;
+
+	if (!priv->lb_used)
+		return;
+	MLX5_ASSERT(__atomic_load_n(&sh->self_lb.refcnt, __ATOMIC_RELAXED));
+	if (!(__atomic_sub_fetch(&sh->self_lb.refcnt, 1, __ATOMIC_RELAXED))) {
+		if (sh->self_lb.qp) {
+			claim_zero(mlx5_glue->destroy_qp(sh->self_lb.qp));
+			sh->self_lb.qp = NULL;
+		}
+		if (sh->self_lb.ibv_cq) {
+			claim_zero(mlx5_glue->destroy_cq(sh->self_lb.ibv_cq));
+			sh->self_lb.ibv_cq = NULL;
+		}
+	}
+	priv->lb_used = 0;
+#else
+	RTE_SET_USED(dev);
+	return;
+#endif
+}
+
 /**
  * Release an Tx verbs queue object.
  *
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.h b/drivers/net/mlx5/linux/mlx5_verbs.h
index 76a79bf4f4..f7e8e2fe98 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.h
+++ b/drivers/net/mlx5/linux/mlx5_verbs.h
@@ -9,6 +9,8 @@
 int mlx5_txq_ibv_obj_new(struct rte_eth_dev *dev, uint16_t idx);
 void mlx5_txq_ibv_obj_release(struct mlx5_txq_obj *txq_obj);
+int mlx5_rxq_ibv_obj_dummy_lb_create(struct rte_eth_dev *dev);
+void mlx5_rxq_ibv_obj_dummy_lb_release(struct rte_eth_dev *dev);
 
 /* Verbs ops struct */
 extern const struct mlx5_mr_ops mlx5_mr_verbs_ops;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 7eca6a6fa6..ad57a4f5b0 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -287,6 +287,13 @@ struct mlx5_drop {
 	struct mlx5_rxq_obj *rxq; /* Rx queue object. */
 };
 
+/* Loopback dummy queue resources required due to Verbs API. */
+struct mlx5_lb_ctx {
+	struct ibv_qp *qp; /* QP object. */
+	void *ibv_cq; /* Completion queue. */
+	uint16_t refcnt; /* Reference count for representors. */
+};
+
 #define MLX5_COUNTERS_PER_POOL 512
 #define MLX5_MAX_PENDING_QUERIES 4
 #define MLX5_CNT_CONTAINER_RESIZE 64
@@ -1124,6 +1131,7 @@ struct mlx5_dev_ctx_shared {
 	/* Meter management structure. */
 	struct mlx5_aso_ct_pools_mng *ct_mng;
 	/* Management data for ASO connection tracking. */
+	struct mlx5_lb_ctx self_lb; /* QP to enable self loopback for Devx. */
 	struct mlx5_dev_shared_port port[]; /* per device port data array. */
 };
@@ -1312,6 +1320,7 @@ struct mlx5_priv {
 	unsigned int sampler_en:1; /* Whether support sampler. */
 	unsigned int mtr_en:1; /* Whether support meter. */
 	unsigned int mtr_reg_share:1; /* Whether support meter REG_C share. */
+	unsigned int lb_used:1; /* Loopback queue is referred to. */
 	uint16_t domain_id; /* Switch domain identifier. */
 	uint16_t vport_id; /* Associated VF vport index (if any). */
 	uint32_t vport_meta_tag; /* Used for vport index match ove VF LAG. */
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index eb8c99cd93..32ab90c9b3 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -19,6 +19,7 @@
 #include "mlx5_tx.h"
 #include "mlx5_utils.h"
 #include "rte_pmd_mlx5.h"
+#include "mlx5_verbs.h"
 
 /**
  * Stop traffic on Tx queues.
@@ -1068,6 +1069,12 @@ mlx5_dev_start(struct rte_eth_dev *dev)
 			dev->data->port_id, strerror(rte_errno));
 		goto error;
 	}
+	if (priv->config.devx && priv->config.dv_flow_en &&
+	    priv->config.dest_tir) {
+		ret = mlx5_rxq_ibv_obj_dummy_lb_create(dev);
+		if (ret)
+			goto error;
+	}
 	ret = mlx5_txq_start(dev);
 	if (ret) {
 		DRV_LOG(ERR, "port %u Tx queue allocation failed: %s",
@@ -1148,6 +1155,7 @@ mlx5_dev_start(struct rte_eth_dev *dev)
 	mlx5_traffic_disable(dev);
 	mlx5_txq_stop(dev);
 	mlx5_rxq_stop(dev);
+	mlx5_rxq_ibv_obj_dummy_lb_release(dev);
 	mlx5_txpp_stop(dev); /* Stop last. */
 	rte_errno = ret; /* Restore rte_errno. */
 	return -rte_errno;
@@ -1186,6 +1194,7 @@ mlx5_dev_stop(struct rte_eth_dev *dev)
 	priv->sh->port[priv->dev_port - 1].devx_ih_port_id = RTE_MAX_ETHPORTS;
 	mlx5_txq_stop(dev);
 	mlx5_rxq_stop(dev);
+	mlx5_rxq_ibv_obj_dummy_lb_release(dev);
 	mlx5_txpp_stop(dev);
 	return 0;
-- 
2.27.0
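
A note on the refcount handling in the patch: the shared loopback context is created by the first representor port that needs it and destroyed only when the last user releases it, using the GCC `__atomic` builtins as shown above. A minimal standalone sketch of that create-on-first-use / destroy-on-last-release pattern (illustrative names, not the driver's actual structures):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative stand-in for the driver's shared self_lb context:
 * refcnt counts the ports currently using the dummy loopback QP. */
struct lb_ctx {
	uint16_t refcnt;
	bool created; /* stands in for the CQ/QP actually existing */
};

/* First acquirer creates the resources; returns true if it did. */
static bool
lb_acquire(struct lb_ctx *lb)
{
	/* Previous value 0 means we are the first user and must create. */
	if (__atomic_fetch_add(&lb->refcnt, 1, __ATOMIC_RELAXED) != 0)
		return false;
	lb->created = true; /* would create the dummy CQ + QP here */
	return true;
}

/* Last releaser destroys the resources; returns true if it did. */
static bool
lb_release(struct lb_ctx *lb)
{
	/* New value 0 means no user remains: tear down. */
	if (__atomic_sub_fetch(&lb->refcnt, 1, __ATOMIC_RELAXED) != 0)
		return false;
	lb->created = false; /* would destroy the QP + CQ here */
	return true;
}
```

This mirrors why the error path in `mlx5_rxq_ibv_obj_dummy_lb_create()` decrements the counter it had already incremented, and why `mlx5_rxq_ibv_obj_dummy_lb_release()` only frees the Verbs objects when the decrement reaches zero.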