From mboxrd@z Thu Jan 1 00:00:00 1970
From: Raja Zidane
To: dev@dpdk.org
Subject: [dpdk-dev] [PATCH V2 5/5] regex/mlx5: refactor HW queue objects
Date: Sun, 12 Sep 2021 16:36:52 +0000
Message-ID: <20210912163652.24983-6-rzidane@nvidia.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210912163652.24983-1-rzidane@nvidia.com>
References: <20210903142157.25617-2-rzidane@nvidia.com>
 <20210912163652.24983-1-rzidane@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions

The mlx5 PMD for the regex class uses an MMO WQE operated by the GGA
engine in BF devices. Currently, all the MMO WQEs are managed by the
SQ object. Starting from BF3, the queue of the MMO WQEs should be
connected to the GGA engine using a new configuration, MMO, which is
supported only in the QP object. The FW introduced new capabilities
that define whether the MMO configuration should be set for the GGA
queue.

Replace all the GGA queue objects with QP objects, and set the MMO
configuration according to the new FW capabilities.

Signed-off-by: Raja Zidane
Acked-by: Matan Azrad
---
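For reviewers, a minimal sketch (not part of the patch) of how the new
FW capability bits are consumed, assuming the struct mlx5_hca_attr
fields referenced by this patch (regex, mmo_regex_sq_en,
mmo_regex_qp_en, regexp_num_of_engines); regex_engine_supported() is a
hypothetical helper used only for illustration:

	/*
	 * RegEx is usable if the legacy regex capability or one of the
	 * new MMO capabilities is reported, and at least one engine
	 * exists. The QP-level bit is cached in the priv struct and is
	 * later copied into struct mlx5_devx_qp_attr.mmo before
	 * mlx5_devx_qp_create(), so the GGA engine accepts MMO WQEs on
	 * the new QP.
	 */
	static inline int
	regex_engine_supported(const struct mlx5_hca_attr *attr)
	{
		if (!attr->regex && !attr->mmo_regex_sq_en &&
		    !attr->mmo_regex_qp_en)
			return 0;
		return attr->regexp_num_of_engines != 0;
	}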
 drivers/regex/mlx5/mlx5_regex.c          |   7 +-
 drivers/regex/mlx5/mlx5_regex.h          |  16 ++-
 drivers/regex/mlx5/mlx5_regex_control.c  |  65 +++++----
 drivers/regex/mlx5/mlx5_regex_fastpath.c | 170 ++++++++++++-----------
 4 files changed, 133 insertions(+), 125 deletions(-)

diff --git a/drivers/regex/mlx5/mlx5_regex.c b/drivers/regex/mlx5/mlx5_regex.c
index f17b6df47f..a3368749b9 100644
--- a/drivers/regex/mlx5/mlx5_regex.c
+++ b/drivers/regex/mlx5/mlx5_regex.c
@@ -146,7 +146,8 @@ mlx5_regex_dev_probe(struct rte_device *rte_dev)
 		DRV_LOG(ERR, "Unable to read HCA capabilities.");
 		rte_errno = ENOTSUP;
 		goto dev_error;
-	} else if (!attr.regex || attr.regexp_num_of_engines == 0) {
+	} else if (((!attr.regex) && (!attr.mmo_regex_sq_en) &&
+		(!attr.mmo_regex_qp_en)) || attr.regexp_num_of_engines == 0) {
 		DRV_LOG(ERR, "Not enough capabilities to support RegEx, maybe "
 			"old FW/OFED version?");
 		rte_errno = ENOTSUP;
@@ -164,7 +165,9 @@ mlx5_regex_dev_probe(struct rte_device *rte_dev)
 		rte_errno = ENOMEM;
 		goto dev_error;
 	}
-	priv->sq_ts_format = attr.sq_ts_format;
+	priv->mmo_regex_qp_cap = attr.mmo_regex_qp_en;
+	priv->mmo_regex_sq_cap = attr.mmo_regex_sq_en;
+	priv->qp_ts_format = attr.qp_ts_format;
 	priv->ctx = ctx;
 	priv->nb_engines = 2; /* attr.regexp_num_of_engines */
 	ret = mlx5_devx_regex_register_read(priv->ctx, 0,
diff --git a/drivers/regex/mlx5/mlx5_regex.h b/drivers/regex/mlx5/mlx5_regex.h
index 514f3408f9..2242d250a3 100644
--- a/drivers/regex/mlx5/mlx5_regex.h
+++ b/drivers/regex/mlx5/mlx5_regex.h
@@ -17,12 +17,12 @@
 #include "mlx5_rxp.h"
 #include "mlx5_regex_utils.h"
 
-struct mlx5_regex_sq {
+struct mlx5_regex_hw_qp {
 	uint16_t log_nb_desc; /* Log 2 number of desc for this object. */
-	struct mlx5_devx_sq sq_obj; /* The SQ DevX object. */
+	struct mlx5_devx_qp qp_obj; /* The QP DevX object. */
 	size_t pi, db_pi;
 	size_t ci;
-	uint32_t sqn;
+	uint32_t qpn;
 };
 
 struct mlx5_regex_cq {
@@ -34,10 +34,10 @@ struct mlx5_regex_cq {
 struct mlx5_regex_qp {
 	uint32_t flags; /* QP user flags. */
 	uint32_t nb_desc; /* Total number of desc for this qp. */
-	struct mlx5_regex_sq *sqs; /* Pointer to sq array. */
-	uint16_t nb_obj; /* Number of sq objects. */
+	struct mlx5_regex_hw_qp *qps; /* Pointer to qp array. */
+	uint16_t nb_obj; /* Number of qp objects. */
 	struct mlx5_regex_cq cq; /* CQ struct. */
-	uint32_t free_sqs;
+	uint32_t free_qps;
 	struct mlx5_regex_job *jobs;
 	struct ibv_mr *metadata;
 	struct ibv_mr *outputs;
@@ -73,8 +73,10 @@ struct mlx5_regex_priv {
 	/**< Called by memory event callback. */
 	struct mlx5_mr_share_cache mr_scache; /* Global shared MR cache. */
 	uint8_t is_bf2; /* The device is BF2 device. */
-	uint8_t sq_ts_format; /* Whether SQ supports timestamp formats. */
+	uint8_t qp_ts_format; /* Whether QP supports timestamp formats. */
 	uint8_t has_umr; /* The device supports UMR. */
+	uint32_t mmo_regex_qp_cap:1;
+	uint32_t mmo_regex_sq_cap:1;
 };
 
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
diff --git a/drivers/regex/mlx5/mlx5_regex_control.c b/drivers/regex/mlx5/mlx5_regex_control.c
index 8ce2dabb55..572ecc6d86 100644
--- a/drivers/regex/mlx5/mlx5_regex_control.c
+++ b/drivers/regex/mlx5/mlx5_regex_control.c
@@ -106,12 +106,12 @@ regex_ctrl_create_cq(struct mlx5_regex_priv *priv, struct mlx5_regex_cq *cq)
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-regex_ctrl_destroy_sq(struct mlx5_regex_qp *qp, uint16_t q_ind)
+regex_ctrl_destroy_hw_qp(struct mlx5_regex_qp *qp, uint16_t q_ind)
 {
-	struct mlx5_regex_sq *sq = &qp->sqs[q_ind];
+	struct mlx5_regex_hw_qp *qp_obj = &qp->qps[q_ind];
 
-	mlx5_devx_sq_destroy(&sq->sq_obj);
-	memset(sq, 0, sizeof(*sq));
+	mlx5_devx_qp_destroy(&qp_obj->qp_obj);
+	memset(qp_obj, 0, sizeof(*qp_obj));
 	return 0;
 }
 
@@ -131,45 +131,44 @@ regex_ctrl_destroy_sq(struct mlx5_regex_qp *qp, uint16_t q_ind)
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-regex_ctrl_create_sq(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *qp,
+regex_ctrl_create_hw_qp(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *qp,
 		     uint16_t q_ind, uint16_t log_nb_desc)
 {
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
-	struct mlx5_devx_create_sq_attr attr = {
-		.user_index = q_ind,
+	struct mlx5_devx_qp_attr attr = {
 		.cqn = qp->cq.cq_obj.cq->id,
-		.wq_attr = (struct mlx5_devx_wq_attr){
-			.uar_page = priv->uar->page_id,
-		},
-		.ts_format = mlx5_ts_format_conv(priv->sq_ts_format),
-	};
-	struct mlx5_devx_modify_sq_attr modify_attr = {
-		.state = MLX5_SQC_STATE_RDY,
+		.uar_index = priv->uar->page_id,
+		.ts_format = mlx5_ts_format_conv(priv->qp_ts_format),
+		.user_index = q_ind,
 	};
-	struct mlx5_regex_sq *sq = &qp->sqs[q_ind];
+	struct mlx5_regex_hw_qp *qp_obj = &qp->qps[q_ind];
 	uint32_t pd_num = 0;
 	int ret;
 
-	sq->log_nb_desc = log_nb_desc;
-	sq->sqn = q_ind;
-	sq->ci = 0;
-	sq->pi = 0;
+	qp_obj->log_nb_desc = log_nb_desc;
+	qp_obj->qpn = q_ind;
+	qp_obj->ci = 0;
+	qp_obj->pi = 0;
 	ret = regex_get_pdn(priv->pd, &pd_num);
 	if (ret)
 		return ret;
-	attr.wq_attr.pd = pd_num;
-	ret = mlx5_devx_sq_create(priv->ctx, &sq->sq_obj,
+	attr.pd = pd_num;
+	attr.rq_size = 0;
+	attr.sq_size = RTE_BIT32(MLX5_REGEX_WQE_LOG_NUM(priv->has_umr,
+			log_nb_desc));
+	attr.mmo = priv->mmo_regex_qp_cap;
+	ret = mlx5_devx_qp_create(priv->ctx, &qp_obj->qp_obj,
 			MLX5_REGEX_WQE_LOG_NUM(priv->has_umr, log_nb_desc),
 			&attr, SOCKET_ID_ANY);
 	if (ret) {
-		DRV_LOG(ERR, "Can't create SQ object.");
+		DRV_LOG(ERR, "Can't create QP object.");
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
-	ret = mlx5_devx_cmd_modify_sq(sq->sq_obj.sq, &modify_attr);
+	ret = mlx5_devx_qp2rts(&qp_obj->qp_obj, 0);
 	if (ret) {
-		DRV_LOG(ERR, "Can't change SQ state to ready.");
-		regex_ctrl_destroy_sq(qp, q_ind);
+		DRV_LOG(ERR, "Can't change QP state to RTS.");
+		regex_ctrl_destroy_hw_qp(qp, q_ind);
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
@@ -224,10 +223,10 @@ mlx5_regex_qp_setup(struct rte_regexdev *dev, uint16_t qp_ind,
 			(1 << MLX5_REGEX_WQE_LOG_NUM(priv->has_umr, log_desc));
 	else
 		qp->nb_obj = 1;
-	qp->sqs = rte_malloc(NULL,
-			qp->nb_obj * sizeof(struct mlx5_regex_sq), 64);
-	if (!qp->sqs) {
-		DRV_LOG(ERR, "Can't allocate sq array memory.");
+	qp->qps = rte_malloc(NULL,
+			qp->nb_obj * sizeof(struct mlx5_regex_hw_qp), 64);
+	if (!qp->qps) {
+		DRV_LOG(ERR, "Can't allocate qp array memory.");
 		rte_errno = ENOMEM;
 		return -rte_errno;
 	}
@@ -238,9 +237,9 @@ mlx5_regex_qp_setup(struct rte_regexdev *dev, uint16_t qp_ind,
 		goto err_cq;
 	}
 	for (i = 0; i < qp->nb_obj; i++) {
-		ret = regex_ctrl_create_sq(priv, qp, i, log_desc);
+		ret = regex_ctrl_create_hw_qp(priv, qp, i, log_desc);
 		if (ret) {
-			DRV_LOG(ERR, "Can't create sq.");
+			DRV_LOG(ERR, "Can't create qp object.");
 			goto err_btree;
 		}
 		nb_sq_config++;
@@ -266,9 +265,9 @@ mlx5_regex_qp_setup(struct rte_regexdev *dev, uint16_t qp_ind,
 	mlx5_mr_btree_free(&qp->mr_ctrl.cache_bh);
 err_btree:
 	for (i = 0; i < nb_sq_config; i++)
-		regex_ctrl_destroy_sq(qp, i);
+		regex_ctrl_destroy_hw_qp(qp, i);
 	regex_ctrl_destroy_cq(&qp->cq);
 err_cq:
-	rte_free(qp->sqs);
+	rte_free(qp->qps);
 	return ret;
 }
diff --git a/drivers/regex/mlx5/mlx5_regex_fastpath.c b/drivers/regex/mlx5/mlx5_regex_fastpath.c
index 786718af53..18b01b6bf9 100644
--- a/drivers/regex/mlx5/mlx5_regex_fastpath.c
+++ b/drivers/regex/mlx5/mlx5_regex_fastpath.c
@@ -39,13 +39,13 @@
 #define MLX5_REGEX_KLMS_SIZE \
 	((MLX5_REGEX_MAX_KLM_NUM) * sizeof(struct mlx5_klm))
 /* In WQE set mode, the pi should be quarter of the MLX5_REGEX_MAX_WQE_INDEX. */
-#define MLX5_REGEX_UMR_SQ_PI_IDX(pi, ops) \
+#define MLX5_REGEX_UMR_QP_PI_IDX(pi, ops) \
 	(((pi) + (ops)) & (MLX5_REGEX_MAX_WQE_INDEX >> 2))
 
 static inline uint32_t
-sq_size_get(struct mlx5_regex_sq *sq)
+qp_size_get(struct mlx5_regex_hw_qp *qp)
 {
-	return (1U << sq->log_nb_desc);
+	return (1U << qp->log_nb_desc);
 }
 
 static inline uint32_t
@@ -144,11 +144,11 @@ mlx5_regex_addr2mr(struct mlx5_regex_priv *priv, struct mlx5_mr_ctrl *mr_ctrl,
 
 
 static inline void
-__prep_one(struct mlx5_regex_priv *priv, struct mlx5_regex_sq *sq,
+__prep_one(struct mlx5_regex_priv *priv, struct mlx5_regex_hw_qp *qp_obj,
 	   struct rte_regex_ops *op, struct mlx5_regex_job *job,
 	   size_t pi, struct mlx5_klm *klm)
 {
-	size_t wqe_offset = (pi & (sq_size_get(sq) - 1)) *
+	size_t wqe_offset = (pi & (qp_size_get(qp_obj) - 1)) *
 			    (MLX5_SEND_WQE_BB << (priv->has_umr ? 2 : 0)) +
 			    (priv->has_umr ? MLX5_REGEX_UMR_WQE_SIZE : 0);
 	uint16_t group0 = op->req_flags & RTE_REGEX_OPS_REQ_GROUP_ID0_VALID_F ?
@@ -168,13 +168,13 @@ __prep_one(struct mlx5_regex_priv *priv, struct mlx5_regex_sq *sq,
 			       RTE_REGEX_OPS_REQ_GROUP_ID2_VALID_F |
 			       RTE_REGEX_OPS_REQ_GROUP_ID3_VALID_F)))
 		group0 = op->group_id0;
-	uint8_t *wqe = (uint8_t *)(uintptr_t)sq->sq_obj.wqes + wqe_offset;
+	uint8_t *wqe = (uint8_t *)(uintptr_t)qp_obj->qp_obj.wqes + wqe_offset;
 	int ds = 4; /*  ctrl + meta + input + output */
 	set_wqe_ctrl_seg((struct mlx5_wqe_ctrl_seg *)wqe,
 			 (priv->has_umr ? (pi * 4 + 3) : pi),
 			 MLX5_OPCODE_MMO, MLX5_OPC_MOD_MMO_REGEX,
-			 sq->sq_obj.sq->id, 0, ds, 0, 0);
+			 qp_obj->qp_obj.qp->id, 0, ds, 0, 0);
 	set_regex_ctrl_seg(wqe + 12, 0, group0, group1, group2, group3,
 			   control);
 	struct mlx5_wqe_data_seg *input_seg =
@@ -188,7 +188,7 @@ __prep_one(struct mlx5_regex_priv *priv, struct mlx5_regex_sq *sq,
 
 static inline void
 prep_one(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *qp,
-	 struct mlx5_regex_sq *sq, struct rte_regex_ops *op,
+	 struct mlx5_regex_hw_qp *qp_obj, struct rte_regex_ops *op,
 	 struct mlx5_regex_job *job)
 {
 	struct mlx5_klm klm;
@@ -196,42 +196,42 @@ prep_one(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *qp,
 	klm.byte_count = rte_pktmbuf_data_len(op->mbuf);
 	klm.mkey = mlx5_regex_addr2mr(priv, &qp->mr_ctrl, op->mbuf);
 	klm.address = rte_pktmbuf_mtod(op->mbuf, uintptr_t);
-	__prep_one(priv, sq, op, job, sq->pi, &klm);
-	sq->db_pi = sq->pi;
-	sq->pi = (sq->pi + 1) & MLX5_REGEX_MAX_WQE_INDEX;
+	__prep_one(priv, qp_obj, op, job, qp_obj->pi, &klm);
+	qp_obj->db_pi = qp_obj->pi;
+	qp_obj->pi = (qp_obj->pi + 1) & MLX5_REGEX_MAX_WQE_INDEX;
 }
 
 static inline void
-send_doorbell(struct mlx5_regex_priv *priv, struct mlx5_regex_sq *sq)
+send_doorbell(struct mlx5_regex_priv *priv, struct mlx5_regex_hw_qp *qp_obj)
 {
 	struct mlx5dv_devx_uar *uar = priv->uar;
-	size_t wqe_offset = (sq->db_pi & (sq_size_get(sq) - 1)) *
+	size_t wqe_offset = (qp_obj->db_pi & (qp_size_get(qp_obj) - 1)) *
 		(MLX5_SEND_WQE_BB << (priv->has_umr ? 2 : 0)) +
 		(priv->has_umr ? MLX5_REGEX_UMR_WQE_SIZE : 0);
-	uint8_t *wqe = (uint8_t *)(uintptr_t)sq->sq_obj.wqes + wqe_offset;
+	uint8_t *wqe = (uint8_t *)(uintptr_t)qp_obj->qp_obj.wqes + wqe_offset;
 	/* Or the fm_ce_se instead of set, avoid the fence be cleared. */
 	((struct mlx5_wqe_ctrl_seg *)wqe)->fm_ce_se |= MLX5_WQE_CTRL_CQ_UPDATE;
 	uint64_t *doorbell_addr =
 		(uint64_t *)((uint8_t *)uar->base_addr + 0x800);
 	rte_io_wmb();
-	sq->sq_obj.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32((priv->has_umr ?
-					(sq->db_pi * 4 + 3) : sq->db_pi) &
-					MLX5_REGEX_MAX_WQE_INDEX);
+	qp_obj->qp_obj.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32((priv->has_umr ?
+					(qp_obj->db_pi * 4 + 3) : qp_obj->db_pi)
+					& MLX5_REGEX_MAX_WQE_INDEX);
 	rte_wmb();
 	*doorbell_addr = *(volatile uint64_t *)wqe;
 	rte_wmb();
 }
 
 static inline int
-get_free(struct mlx5_regex_sq *sq, uint8_t has_umr) {
-	return (sq_size_get(sq) - ((sq->pi - sq->ci) &
+get_free(struct mlx5_regex_hw_qp *qp, uint8_t has_umr) {
+	return (qp_size_get(qp) - ((qp->pi - qp->ci) &
 			(has_umr ? (MLX5_REGEX_MAX_WQE_INDEX >> 2) :
 			MLX5_REGEX_MAX_WQE_INDEX)));
 }
 
 static inline uint32_t
-job_id_get(uint32_t qid, size_t sq_size, size_t index) {
-	return qid * sq_size + (index & (sq_size - 1));
+job_id_get(uint32_t qid, size_t qp_size, size_t index) {
+	return qid * qp_size + (index & (qp_size - 1));
 }
 
 #ifdef HAVE_MLX5_UMR_IMKEY
@@ -242,14 +242,14 @@ mkey_klm_available(struct mlx5_klm *klm, uint32_t pos, uint32_t new)
 }
 
 static inline void
-complete_umr_wqe(struct mlx5_regex_qp *qp, struct mlx5_regex_sq *sq,
+complete_umr_wqe(struct mlx5_regex_qp *qp, struct mlx5_regex_hw_qp *qp_obj,
 		 struct mlx5_regex_job *mkey_job,
 		 size_t umr_index, uint32_t klm_size, uint32_t total_len)
 {
-	size_t wqe_offset = (umr_index & (sq_size_get(sq) - 1)) *
+	size_t wqe_offset = (umr_index & (qp_size_get(qp_obj) - 1)) *
 		(MLX5_SEND_WQE_BB * 4);
 	struct mlx5_wqe_ctrl_seg *wqe = (struct mlx5_wqe_ctrl_seg *)((uint8_t *)
-		(uintptr_t)sq->sq_obj.wqes + wqe_offset);
+		(uintptr_t)qp_obj->qp_obj.wqes + wqe_offset);
 	struct mlx5_wqe_umr_ctrl_seg *ucseg =
 				(struct mlx5_wqe_umr_ctrl_seg *)(wqe + 1);
 	struct mlx5_wqe_mkey_context_seg *mkc =
@@ -260,7 +260,7 @@ complete_umr_wqe(struct mlx5_regex_qp *qp, struct mlx5_regex_sq *sq,
 	memset(wqe, 0, MLX5_REGEX_UMR_WQE_SIZE);
 	/* Set WQE control seg. Non-inline KLM UMR WQE size must be 9 WQE_DS. */
 	set_wqe_ctrl_seg(wqe, (umr_index * 4), MLX5_OPCODE_UMR,
-			 0, sq->sq_obj.sq->id, 0, 9, 0,
+			 0, qp_obj->qp_obj.qp->id, 0, 9, 0,
 			 rte_cpu_to_be_32(mkey_job->imkey->id));
 	/* Set UMR WQE control seg. */
 	ucseg->mkey_mask |= rte_cpu_to_be_64(MLX5_WQE_UMR_CTRL_MKEY_MASK_LEN |
@@ -287,36 +287,36 @@ complete_umr_wqe(struct mlx5_regex_qp *qp, struct mlx5_regex_sq *sq,
 }
 
 static inline void
-prep_nop_regex_wqe_set(struct mlx5_regex_priv *priv, struct mlx5_regex_sq *sq,
-		       struct rte_regex_ops *op, struct mlx5_regex_job *job,
-		       size_t pi, struct mlx5_klm *klm)
+prep_nop_regex_wqe_set(struct mlx5_regex_priv *priv,
+		struct mlx5_regex_hw_qp *qp, struct rte_regex_ops *op,
+		struct mlx5_regex_job *job, size_t pi, struct mlx5_klm *klm)
 {
-	size_t wqe_offset = (pi & (sq_size_get(sq) - 1)) *
+	size_t wqe_offset = (pi & (qp_size_get(qp) - 1)) *
 			    (MLX5_SEND_WQE_BB << 2);
 	struct mlx5_wqe_ctrl_seg *wqe = (struct mlx5_wqe_ctrl_seg *)((uint8_t *)
-		(uintptr_t)sq->sq_obj.wqes + wqe_offset);
+		(uintptr_t)qp->qp_obj.wqes + wqe_offset);
 	/* Clear the WQE memory used as UMR WQE previously. */
 	if ((rte_be_to_cpu_32(wqe->opmod_idx_opcode) & 0xff) != MLX5_OPCODE_NOP)
 		memset(wqe, 0, MLX5_REGEX_UMR_WQE_SIZE);
 	/* UMR WQE size is 9 DS, align nop WQE to 3 WQEBBS(12 DS). */
-	set_wqe_ctrl_seg(wqe, pi * 4, MLX5_OPCODE_NOP, 0, sq->sq_obj.sq->id,
+	set_wqe_ctrl_seg(wqe, pi * 4, MLX5_OPCODE_NOP, 0, qp->qp_obj.qp->id,
 			 0, 12, 0, 0);
-	__prep_one(priv, sq, op, job, pi, klm);
+	__prep_one(priv, qp, op, job, pi, klm);
 }
 
 static inline void
 prep_regex_umr_wqe_set(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *qp,
-	 struct mlx5_regex_sq *sq, struct rte_regex_ops **op, size_t nb_ops)
+	struct mlx5_regex_hw_qp *qp_obj, struct rte_regex_ops **op,
+	size_t nb_ops)
 {
 	struct mlx5_regex_job *job = NULL;
-	size_t sqid = sq->sqn, mkey_job_id = 0;
+	size_t hw_qpid = qp_obj->qpn, mkey_job_id = 0;
 	size_t left_ops = nb_ops;
 	uint32_t klm_num = 0, len;
 	struct mlx5_klm *mkey_klm = NULL;
 	struct mlx5_klm klm;
 
-	sqid = sq->sqn;
 	while (left_ops--)
 		rte_prefetch0(op[left_ops]);
 	left_ops = nb_ops;
@@ -328,7 +328,7 @@ prep_regex_umr_wqe_set(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *qp,
 	 */
 	while (left_ops--) {
 		struct rte_mbuf *mbuf = op[left_ops]->mbuf;
-		size_t pi = MLX5_REGEX_UMR_SQ_PI_IDX(sq->pi, left_ops);
+		size_t pi = MLX5_REGEX_UMR_QP_PI_IDX(qp_obj->pi, left_ops);
 
 		if (mbuf->nb_segs > 1) {
 			size_t scatter_size = 0;
@@ -340,16 +340,16 @@ prep_regex_umr_wqe_set(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *qp,
 				 * WQE in the next WQE set.
 				 */
 				if (mkey_klm)
-					complete_umr_wqe(qp, sq,
+					complete_umr_wqe(qp, qp_obj,
 						&qp->jobs[mkey_job_id],
-						MLX5_REGEX_UMR_SQ_PI_IDX(pi, 1),
+						MLX5_REGEX_UMR_QP_PI_IDX(pi, 1),
 						klm_num, len);
 				/*
 				 * Get the indircet mkey and KLM array index
 				 * from the last WQE set.
 				 */
-				mkey_job_id = job_id_get(sqid,
-						sq_size_get(sq), pi);
+				mkey_job_id = job_id_get(hw_qpid,
+						qp_size_get(qp_obj), pi);
 				mkey_klm = qp->jobs[mkey_job_id].imkey_array;
 				klm_num = 0;
 				len = 0;
@@ -383,22 +383,23 @@ prep_regex_umr_wqe_set(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *qp,
 			klm.address = rte_pktmbuf_mtod(mbuf, uintptr_t);
 			klm.byte_count = rte_pktmbuf_data_len(mbuf);
 		}
-		job = &qp->jobs[job_id_get(sqid, sq_size_get(sq), pi)];
+		job = &qp->jobs[job_id_get(hw_qpid, qp_size_get(qp_obj), pi)];
 		/*
 		 * Build the nop + RegEx WQE set by default. The fist nop WQE
 		 * will be updated later as UMR WQE if scattered mubf exist.
 		 */
-		prep_nop_regex_wqe_set(priv, sq, op[left_ops], job, pi, &klm);
+		prep_nop_regex_wqe_set(priv, qp_obj, op[left_ops], job, pi,
+					&klm);
 	}
 	/*
 	 * Scattered mbuf have been added to the KLM array. Complete the build
 	 * of UMR WQE, update the first nop WQE as UMR WQE.
 	 */
 	if (mkey_klm)
-		complete_umr_wqe(qp, sq, &qp->jobs[mkey_job_id], sq->pi,
+		complete_umr_wqe(qp, qp_obj, &qp->jobs[mkey_job_id], qp_obj->pi,
 				 klm_num, len);
-	sq->db_pi = MLX5_REGEX_UMR_SQ_PI_IDX(sq->pi, nb_ops - 1);
-	sq->pi = MLX5_REGEX_UMR_SQ_PI_IDX(sq->pi, nb_ops);
+	qp_obj->db_pi = MLX5_REGEX_UMR_QP_PI_IDX(qp_obj->pi, nb_ops - 1);
+	qp_obj->pi = MLX5_REGEX_UMR_QP_PI_IDX(qp_obj->pi, nb_ops);
 }
 
 uint16_t
@@ -407,21 +408,22 @@ mlx5_regexdev_enqueue_gga(struct rte_regexdev *dev, uint16_t qp_id,
 {
 	struct mlx5_regex_priv *priv = dev->data->dev_private;
 	struct mlx5_regex_qp *queue = &priv->qps[qp_id];
-	struct mlx5_regex_sq *sq;
-	size_t sqid, nb_left = nb_ops, nb_desc;
+	struct mlx5_regex_hw_qp *qp_obj;
+	size_t hw_qpid, nb_left = nb_ops, nb_desc;
 
-	while ((sqid = ffs(queue->free_sqs))) {
-		sqid--; /* ffs returns 1 for bit 0 */
-		sq = &queue->sqs[sqid];
-		nb_desc = get_free(sq, priv->has_umr);
+	while ((hw_qpid = ffs(queue->free_qps))) {
+		hw_qpid--; /* ffs returns 1 for bit 0 */
+		qp_obj = &queue->qps[hw_qpid];
+		nb_desc = get_free(qp_obj, priv->has_umr);
 		if (nb_desc) {
 			/* The ops be handled can't exceed nb_ops. */
 			if (nb_desc > nb_left)
 				nb_desc = nb_left;
 			else
-				queue->free_sqs &= ~(1 << sqid);
-			prep_regex_umr_wqe_set(priv, queue, sq, ops, nb_desc);
-			send_doorbell(priv, sq);
+				queue->free_qps &= ~(1 << hw_qpid);
+			prep_regex_umr_wqe_set(priv, queue, qp_obj, ops,
+				nb_desc);
+			send_doorbell(priv, qp_obj);
 			nb_left -= nb_desc;
 		}
 		if (!nb_left)
@@ -440,23 +442,25 @@ mlx5_regexdev_enqueue(struct rte_regexdev *dev, uint16_t qp_id,
 {
 	struct mlx5_regex_priv *priv = dev->data->dev_private;
 	struct mlx5_regex_qp *queue = &priv->qps[qp_id];
-	struct mlx5_regex_sq *sq;
-	size_t sqid, job_id, i = 0;
-
-	while ((sqid = ffs(queue->free_sqs))) {
-		sqid--; /* ffs returns 1 for bit 0 */
-		sq = &queue->sqs[sqid];
-		while (get_free(sq, priv->has_umr)) {
-			job_id = job_id_get(sqid, sq_size_get(sq), sq->pi);
-			prep_one(priv, queue, sq, ops[i], &queue->jobs[job_id]);
+	struct mlx5_regex_hw_qp *qp_obj;
+	size_t hw_qpid, job_id, i = 0;
+
+	while ((hw_qpid = ffs(queue->free_qps))) {
+		hw_qpid--; /* ffs returns 1 for bit 0 */
+		qp_obj = &queue->qps[hw_qpid];
+		while (get_free(qp_obj, priv->has_umr)) {
+			job_id = job_id_get(hw_qpid, qp_size_get(qp_obj),
+					qp_obj->pi);
+			prep_one(priv, queue, qp_obj, ops[i],
+					&queue->jobs[job_id]);
 			i++;
 			if (unlikely(i == nb_ops)) {
-				send_doorbell(priv, sq);
+				send_doorbell(priv, qp_obj);
 				goto out;
 			}
 		}
-		queue->free_sqs &= ~(1 << sqid);
-		send_doorbell(priv, sq);
+		queue->free_qps &= ~(1 << hw_qpid);
+		send_doorbell(priv, qp_obj);
 	}
 
 out:
@@ -566,21 +570,21 @@ mlx5_regexdev_dequeue(struct rte_regexdev *dev, uint16_t qp_id,
 		uint16_t wq_counter
 			= (rte_be_to_cpu_16(cqe->wqe_counter) + 1) &
 			  MLX5_REGEX_MAX_WQE_INDEX;
-		size_t sqid = cqe->rsvd3[2];
-		struct mlx5_regex_sq *sq = &queue->sqs[sqid];
+		size_t hw_qpid = cqe->rsvd3[2];
+		struct mlx5_regex_hw_qp *qp_obj = &queue->qps[hw_qpid];
 
 		/* UMR mode WQE counter move as WQE set(4 WQEBBS).*/
 		if (priv->has_umr)
 			wq_counter >>= 2;
-		while (sq->ci != wq_counter) {
+		while (qp_obj->ci != wq_counter) {
 			if (unlikely(i == nb_ops)) {
 				/* Return without updating cq->ci */
 				goto out;
 			}
-			uint32_t job_id = job_id_get(sqid, sq_size_get(sq),
-					sq->ci);
+			uint32_t job_id = job_id_get(hw_qpid,
+					qp_size_get(qp_obj), qp_obj->ci);
 			extract_result(ops[i], &queue->jobs[job_id]);
-			sq->ci = (sq->ci + 1) & (priv->has_umr ?
+			qp_obj->ci = (qp_obj->ci + 1) & (priv->has_umr ?
 				 (MLX5_REGEX_MAX_WQE_INDEX >> 2) :
 				 MLX5_REGEX_MAX_WQE_INDEX);
 			i++;
@@ -588,7 +592,7 @@ mlx5_regexdev_dequeue(struct rte_regexdev *dev, uint16_t qp_id,
 		cq->ci = (cq->ci + 1) & 0xffffff;
 		rte_wmb();
 		cq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->ci);
-		queue->free_sqs |= (1 << sqid);
+		queue->free_qps |= (1 << hw_qpid);
 	}
 
 out:
@@ -597,15 +601,15 @@ mlx5_regexdev_dequeue(struct rte_regexdev *dev, uint16_t qp_id,
 }
 
 static void
-setup_sqs(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *queue)
+setup_qps(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *queue)
 {
-	size_t sqid, entry;
+	size_t hw_qpid, entry;
 	uint32_t job_id;
-	for (sqid = 0; sqid < queue->nb_obj; sqid++) {
-		struct mlx5_regex_sq *sq = &queue->sqs[sqid];
-		uint8_t *wqe = (uint8_t *)(uintptr_t)sq->sq_obj.wqes;
-		for (entry = 0 ; entry < sq_size_get(sq); entry++) {
-			job_id = sqid * sq_size_get(sq) + entry;
+	for (hw_qpid = 0; hw_qpid < queue->nb_obj; hw_qpid++) {
+		struct mlx5_regex_hw_qp *qp_obj = &queue->qps[hw_qpid];
+		uint8_t *wqe = (uint8_t *)(uintptr_t)qp_obj->qp_obj.wqes;
+		for (entry = 0 ; entry < qp_size_get(qp_obj); entry++) {
+			job_id = hw_qpid * qp_size_get(qp_obj) + entry;
 			struct mlx5_regex_job *job = &queue->jobs[job_id];
 
 			/* Fill UMR WQE with NOP in advanced. */
@@ -613,7 +617,7 @@ setup_sqs(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *queue)
 				set_wqe_ctrl_seg
 					((struct mlx5_wqe_ctrl_seg *)wqe,
 					 entry * 2, MLX5_OPCODE_NOP, 0,
-					 sq->sq_obj.sq->id, 0, 12, 0, 0);
+					 qp_obj->qp_obj.qp->id, 0, 12, 0, 0);
 				wqe += MLX5_REGEX_UMR_WQE_SIZE;
 			}
 			set_metadata_seg((struct mlx5_wqe_metadata_seg *)
@@ -627,7 +631,7 @@ setup_sqs(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *queue)
 					 (uintptr_t)job->output);
 			wqe += 64;
 		}
-		queue->free_sqs |= 1 << sqid;
+		queue->free_qps |= 1 << hw_qpid;
 	}
 }
 
@@ -737,7 +741,7 @@ mlx5_regexdev_setup_fastpath(struct mlx5_regex_priv *priv, uint32_t qp_id)
 			return err;
 		}
 
-	setup_sqs(priv, qp);
+	setup_qps(priv, qp);
 
 	if (priv->has_umr) {
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
-- 
2.17.1
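A note on the producer-index arithmetic in __prep_one() and
send_doorbell() above: in UMR mode every op occupies a WQE set of four
WQEBBs, so pi counts WQE sets and is masked with
MLX5_REGEX_MAX_WQE_INDEX >> 2, while the doorbell record must point at
the last WQEBB of the set, hence the pi * 4 + 3 term. A minimal sketch,
assuming MLX5_REGEX_MAX_WQE_INDEX is the driver's 16-bit WQE index
mask; db_index() is a hypothetical helper for illustration only:

	#include <stdint.h>

	#define MLX5_REGEX_MAX_WQE_INDEX 0xffff

	static inline uint16_t
	db_index(uint16_t pi, int has_umr)
	{
		/* Last WQEBB of the 4-WQEBB set in UMR mode. */
		return (has_umr ? (pi * 4 + 3) : pi) &
		       MLX5_REGEX_MAX_WQE_INDEX;
	}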