From: Raja Zidane
Date: Tue, 28 Sep 2021 12:16:49 +0000
Message-ID: <20210928121650.40181-5-rzidane@nvidia.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210928121650.40181-1-rzidane@nvidia.com>
References: <20210915000504.5660-1-rzidane@nvidia.com> <20210928121650.40181-1-rzidane@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
Subject: [dpdk-dev] [PATCH V4 4/5] compress/mlx5: refactor queue HW object
List-Id: DPDK patches and discussions

The mlx5 compress PMD uses an MMO WQE operated by the GGA engine in BlueField (BF) devices. Currently, all the MMO WQEs are managed by the SQ object.

Starting from BF3, the queue of MMO WQEs must be connected to the GGA engine using a new configuration, MMO, which is supported only in the QP object. The FW introduced new capabilities that define whether the MMO configuration must be set for the GGA queue.

Replace all the GGA queue objects with QP objects, and set the MMO configuration according to the new FW capabilities.
Signed-off-by: Raja Zidane
Acked-by: Matan Azrad
---
 drivers/compress/mlx5/mlx5_compress.c | 73 +++++++++++++++------------
 1 file changed, 42 insertions(+), 31 deletions(-)

diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c
index 1e03030510..5c5aa87a18 100644
--- a/drivers/compress/mlx5/mlx5_compress.c
+++ b/drivers/compress/mlx5/mlx5_compress.c
@@ -40,7 +40,7 @@ struct mlx5_compress_priv {
 	void *uar;
 	uint32_t pdn; /* Protection Domain number. */
 	uint8_t min_block_size;
-	uint8_t sq_ts_format; /* Whether SQ supports timestamp formats. */
+	uint8_t qp_ts_format; /* Whether SQ supports timestamp formats. */
 	/* Minimum huffman block size supported by the device. */
 	struct ibv_pd *pd;
 	struct rte_compressdev_config dev_config;
@@ -48,6 +48,13 @@ struct mlx5_compress_priv {
 	rte_spinlock_t xform_sl;
 	struct mlx5_mr_share_cache mr_scache; /* Global shared MR cache. */
 	volatile uint64_t *uar_addr;
+	/* HCA caps*/
+	uint32_t mmo_decomp_sq:1;
+	uint32_t mmo_decomp_qp:1;
+	uint32_t mmo_comp_sq:1;
+	uint32_t mmo_comp_qp:1;
+	uint32_t mmo_dma_sq:1;
+	uint32_t mmo_dma_qp:1;
 #ifndef RTE_ARCH_64
 	rte_spinlock_t uar32_sl;
 #endif /* RTE_ARCH_64 */
@@ -61,7 +68,7 @@ struct mlx5_compress_qp {
 	struct mlx5_mr_ctrl mr_ctrl;
 	int socket_id;
 	struct mlx5_devx_cq cq;
-	struct mlx5_devx_sq sq;
+	struct mlx5_devx_qp qp;
 	struct mlx5_pmd_mr opaque_mr;
 	struct rte_comp_op **ops;
 	struct mlx5_compress_priv *priv;
@@ -134,8 +141,8 @@ mlx5_compress_qp_release(struct rte_compressdev *dev, uint16_t qp_id)
 {
 	struct mlx5_compress_qp *qp = dev->data->queue_pairs[qp_id];
 
-	if (qp->sq.sq != NULL)
-		mlx5_devx_sq_destroy(&qp->sq);
+	if (qp->qp.qp != NULL)
+		mlx5_devx_qp_destroy(&qp->qp);
 	if (qp->cq.cq != NULL)
 		mlx5_devx_cq_destroy(&qp->cq);
 	if (qp->opaque_mr.obj != NULL) {
@@ -152,12 +159,12 @@ mlx5_compress_qp_release(struct rte_compressdev *dev, uint16_t qp_id)
 }
 
 static void
-mlx5_compress_init_sq(struct mlx5_compress_qp *qp)
+mlx5_compress_init_qp(struct mlx5_compress_qp *qp)
 {
 	volatile struct mlx5_gga_wqe *restrict wqe =
-		    (volatile struct mlx5_gga_wqe *)qp->sq.wqes;
+		    (volatile struct mlx5_gga_wqe *)qp->qp.wqes;
 	volatile struct mlx5_gga_compress_opaque *opaq = qp->opaque_mr.addr;
-	const uint32_t sq_ds = rte_cpu_to_be_32((qp->sq.sq->id << 8) | 4u);
+	const uint32_t sq_ds = rte_cpu_to_be_32((qp->qp.qp->id << 8) | 4u);
 	const uint32_t flags = RTE_BE32(MLX5_COMP_ALWAYS <<
 					MLX5_COMP_MODE_OFFSET);
 	const uint32_t opaq_lkey = rte_cpu_to_be_32(qp->opaque_mr.lkey);
@@ -182,15 +189,10 @@ mlx5_compress_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 	struct mlx5_devx_cq_attr cq_attr = {
 		.uar_page_id = mlx5_os_get_devx_uar_page_id(priv->uar),
 	};
-	struct mlx5_devx_create_sq_attr sq_attr = {
+	struct mlx5_devx_qp_attr qp_attr = {
+		.pd = priv->pdn,
+		.uar_index = mlx5_os_get_devx_uar_page_id(priv->uar),
 		.user_index = qp_id,
-		.wq_attr = (struct mlx5_devx_wq_attr){
-			.pd = priv->pdn,
-			.uar_page = mlx5_os_get_devx_uar_page_id(priv->uar),
-		},
-	};
-	struct mlx5_devx_modify_sq_attr modify_attr = {
-		.state = MLX5_SQC_STATE_RDY,
 	};
 	uint32_t log_ops_n = rte_log2_u32(max_inflight_ops);
 	uint32_t alloc_size = sizeof(*qp);
@@ -242,24 +244,26 @@ mlx5_compress_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 		DRV_LOG(ERR, "Failed to create CQ.");
 		goto err;
 	}
-	sq_attr.cqn = qp->cq.cq->id;
-	sq_attr.ts_format = mlx5_ts_format_conv(priv->sq_ts_format);
-	ret = mlx5_devx_sq_create(priv->ctx, &qp->sq, log_ops_n, &sq_attr,
+	qp_attr.cqn = qp->cq.cq->id;
+	qp_attr.ts_format = mlx5_ts_format_conv(priv->qp_ts_format);
+	qp_attr.rq_size = 0;
+	qp_attr.sq_size = RTE_BIT32(log_ops_n);
+	qp_attr.mmo = priv->mmo_decomp_qp && priv->mmo_comp_qp
+		      && priv->mmo_dma_qp;
+	ret = mlx5_devx_qp_create(priv->ctx, &qp->qp, log_ops_n, &qp_attr,
 				  socket_id);
 	if (ret != 0) {
-		DRV_LOG(ERR, "Failed to create SQ.");
+		DRV_LOG(ERR, "Failed to create QP.");
 		goto err;
 	}
-	mlx5_compress_init_sq(qp);
-	ret = mlx5_devx_cmd_modify_sq(qp->sq.sq, &modify_attr);
-	if (ret != 0) {
-		DRV_LOG(ERR, "Can't change SQ state to ready.");
+	mlx5_compress_init_qp(qp);
+	ret = mlx5_devx_qp2rts(&qp->qp, 0);
+	if (ret)
 		goto err;
-	}
 	/* Save pointer of global generation number to check memory event. */
 	qp->mr_ctrl.dev_gen_ptr = &priv->mr_scache.dev_gen;
 	DRV_LOG(INFO, "QP %u: SQN=0x%X CQN=0x%X entries num = %u",
-		(uint32_t)qp_id, qp->sq.sq->id, qp->cq.cq->id, qp->entries_n);
+		(uint32_t)qp_id, qp->qp.qp->id, qp->cq.cq->id, qp->entries_n);
 	return 0;
 err:
 	mlx5_compress_qp_release(dev, qp_id);
@@ -508,7 +512,7 @@ mlx5_compress_enqueue_burst(void *queue_pair, struct rte_comp_op **ops,
 {
 	struct mlx5_compress_qp *qp = queue_pair;
 	volatile struct mlx5_gga_wqe *wqes = (volatile struct mlx5_gga_wqe *)
-							      qp->sq.wqes, *wqe;
+							      qp->qp.wqes, *wqe;
 	struct mlx5_compress_xform *xform;
 	struct rte_comp_op *op;
 	uint16_t mask = qp->entries_n - 1;
@@ -563,7 +567,7 @@ mlx5_compress_enqueue_burst(void *queue_pair, struct rte_comp_op **ops,
 	} while (--remain);
 	qp->stats.enqueued_count += nb_ops;
 	rte_io_wmb();
-	qp->sq.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32(qp->pi);
+	qp->qp.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32(qp->pi);
 	rte_wmb();
 	mlx5_compress_uar_write(*(volatile uint64_t *)wqe, qp->priv);
 	rte_wmb();
@@ -598,7 +602,7 @@ mlx5_compress_cqe_err_handle(struct mlx5_compress_qp *qp,
 	volatile struct mlx5_err_cqe *cqe = (volatile struct mlx5_err_cqe *)
 							      &qp->cq.cqes[idx];
 	volatile struct mlx5_gga_wqe *wqes = (volatile struct mlx5_gga_wqe *)
-							      qp->sq.wqes;
+							      qp->qp.wqes;
 	volatile struct mlx5_gga_compress_opaque *opaq = qp->opaque_mr.addr;
 
 	op->status = RTE_COMP_OP_STATUS_ERROR;
@@ -813,8 +817,9 @@ mlx5_compress_dev_probe(struct rte_device *dev)
 		return -rte_errno;
 	}
 	if (mlx5_devx_cmd_query_hca_attr(ctx, &att) != 0 ||
-	    att.mmo_compress_sq_en == 0 || att.mmo_decompress_sq_en == 0 ||
-	    att.mmo_dma_sq_en == 0) {
+	    ((att.mmo_compress_sq_en == 0 || att.mmo_decompress_sq_en == 0 ||
+	    att.mmo_dma_sq_en == 0) && (att.mmo_compress_qp_en == 0 ||
+	    att.mmo_decompress_qp_en == 0 || att.mmo_dma_qp_en == 0))) {
 		DRV_LOG(ERR, "Not enough capabilities to support compress "
 			"operations, maybe old FW/OFED version?");
 		claim_zero(mlx5_glue->close_device(ctx));
@@ -835,10 +840,16 @@ mlx5_compress_dev_probe(struct rte_device *dev)
 	cdev->enqueue_burst = mlx5_compress_enqueue_burst;
 	cdev->feature_flags = RTE_COMPDEV_FF_HW_ACCELERATED;
 	priv = cdev->data->dev_private;
+	priv->mmo_decomp_sq = att.mmo_decompress_sq_en;
+	priv->mmo_decomp_qp = att.mmo_decompress_qp_en;
+	priv->mmo_comp_sq = att.mmo_compress_sq_en;
+	priv->mmo_comp_qp = att.mmo_compress_qp_en;
+	priv->mmo_dma_sq = att.mmo_dma_sq_en;
+	priv->mmo_dma_qp = att.mmo_dma_qp_en;
 	priv->ctx = ctx;
 	priv->cdev = cdev;
 	priv->min_block_size = att.compress_min_block_size;
-	priv->sq_ts_format = att.sq_ts_format;
+	priv->qp_ts_format = att.qp_ts_format;
 	if (mlx5_compress_hw_global_prepare(priv) != 0) {
 		rte_compressdev_pmd_destroy(priv->cdev);
 		claim_zero(mlx5_glue->close_device(priv->ctx));
-- 
2.17.1