From mboxrd@z Thu Jan 1 00:00:00 1970
From: Raja Zidane
To: dev@dpdk.org
CC: Matan Azrad
Date: Mon, 8 Nov 2021 13:09:20 +0000
Message-ID: <20211108130921.19143-4-rzidane@nvidia.com>
In-Reply-To: <20211108130921.19143-1-rzidane@nvidia.com>
References: <20211108123354.2194-1-rzidane@nvidia.com>
 <20211108130921.19143-1-rzidane@nvidia.com>
Subject: [dpdk-dev] [PATCH V2 3/4] crypto/mlx5: fix the queue size configuration
List-Id: DPDK patches and discussions

The DevX interface for QP creation expects the number of WQEBBs.
Wrongly, the number of descriptors was provided to the QP creation.
In addition, the QP size must be a power of 2, which was not guaranteed.

Provide the number of WQEBBs to the QP creation API.
Round up the SQ size to a power of 2.
Rename rq_size to num_of_receive_wqes and sq_size to num_of_send_wqbbs.
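The conversion the fix describes (descriptors, each occupying a WQE set of several 64-byte WQEBBs, turned into a power-of-2 WQEBB count for QP creation) can be sketched standalone. This is an illustrative model, not the driver code: the real implementation uses DPDK's `RTE_BIT32`, `rte_log2_u32`, and `rte_align32pow2`; `align32pow2` and `num_send_wqbbs` below are local stand-ins.

```c
#include <stdint.h>

/* MLX5_SEND_WQE_BB: a WQE Basic Block is 64 bytes. */
#define WQE_BB 64u

/* Round up to the next power of 2 (like DPDK's rte_align32pow2). */
static uint32_t align32pow2(uint32_t x)
{
	x--;
	x |= x >> 1; x |= x >> 2; x |= x >> 4;
	x |= x >> 8; x |= x >> 16;
	return x + 1;
}

/*
 * Send-queue size in WQEBBs for nb_desc descriptors whose WQE set
 * occupies wqe_set_size bytes (wqe_set_size is WQEBB-aligned).
 * This is the quantity the DevX QP-creation API expects, rounded up
 * to a power of 2 as the QPC log_sq_size field requires.
 */
static uint32_t num_send_wqbbs(uint32_t nb_desc, uint32_t wqe_set_size)
{
	return align32pow2(nb_desc * (wqe_set_size / WQE_BB));
}
```

For example, 6 descriptors with a 256-byte WQE set need 24 WQEBBs, which rounds up to 32; passing 6 (the descriptor count) to QP creation, as the old code effectively did, would size the queue far too small.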
Fixes: 6152534e211e ("crypto/mlx5: support queue pairs operations")
Cc: stable@dpdk.org

Signed-off-by: Raja Zidane
Acked-by: Matan Azrad
---
 drivers/common/mlx5/mlx5_devx_cmds.c    |  14 +--
 drivers/common/mlx5/mlx5_devx_cmds.h    |   5 +-
 drivers/compress/mlx5/mlx5_compress.c   |   4 +-
 drivers/crypto/mlx5/mlx5_crypto.c       | 120 +++++++++++++++++++-----
 drivers/crypto/mlx5/mlx5_crypto.h       |   7 ++
 drivers/regex/mlx5/mlx5_regex_control.c |   4 +-
 drivers/vdpa/mlx5/mlx5_vdpa_event.c     |   4 +-
 7 files changed, 120 insertions(+), 38 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index cecbf541f6..e52b995ee3 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -832,6 +832,7 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
 					   MLX5_HCA_CAP_OPMOD_GET_CUR);
 	if (!hcattr)
 		return rc;
+	attr->max_wqe_sz_sq = MLX5_GET(cmd_hca_cap, hcattr, max_wqe_sz_sq);
 	attr->flow_counter_bulk_alloc_bitmap =
 			MLX5_GET(cmd_hca_cap, hcattr, flow_counter_bulk_alloc);
 	attr->flow_counters_dump = MLX5_GET(cmd_hca_cap, hcattr,
@@ -2153,21 +2154,22 @@ mlx5_devx_cmd_create_qp(void *ctx,
 	if (attr->log_page_size > MLX5_ADAPTER_PAGE_SHIFT)
 		MLX5_SET(qpc, qpc, log_page_size,
 			 attr->log_page_size - MLX5_ADAPTER_PAGE_SHIFT);
-	if (attr->sq_size) {
-		MLX5_ASSERT(RTE_IS_POWER_OF_2(attr->sq_size));
+	if (attr->num_of_send_wqbbs) {
+		MLX5_ASSERT(RTE_IS_POWER_OF_2(attr->num_of_send_wqbbs));
 		MLX5_SET(qpc, qpc, cqn_snd, attr->cqn);
 		MLX5_SET(qpc, qpc, log_sq_size,
-			 rte_log2_u32(attr->sq_size));
+			 rte_log2_u32(attr->num_of_send_wqbbs));
 	} else {
 		MLX5_SET(qpc, qpc, no_sq, 1);
 	}
-	if (attr->rq_size) {
-		MLX5_ASSERT(RTE_IS_POWER_OF_2(attr->rq_size));
+	if (attr->num_of_receive_wqes) {
+		MLX5_ASSERT(RTE_IS_POWER_OF_2(
+				attr->num_of_receive_wqes));
 		MLX5_SET(qpc, qpc, cqn_rcv, attr->cqn);
 		MLX5_SET(qpc, qpc, log_rq_stride, attr->log_rq_stride -
 			 MLX5_LOG_RQ_STRIDE_SHIFT);
 		MLX5_SET(qpc, qpc, log_rq_size,
-			 rte_log2_u32(attr->rq_size));
+			 rte_log2_u32(attr->num_of_receive_wqes));
 		MLX5_SET(qpc, qpc, rq_type, MLX5_NON_ZERO_RQ);
 	} else {
 		MLX5_SET(qpc, qpc, rq_type, MLX5_ZERO_LEN_RQ);
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index 447f76f1f9..d7f71646a3 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -251,6 +251,7 @@ struct mlx5_hca_attr {
 	uint32_t log_max_mmo_decompress:5;
 	uint32_t umr_modify_entity_size_disabled:1;
 	uint32_t umr_indirect_mkey_disabled:1;
+	uint16_t max_wqe_sz_sq;
 };

 /* LAG Context. */
@@ -477,9 +478,9 @@ struct mlx5_devx_qp_attr {
 	uint32_t uar_index:24;
 	uint32_t cqn:24;
 	uint32_t log_page_size:5;
-	uint32_t rq_size:17; /* Must be power of 2. */
+	uint32_t num_of_receive_wqes:17; /* Must be power of 2. */
 	uint32_t log_rq_stride:3;
-	uint32_t sq_size:17; /* Must be power of 2. */
+	uint32_t num_of_send_wqbbs:17; /* Must be power of 2. */
 	uint32_t ts_format:2;
 	uint32_t dbr_umem_valid:1;
 	uint32_t dbr_umem_id;
diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c
index d5511aebdf..7813af38e6 100644
--- a/drivers/compress/mlx5/mlx5_compress.c
+++ b/drivers/compress/mlx5/mlx5_compress.c
@@ -244,8 +244,8 @@ mlx5_compress_qp_setup(struct rte_compressdev *dev, uint16_t qp_id,
 	qp_attr.cqn = qp->cq.cq->id;
 	qp_attr.ts_format =
		mlx5_ts_format_conv(priv->cdev->config.hca_attr.qp_ts_format);
-	qp_attr.rq_size = 0;
-	qp_attr.sq_size = RTE_BIT32(log_ops_n);
+	qp_attr.num_of_receive_wqes = 0;
+	qp_attr.num_of_send_wqbbs = RTE_BIT32(log_ops_n);
 	qp_attr.mmo = priv->mmo_decomp_qp && priv->mmo_comp_qp &&
		      priv->mmo_dma_qp;
 	ret = mlx5_devx_qp_create(priv->cdev->ctx, &qp->qp, log_ops_n, &qp_attr,
diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c
index 1d0f1f3cfc..9fdbee9be1 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.c
+++ b/drivers/crypto/mlx5/mlx5_crypto.c
@@ -545,7 +545,7 @@ mlx5_crypto_qp_init(struct mlx5_crypto_priv *priv, struct mlx5_crypto_qp *qp)
 	ucseg->if_cf_toe_cq_res = RTE_BE32(1u << MLX5_UMRC_IF_OFFSET);
 	ucseg->mkey_mask = RTE_BE64(1u << 0); /* Mkey length bit. */
 	ucseg->ko_to_bs = rte_cpu_to_be_32
-		((RTE_ALIGN(priv->max_segs_num, 4u) <<
+		((MLX5_CRYPTO_KLM_SEGS_NUM(priv->umr_wqe_size) <<
		 MLX5_UMRC_KO_OFFSET) | (4 << MLX5_UMRC_TO_BS_OFFSET));
 	bsf->keytag = priv->keytag;
 	/* Init RDMA WRITE WQE. */
@@ -569,7 +569,7 @@ mlx5_crypto_indirect_mkeys_prepare(struct mlx5_crypto_priv *priv,
 		.umr_en = 1,
 		.crypto_en = 1,
 		.set_remote_rw = 1,
-		.klm_num = RTE_ALIGN(priv->max_segs_num, 4),
+		.klm_num = MLX5_CRYPTO_KLM_SEGS_NUM(priv->umr_wqe_size),
 	};

 	for (umr = (struct mlx5_umr_wqe *)qp->qp_obj.umem_buf, i = 0;
@@ -597,6 +597,7 @@ mlx5_crypto_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 	uint16_t log_nb_desc = rte_log2_u32(qp_conf->nb_descriptors);
 	uint32_t ret;
 	uint32_t alloc_size = sizeof(*qp);
+	uint32_t log_wqbb_n;
 	struct mlx5_devx_cq_attr cq_attr = {
 		.uar_page_id = mlx5_os_get_devx_uar_page_id(priv->uar.obj),
 	};
@@ -619,14 +620,16 @@ mlx5_crypto_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
 		DRV_LOG(ERR, "Failed to create CQ.");
 		goto error;
 	}
+	log_wqbb_n = rte_log2_u32(RTE_BIT32(log_nb_desc) *
+			(priv->wqe_set_size / MLX5_SEND_WQE_BB));
 	attr.pd = priv->cdev->pdn;
 	attr.uar_index = mlx5_os_get_devx_uar_page_id(priv->uar.obj);
 	attr.cqn = qp->cq_obj.cq->id;
-	attr.rq_size = 0;
-	attr.sq_size = RTE_BIT32(log_nb_desc);
+	attr.num_of_receive_wqes = 0;
+	attr.num_of_send_wqbbs = RTE_BIT32(log_wqbb_n);
 	attr.ts_format =
		mlx5_ts_format_conv(priv->cdev->config.hca_attr.qp_ts_format);
-	ret = mlx5_devx_qp_create(priv->cdev->ctx, &qp->qp_obj, log_nb_desc,
+	ret = mlx5_devx_qp_create(priv->cdev->ctx, &qp->qp_obj, log_wqbb_n,
				  &attr, socket_id);
 	if (ret) {
 		DRV_LOG(ERR, "Failed to create QP.");
@@ -747,10 +750,8 @@ mlx5_crypto_args_check_handler(const char *key, const char *val, void *opaque)
 		return -errno;
 	}
 	if (strcmp(key, "max_segs_num") == 0) {
-		if (!tmp || tmp > MLX5_CRYPTO_MAX_SEGS) {
-			DRV_LOG(WARNING, "Invalid max_segs_num: %d, should"
-				" be less than %d.",
-				(uint32_t)tmp, MLX5_CRYPTO_MAX_SEGS);
+		if (!tmp) {
+			DRV_LOG(ERR, "max_segs_num must be greater than 0.");
 			rte_errno = EINVAL;
 			return -rte_errno;
 		}
@@ -809,6 +810,81 @@ mlx5_crypto_parse_devargs(struct rte_devargs *devargs,
 	return 0;
 }

+/*
+ * Calculate UMR WQE size and RDMA Write WQE size with the
+ * following limitations:
+ *	- Each WQE size is multiple of 64.
+ *	- The summarize of both UMR WQE and RDMA_W WQE is a power of 2.
+ *	- The number of entries in the UMR WQE's KLM list is multiple of 4.
+ */
+static void
+mlx5_crypto_get_wqe_sizes(uint32_t segs_num, uint32_t *umr_size,
+			uint32_t *rdmaw_size)
+{
+	uint32_t diff, wqe_set_size;
+
+	*umr_size = MLX5_CRYPTO_UMR_WQE_STATIC_SIZE +
+			RTE_ALIGN(segs_num, 4) *
+			sizeof(struct mlx5_wqe_dseg);
+	/* Make sure UMR WQE size is multiple of WQBB. */
+	*umr_size = RTE_ALIGN(*umr_size, MLX5_SEND_WQE_BB);
+	*rdmaw_size = sizeof(struct mlx5_rdma_write_wqe) +
+			sizeof(struct mlx5_wqe_dseg) *
+			(segs_num <= 2 ? 2 : 2 +
+			RTE_ALIGN(segs_num - 2, 4));
+	/* Make sure RDMA_WRITE WQE size is multiple of WQBB. */
+	*rdmaw_size = RTE_ALIGN(*rdmaw_size, MLX5_SEND_WQE_BB);
+	wqe_set_size = *rdmaw_size + *umr_size;
+	diff = rte_align32pow2(wqe_set_size) - wqe_set_size;
+	/* Make sure wqe_set size is power of 2. */
+	if (diff)
+		*umr_size += diff;
+}
+
+static uint8_t
+mlx5_crypto_max_segs_num(uint16_t max_wqe_size)
+{
+	int klms_sizes = max_wqe_size - MLX5_CRYPTO_UMR_WQE_STATIC_SIZE;
+	uint32_t max_segs_cap = RTE_ALIGN_FLOOR(klms_sizes, MLX5_SEND_WQE_BB) /
+			sizeof(struct mlx5_wqe_dseg);
+
+	MLX5_ASSERT(klms_sizes >= MLX5_SEND_WQE_BB);
+	while (max_segs_cap) {
+		uint32_t umr_wqe_size, rdmw_wqe_size;
+
+		mlx5_crypto_get_wqe_sizes(max_segs_cap, &umr_wqe_size,
+						&rdmw_wqe_size);
+		if (umr_wqe_size <= max_wqe_size &&
+				rdmw_wqe_size <= max_wqe_size)
+			break;
+		max_segs_cap -= 4;
+	}
+	return max_segs_cap;
+}
+
+static int
+mlx5_crypto_configure_wqe_size(struct mlx5_crypto_priv *priv,
+				uint16_t max_wqe_size, uint32_t max_segs_num)
+{
+	uint32_t rdmw_wqe_size, umr_wqe_size;
+
+	mlx5_crypto_get_wqe_sizes(max_segs_num, &umr_wqe_size,
+					&rdmw_wqe_size);
+	priv->wqe_set_size = rdmw_wqe_size + umr_wqe_size;
+	if (umr_wqe_size > max_wqe_size ||
+				rdmw_wqe_size > max_wqe_size) {
+		DRV_LOG(ERR, "Invalid max_segs_num: %u. should be %u or lower.",
+			max_segs_num,
+			mlx5_crypto_max_segs_num(max_wqe_size));
+		rte_errno = EINVAL;
+		return -EINVAL;
+	}
+	priv->umr_wqe_size = (uint16_t)umr_wqe_size;
+	priv->umr_wqe_stride = priv->umr_wqe_size / MLX5_SEND_WQE_BB;
+	priv->max_rdmar_ds = rdmw_wqe_size / sizeof(struct mlx5_wqe_dseg);
+	return 0;
+}
+
 static int
 mlx5_crypto_dev_probe(struct mlx5_common_device *cdev)
 {
@@ -824,7 +900,6 @@ mlx5_crypto_dev_probe(struct mlx5_common_device *cdev)
 		RTE_CRYPTODEV_PMD_DEFAULT_MAX_NB_QUEUE_PAIRS,
 	};
 	const char *ibdev_name = mlx5_os_get_ctx_device_name(cdev->ctx);
-	uint16_t rdmw_wqe_size;
 	int ret;

 	if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
@@ -873,20 +948,17 @@ mlx5_crypto_dev_probe(struct mlx5_common_device *cdev)
 	}
 	priv->login_obj = login;
 	priv->keytag = rte_cpu_to_be_64(devarg_prms.keytag);
-	priv->max_segs_num = devarg_prms.max_segs_num;
-	priv->umr_wqe_size = sizeof(struct mlx5_wqe_umr_bsf_seg) +
-			     sizeof(struct mlx5_wqe_cseg) +
-			     sizeof(struct mlx5_wqe_umr_cseg) +
-			     sizeof(struct mlx5_wqe_mkey_cseg) +
-			     RTE_ALIGN(priv->max_segs_num, 4) *
-			     sizeof(struct mlx5_wqe_dseg);
-	rdmw_wqe_size = sizeof(struct mlx5_rdma_write_wqe) +
-			      sizeof(struct mlx5_wqe_dseg) *
-			      (priv->max_segs_num <= 2 ? 2 : 2 +
-			       RTE_ALIGN(priv->max_segs_num - 2, 4));
-	priv->wqe_set_size = priv->umr_wqe_size + rdmw_wqe_size;
-	priv->umr_wqe_stride = priv->umr_wqe_size / MLX5_SEND_WQE_BB;
-	priv->max_rdmar_ds = rdmw_wqe_size / sizeof(struct mlx5_wqe_dseg);
+	ret = mlx5_crypto_configure_wqe_size(priv,
+		cdev->config.hca_attr.max_wqe_sz_sq, devarg_prms.max_segs_num);
+	if (ret) {
+		mlx5_devx_uar_release(&priv->uar);
+		rte_cryptodev_pmd_destroy(priv->crypto_dev);
+		return -1;
+	}
+	DRV_LOG(INFO, "Max number of segments: %u.",
+		(unsigned int)RTE_MIN(
+			MLX5_CRYPTO_KLM_SEGS_NUM(priv->umr_wqe_size),
+			(uint16_t)(priv->max_rdmar_ds - 2)));
 	pthread_mutex_lock(&priv_list_lock);
 	TAILQ_INSERT_TAIL(&mlx5_crypto_priv_list, priv, next);
 	pthread_mutex_unlock(&priv_list_lock);
diff --git a/drivers/crypto/mlx5/mlx5_crypto.h b/drivers/crypto/mlx5/mlx5_crypto.h
index 135cd78212..f04b3d8c20 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.h
+++ b/drivers/crypto/mlx5/mlx5_crypto.h
@@ -16,6 +16,13 @@
 #define MLX5_CRYPTO_DEK_HTABLE_SZ (1 << 11)
 #define MLX5_CRYPTO_KEY_LENGTH 80

+#define MLX5_CRYPTO_UMR_WQE_STATIC_SIZE (sizeof(struct mlx5_wqe_cseg) +\
+					sizeof(struct mlx5_wqe_umr_cseg) +\
+					sizeof(struct mlx5_wqe_mkey_cseg) +\
+					sizeof(struct mlx5_wqe_umr_bsf_seg))
+#define MLX5_CRYPTO_KLM_SEGS_NUM(umr_wqe_sz) ((umr_wqe_sz -\
+					MLX5_CRYPTO_UMR_WQE_STATIC_SIZE) /\
+					MLX5_WSEG_SIZE)
+
 struct mlx5_crypto_priv {
 	TAILQ_ENTRY(mlx5_crypto_priv) next;
diff --git a/drivers/regex/mlx5/mlx5_regex_control.c b/drivers/regex/mlx5/mlx5_regex_control.c
index d184b1a921..46e400a93f 100644
--- a/drivers/regex/mlx5/mlx5_regex_control.c
+++ b/drivers/regex/mlx5/mlx5_regex_control.c
@@ -149,8 +149,8 @@ regex_ctrl_create_hw_qp(struct mlx5_regex_priv *priv, struct mlx5_regex_qp *qp,
 	qp_obj->qpn = q_ind;
 	qp_obj->ci = 0;
 	qp_obj->pi = 0;
-	attr.rq_size = 0;
-	attr.sq_size = RTE_BIT32(MLX5_REGEX_WQE_LOG_NUM(priv->has_umr,
+	attr.num_of_receive_wqes = 0;
+	attr.num_of_send_wqbbs = RTE_BIT32(MLX5_REGEX_WQE_LOG_NUM(priv->has_umr,
			log_nb_desc));
 	attr.mmo = priv->mmo_regex_qp_cap;
 	ret = mlx5_devx_qp_create(priv->cdev->ctx, &qp_obj->qp_obj,
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 9cc71714a2..657c39dae1 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -589,9 +589,9 @@ mlx5_vdpa_event_qp_create(struct mlx5_vdpa_priv *priv, uint16_t desc_n,
 	}
 	attr.uar_index = mlx5_os_get_devx_uar_page_id(priv->uar.obj);
 	attr.cqn = eqp->cq.cq_obj.cq->id;
-	attr.rq_size = RTE_BIT32(log_desc_n);
+	attr.num_of_receive_wqes = RTE_BIT32(log_desc_n);
 	attr.log_rq_stride = rte_log2_u32(MLX5_WSEG_SIZE);
-	attr.sq_size = 0; /* No need SQ. */
+	attr.num_of_send_wqbbs = 0; /* No need SQ. */
 	attr.ts_format =
		mlx5_ts_format_conv(priv->cdev->config.hca_attr.qp_ts_format);
 	ret = mlx5_devx_qp_create(priv->cdev->ctx, &(eqp->sw_qp), log_desc_n,
--
2.17.1
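The sizing rule implemented by `mlx5_crypto_get_wqe_sizes` above can be modeled in isolation: pad both the UMR WQE and the RDMA Write WQE to a multiple of the 64-byte WQEBB, then absorb into the UMR WQE whatever padding is needed to make the whole WQE set a power of 2. The static-header and segment-descriptor sizes below are illustrative stand-ins, not the real mlx5 struct sizes.

```c
#include <stdint.h>

#define WQE_BB 64u          /* stand-in for MLX5_SEND_WQE_BB */
#define UMR_STATIC_SZ 48u   /* stand-in for MLX5_CRYPTO_UMR_WQE_STATIC_SIZE */
#define DSEG_SZ 16u         /* stand-in for sizeof(struct mlx5_wqe_dseg) */
#define RDMAW_STATIC_SZ 32u /* stand-in for sizeof(struct mlx5_rdma_write_wqe) */

/* Round v up to a multiple of a (like RTE_ALIGN). */
static uint32_t align_up(uint32_t v, uint32_t a) { return (v + a - 1) / a * a; }

/* Round up to the next power of 2 (like rte_align32pow2). */
static uint32_t align32pow2(uint32_t x)
{
	x--;
	x |= x >> 1; x |= x >> 2; x |= x >> 4; x |= x >> 8; x |= x >> 16;
	return x + 1;
}

/* Model of the patch's WQE-set sizing: each WQE is WQEBB-aligned and the
 * set (UMR + RDMA Write) is padded, via the UMR WQE, to a power of 2. */
static void get_wqe_sizes(uint32_t segs_num, uint32_t *umr, uint32_t *rdmaw)
{
	uint32_t set_size, diff;

	/* KLM list length rounds up to a multiple of 4 entries. */
	*umr = align_up(UMR_STATIC_SZ + align_up(segs_num, 4) * DSEG_SZ,
			WQE_BB);
	*rdmaw = align_up(RDMAW_STATIC_SZ + DSEG_SZ *
			  (segs_num <= 2 ? 2 : 2 + align_up(segs_num - 2, 4)),
			  WQE_BB);
	set_size = *umr + *rdmaw;
	diff = align32pow2(set_size) - set_size;
	*umr += diff; /* pad the UMR WQE so the set is a power of 2 */
}
```

With 8 segments and these stand-in sizes, the UMR WQE aligns to 192 bytes and the RDMA Write WQE to 192 bytes; the 384-byte set is then padded to 512 by growing the UMR WQE to 320. The power-of-2 set size is what lets the driver convert a descriptor count into the WQEBB count it now hands to QP creation.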