From mboxrd@z Thu Jan 1 00:00:00 1970
From: Michael Baum
To:
CC: Matan Azrad, Viacheslav Ovsiienko
Subject: [PATCH 20.11 v2 2/5] net/mlx5: improve stride parameter names
Date: Wed, 23 Feb 2022 18:07:10 +0200
Message-ID: <20220223160713.2992784-3-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20220223160713.2992784-1-michaelba@nvidia.com>
References: <20220223160713.2992784-1-michaelba@nvidia.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: stable@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: patches for DPDK stable branches
Errors-To: stable-bounces@dpdk.org

[ upstream commit 0947ed380febad9d6f794b6f4e9aa9137860a06e ]

In the striding RQ management there are two important parameters: the
size of a single stride in bytes and the number of strides.

Both the data-path structure and the config structure keep the log of
these parameters. However, their names do not mention that the value
is a log, which may be misleading, as if the fields held the values
themselves.

This patch updates their names to describe the values more accurately.

Fixes: ecb160456aed ("net/mlx5: add device parameter for MPRQ stride size")

Signed-off-by: Michael Baum
Acked-by: Matan Azrad
---
 drivers/net/mlx5/linux/mlx5_os.c    |  36 +++++-----
 drivers/net/mlx5/linux/mlx5_verbs.c |   4 +-
 drivers/net/mlx5/mlx5.c             |   4 +-
 drivers/net/mlx5/mlx5.h             |   8 +--
 drivers/net/mlx5/mlx5_defs.h        |   4 +-
 drivers/net/mlx5/mlx5_devx.c        |   4 +-
 drivers/net/mlx5/mlx5_rxq.c         | 104 +++++++++++++++-------------
 drivers/net/mlx5/mlx5_rxtx.c        |  22 +++---
 drivers/net/mlx5/mlx5_rxtx.h        |  10 +--
 drivers/net/mlx5/mlx5_rxtx_vec.c    |   8 +--
 10 files changed, 105 insertions(+), 99 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 375f8ad984..6393cc5007 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1383,34 +1383,34 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	DRV_LOG(DEBUG, "FCS stripping configuration is %ssupported",
 		(config->hw_fcs_strip ? "" : "not "));
 	if (config->mprq.enabled && mprq) {
-		if (config->mprq.stride_num_n &&
-		    (config->mprq.stride_num_n > mprq_max_stride_num_n ||
-		     config->mprq.stride_num_n < mprq_min_stride_num_n)) {
-			config->mprq.stride_num_n =
-				RTE_MIN(RTE_MAX(MLX5_MPRQ_STRIDE_NUM_N,
-						mprq_min_stride_num_n),
-					mprq_max_stride_num_n);
+		if (config->mprq.log_stride_num &&
+		    (config->mprq.log_stride_num > mprq_max_stride_num_n ||
+		     config->mprq.log_stride_num < mprq_min_stride_num_n)) {
+			config->mprq.log_stride_num =
+				RTE_MIN(RTE_MAX(MLX5_MPRQ_DEFAULT_LOG_STRIDE_NUM,
+						mprq_min_stride_num_n),
+					mprq_max_stride_num_n);
 			DRV_LOG(WARNING,
 				"the number of strides"
 				" for Multi-Packet RQ is out of range,"
 				" setting default value (%u)",
-				1 << config->mprq.stride_num_n);
+				1 << config->mprq.log_stride_num);
 		}
-		if (config->mprq.stride_size_n &&
-		    (config->mprq.stride_size_n > mprq_max_stride_size_n ||
-		     config->mprq.stride_size_n < mprq_min_stride_size_n)) {
-			config->mprq.stride_size_n =
-				RTE_MIN(RTE_MAX(MLX5_MPRQ_STRIDE_SIZE_N,
-						mprq_min_stride_size_n),
-					mprq_max_stride_size_n);
+		if (config->mprq.log_stride_size &&
+		    (config->mprq.log_stride_size > mprq_max_stride_size_n ||
+		     config->mprq.log_stride_size < mprq_min_stride_size_n)) {
+			config->mprq.log_stride_size =
+				RTE_MIN(RTE_MAX(MLX5_MPRQ_DEFAULT_LOG_STRIDE_SIZE,
+						mprq_min_stride_size_n),
+					mprq_max_stride_size_n);
 			DRV_LOG(WARNING,
 				"the size of a stride"
 				" for Multi-Packet RQ is out of range,"
 				" setting default value (%u)",
-				1 << config->mprq.stride_size_n);
+				1 << config->mprq.log_stride_size);
 		}
-		config->mprq.min_stride_size_n = mprq_min_stride_size_n;
-		config->mprq.max_stride_size_n = mprq_max_stride_size_n;
+		config->mprq.log_min_stride_size = mprq_min_stride_size_n;
+		config->mprq.log_max_stride_size = mprq_max_stride_size_n;
 	} else if (config->mprq.enabled && !mprq) {
 		DRV_LOG(WARNING, "Multi-Packet RQ isn't supported");
 		config->mprq.enabled = 0;
diff --git a/drivers/net/mlx5/linux/mlx5_verbs.c b/drivers/net/mlx5/linux/mlx5_verbs.c
index 95e8eb06d1..29e569c321 100644
--- a/drivers/net/mlx5/linux/mlx5_verbs.c
+++ b/drivers/net/mlx5/linux/mlx5_verbs.c
@@ -317,8 +317,8 @@ mlx5_rxq_ibv_wq_create(struct rte_eth_dev *dev, uint16_t idx)
 
 		wq_attr.mlx5.comp_mask |= MLX5DV_WQ_INIT_ATTR_MASK_STRIDING_RQ;
 		*mprq_attr = (struct mlx5dv_striding_rq_init_attr){
-			.single_stride_log_num_of_bytes = rxq_data->strd_sz_n,
-			.single_wqe_log_num_of_strides = rxq_data->strd_num_n,
+			.single_stride_log_num_of_bytes = rxq_data->log_strd_sz,
+			.single_wqe_log_num_of_strides = rxq_data->log_strd_num,
 			.two_byte_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT,
 		};
 	}
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 0af0646f51..cff1188213 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1645,9 +1645,9 @@ mlx5_args_check(const char *key, const char *val, void *opaque)
 	} else if (strcmp(MLX5_RX_MPRQ_EN, key) == 0) {
 		config->mprq.enabled = !!tmp;
 	} else if (strcmp(MLX5_RX_MPRQ_LOG_STRIDE_NUM, key) == 0) {
-		config->mprq.stride_num_n = tmp;
+		config->mprq.log_stride_num = tmp;
 	} else if (strcmp(MLX5_RX_MPRQ_LOG_STRIDE_SIZE, key) == 0) {
-		config->mprq.stride_size_n = tmp;
+		config->mprq.log_stride_size = tmp;
 	} else if (strcmp(MLX5_RX_MPRQ_MAX_MEMCPY_LEN, key) == 0) {
 		config->mprq.max_memcpy_len = tmp;
 	} else if (strcmp(MLX5_RXQS_MIN_MPRQ, key) == 0) {
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 93d9ad5e64..071e8c6caf 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -228,10 +228,10 @@ struct mlx5_dev_config {
 	unsigned int dv_miss_info:1; /* restore packet after partial hw miss */
 	struct {
 		unsigned int enabled:1; /* Whether MPRQ is enabled. */
-		unsigned int stride_num_n; /* Number of strides. */
-		unsigned int stride_size_n; /* Size of a stride. */
-		unsigned int min_stride_size_n; /* Min size of a stride. */
-		unsigned int max_stride_size_n; /* Max size of a stride. */
+		unsigned int log_stride_num; /* Log number of strides. */
+		unsigned int log_stride_size; /* Log size of a stride. */
+		unsigned int log_min_stride_size; /* Log min size of a stride.*/
+		unsigned int log_max_stride_size; /* Log max size of a stride.*/
 		unsigned int max_memcpy_len;
 		/* Maximum packet size to memcpy Rx packets. */
 		unsigned int min_rxqs_num;
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index db6f128f62..ee5c61409c 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -138,10 +138,10 @@
 #endif
 
 /* Log 2 of the default number of strides per WQE for Multi-Packet RQ. */
-#define MLX5_MPRQ_STRIDE_NUM_N 6U
+#define MLX5_MPRQ_DEFAULT_LOG_STRIDE_NUM 6U
 
 /* Log 2 of the default size of a stride per WQE for Multi-Packet RQ. */
-#define MLX5_MPRQ_STRIDE_SIZE_N 11U
+#define MLX5_MPRQ_DEFAULT_LOG_STRIDE_SIZE 11U
 
 /* Two-byte shift is disabled for Multi-Packet RQ. */
 #define MLX5_MPRQ_TWO_BYTE_SHIFT 0
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index ac1939415b..b2c770f537 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -348,11 +348,11 @@ mlx5_rxq_create_devx_rq_resources(struct rte_eth_dev *dev, uint16_t idx)
 		 * 512*2^single_wqe_log_num_of_strides.
 		 */
 		rq_attr.wq_attr.single_wqe_log_num_of_strides =
-				rxq_data->strd_num_n -
+				rxq_data->log_strd_num -
 				MLX5_MIN_SINGLE_WQE_LOG_NUM_STRIDES;
 		/* Stride size = (2^single_stride_log_num_of_bytes)*64B. */
 		rq_attr.wq_attr.single_stride_log_num_of_bytes =
-				rxq_data->strd_sz_n -
+				rxq_data->log_strd_sz -
 				MLX5_MIN_SINGLE_STRIDE_LOG_NUM_BYTES;
 		wqe_size = sizeof(struct mlx5_wqe_mprq);
 	} else {
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index d7e5d194e3..b83a2f2d60 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -80,7 +80,7 @@ mlx5_check_mprq_support(struct rte_eth_dev *dev)
 inline int
 mlx5_rxq_mprq_enabled(struct mlx5_rxq_data *rxq)
 {
-	return rxq->strd_num_n > 0;
+	return rxq->log_strd_num > 0;
 }
 
 /**
@@ -135,7 +135,7 @@ mlx5_rxq_cqe_num(struct mlx5_rxq_data *rxq_data)
 	unsigned int wqe_n = 1 << rxq_data->elts_n;
 
 	if (mlx5_rxq_mprq_enabled(rxq_data))
-		cqe_n = wqe_n * (1 << rxq_data->strd_num_n) - 1;
+		cqe_n = wqe_n * RTE_BIT32(rxq_data->log_strd_num) - 1;
 	else
 		cqe_n = wqe_n - 1;
 	return cqe_n;
@@ -205,8 +205,9 @@ rxq_alloc_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
 {
 	const unsigned int sges_n = 1 << rxq_ctrl->rxq.sges_n;
 	unsigned int elts_n = mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq) ?
-		(1 << rxq_ctrl->rxq.elts_n) * (1 << rxq_ctrl->rxq.strd_num_n) :
-		(1 << rxq_ctrl->rxq.elts_n);
+		RTE_BIT32(rxq_ctrl->rxq.elts_n) *
+		RTE_BIT32(rxq_ctrl->rxq.log_strd_num) :
+		RTE_BIT32(rxq_ctrl->rxq.elts_n);
 	bool has_vec_support = mlx5_rxq_check_vec_support(&rxq_ctrl->rxq) > 0;
 	unsigned int i;
 	int err;
@@ -347,8 +348,8 @@ rxq_free_elts_sprq(struct mlx5_rxq_ctrl *rxq_ctrl)
 {
 	struct mlx5_rxq_data *rxq = &rxq_ctrl->rxq;
 	const uint16_t q_n = mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq) ?
-		(1 << rxq->elts_n) * (1 << rxq->strd_num_n) :
-		(1 << rxq->elts_n);
+		RTE_BIT32(rxq->elts_n) * RTE_BIT32(rxq->log_strd_num) :
+		RTE_BIT32(rxq->elts_n);
 	const uint16_t q_mask = q_n - 1;
 	uint16_t elts_ci = mlx5_rxq_mprq_enabled(&rxq_ctrl->rxq) ?
 		rxq->elts_ci : rxq->rq_ci;
@@ -1235,8 +1236,8 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 	unsigned int buf_len;
 	unsigned int obj_num;
 	unsigned int obj_size;
-	unsigned int strd_num_n = 0;
-	unsigned int strd_sz_n = 0;
+	unsigned int log_strd_num = 0;
+	unsigned int log_strd_sz = 0;
 	unsigned int i;
 	unsigned int n_ibv = 0;
 
@@ -1253,16 +1254,18 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 		n_ibv++;
 		desc += 1 << rxq->elts_n;
 		/* Get the max number of strides. */
-		if (strd_num_n < rxq->strd_num_n)
-			strd_num_n = rxq->strd_num_n;
+		if (log_strd_num < rxq->log_strd_num)
+			log_strd_num = rxq->log_strd_num;
 		/* Get the max size of a stride. */
-		if (strd_sz_n < rxq->strd_sz_n)
-			strd_sz_n = rxq->strd_sz_n;
-	}
-	MLX5_ASSERT(strd_num_n && strd_sz_n);
-	buf_len = (1 << strd_num_n) * (1 << strd_sz_n);
-	obj_size = sizeof(struct mlx5_mprq_buf) + buf_len + (1 << strd_num_n) *
-		sizeof(struct rte_mbuf_ext_shared_info) + RTE_PKTMBUF_HEADROOM;
+		if (log_strd_sz < rxq->log_strd_sz)
+			log_strd_sz = rxq->log_strd_sz;
+	}
+	MLX5_ASSERT(log_strd_num && log_strd_sz);
+	buf_len = RTE_BIT32(log_strd_num) * RTE_BIT32(log_strd_sz);
+	obj_size = sizeof(struct mlx5_mprq_buf) + buf_len +
+		   RTE_BIT32(log_strd_num) *
+		   sizeof(struct rte_mbuf_ext_shared_info) +
+		   RTE_PKTMBUF_HEADROOM;
 	/*
 	 * Received packets can be either memcpy'd or externally referenced. In
 	 * case that the packet is attached to an mbuf as an external buffer, as
@@ -1308,7 +1311,7 @@ mlx5_mprq_alloc_mp(struct rte_eth_dev *dev)
 	snprintf(name, sizeof(name), "port-%u-mprq", dev->data->port_id);
 	mp = rte_mempool_create(name, obj_num, obj_size, MLX5_MPRQ_MP_CACHE_SZ,
 				0, NULL, NULL, mlx5_mprq_buf_init,
-				(void *)((uintptr_t)1 << strd_num_n),
+				(void *)((uintptr_t)1 << log_strd_num),
 				dev->device->numa_node, 0);
 	if (mp == NULL) {
 		DRV_LOG(ERR,
@@ -1413,15 +1416,18 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	unsigned int first_mb_free_size = mb_len - RTE_PKTMBUF_HEADROOM;
 	const int mprq_en = mlx5_check_mprq_support(dev) > 0 && n_seg == 1 &&
 		!rx_seg[0].offset && !rx_seg[0].length;
-	unsigned int mprq_stride_nums = config->mprq.stride_num_n ?
-		config->mprq.stride_num_n : MLX5_MPRQ_STRIDE_NUM_N;
-	unsigned int mprq_stride_size = non_scatter_min_mbuf_size <=
-		(1U << config->mprq.max_stride_size_n) ?
-		log2above(non_scatter_min_mbuf_size) : MLX5_MPRQ_STRIDE_SIZE_N;
-	unsigned int mprq_stride_cap = (config->mprq.stride_num_n ?
-		(1U << config->mprq.stride_num_n) : (1U << mprq_stride_nums)) *
-		(config->mprq.stride_size_n ?
-		(1U << config->mprq.stride_size_n) : (1U << mprq_stride_size));
+	unsigned int log_mprq_stride_nums = config->mprq.log_stride_num ?
+		config->mprq.log_stride_num : MLX5_MPRQ_DEFAULT_LOG_STRIDE_NUM;
+	unsigned int log_mprq_stride_size = non_scatter_min_mbuf_size <=
+		RTE_BIT32(config->mprq.log_max_stride_size) ?
+		log2above(non_scatter_min_mbuf_size) :
+		MLX5_MPRQ_DEFAULT_LOG_STRIDE_SIZE;
+	unsigned int mprq_stride_cap = (config->mprq.log_stride_num ?
+		RTE_BIT32(config->mprq.log_stride_num) :
+		RTE_BIT32(log_mprq_stride_nums)) *
+		(config->mprq.log_stride_size ?
+		RTE_BIT32(config->mprq.log_stride_size) :
+		RTE_BIT32(log_mprq_stride_size));
 	/*
 	 * Always allocate extra slots, even if eventually
 	 * the vector Rx will not be used.
@@ -1433,7 +1439,7 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	tmpl = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO,
 		sizeof(*tmpl) + desc_n * sizeof(struct rte_mbuf *) +
 		(!!mprq_en) *
-		(desc >> mprq_stride_nums) * sizeof(struct mlx5_mprq_buf *),
+		(desc >> log_mprq_stride_nums) * sizeof(struct mlx5_mprq_buf *),
 		0, socket);
 	if (!tmpl) {
 		rte_errno = ENOMEM;
@@ -1529,37 +1535,37 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	 * - MPRQ is enabled.
 	 * - The number of descs is more than the number of strides.
 	 * - max_rx_pkt_len plus overhead is less than the max size
-	 *   of a stride or mprq_stride_size is specified by a user.
+	 *   of a stride or log_mprq_stride_size is specified by a user.
 	 *   Need to make sure that there are enough strides to encap
-	 *   the maximum packet size in case mprq_stride_size is set.
+	 *   the maximum packet size in case log_mprq_stride_size is set.
 	 * Otherwise, enable Rx scatter if necessary.
 	 */
-	if (mprq_en && desc > (1U << mprq_stride_nums) &&
+	if (mprq_en && desc > RTE_BIT32(log_mprq_stride_nums) &&
 	    (non_scatter_min_mbuf_size <=
-	     (1U << config->mprq.max_stride_size_n) ||
-	     (config->mprq.stride_size_n &&
+	     RTE_BIT32(config->mprq.log_max_stride_size) ||
+	     (config->mprq.log_stride_size &&
 	      non_scatter_min_mbuf_size <= mprq_stride_cap))) {
 		/* TODO: Rx scatter isn't supported yet. */
 		tmpl->rxq.sges_n = 0;
 		/* Trim the number of descs needed. */
-		desc >>= mprq_stride_nums;
-		tmpl->rxq.strd_num_n = config->mprq.stride_num_n ?
-			config->mprq.stride_num_n : mprq_stride_nums;
-		tmpl->rxq.strd_sz_n = config->mprq.stride_size_n ?
-			config->mprq.stride_size_n : mprq_stride_size;
+		desc >>= log_mprq_stride_nums;
+		tmpl->rxq.log_strd_num = config->mprq.log_stride_num ?
+			config->mprq.log_stride_num : log_mprq_stride_nums;
+		tmpl->rxq.log_strd_sz = config->mprq.log_stride_size ?
+			config->mprq.log_stride_size : log_mprq_stride_size;
 		tmpl->rxq.strd_shift_en = MLX5_MPRQ_TWO_BYTE_SHIFT;
 		tmpl->rxq.strd_scatter_en =
 				!!(offloads & DEV_RX_OFFLOAD_SCATTER);
 		tmpl->rxq.mprq_max_memcpy_len = RTE_MIN(first_mb_free_size,
 				config->mprq.max_memcpy_len);
 		max_lro_size = RTE_MIN(max_rx_pkt_len,
-				       (1u << tmpl->rxq.strd_num_n) *
-				       (1u << tmpl->rxq.strd_sz_n));
+				       RTE_BIT32(tmpl->rxq.log_strd_num) *
+				       RTE_BIT32(tmpl->rxq.log_strd_sz));
 		DRV_LOG(DEBUG,
 			"port %u Rx queue %u: Multi-Packet RQ is enabled"
 			" strd_num_n = %u, strd_sz_n = %u",
 			dev->data->port_id, idx,
-			tmpl->rxq.strd_num_n, tmpl->rxq.strd_sz_n);
+			tmpl->rxq.log_strd_num, tmpl->rxq.log_strd_sz);
 	} else if (tmpl->rxq.rxseg_n == 1) {
 		MLX5_ASSERT(max_rx_pkt_len <= first_mb_free_size);
 		tmpl->rxq.sges_n = 0;
@@ -1602,15 +1608,15 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 			      " min_stride_sz = %u, max_stride_sz = %u).",
 			      dev->data->port_id, non_scatter_min_mbuf_size,
 			      desc, priv->rxqs_n,
-			      config->mprq.stride_size_n ?
-			      (1U << config->mprq.stride_size_n) :
-			      (1U << mprq_stride_size),
-			      config->mprq.stride_num_n ?
-			      (1U << config->mprq.stride_num_n) :
-			      (1U << mprq_stride_nums),
+			      config->mprq.log_stride_size ?
+			      RTE_BIT32(config->mprq.log_stride_size) :
+			      RTE_BIT32(log_mprq_stride_size),
+			      config->mprq.log_stride_num ?
+			      RTE_BIT32(config->mprq.log_stride_num) :
+			      RTE_BIT32(log_mprq_stride_nums),
 			      config->mprq.min_rxqs_num,
-			      (1U << config->mprq.min_stride_size_n),
-			      (1U << config->mprq.max_stride_size_n));
+			      RTE_BIT32(config->mprq.log_min_stride_size),
+			      RTE_BIT32(config->mprq.log_max_stride_size));
 	DRV_LOG(DEBUG, "port %u maximum number of segments per packet: %u",
 		dev->data->port_id, 1 << tmpl->rxq.sges_n);
 	if (desc % (1 << tmpl->rxq.sges_n)) {
diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index ba98277cf8..34611ceaec 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -465,7 +465,7 @@ rx_queue_count(struct mlx5_rxq_data *rxq)
 	const unsigned int cqe_n = (1 << rxq->cqe_n);
 	const unsigned int sges_n = (1 << rxq->sges_n);
 	const unsigned int elts_n = (1 << rxq->elts_n);
-	const unsigned int strd_n = (1 << rxq->strd_num_n);
+	const unsigned int strd_n = RTE_BIT32(rxq->log_strd_num);
 	const unsigned int cqe_cnt = cqe_n - 1;
 	unsigned int cq_ci, used;
 
@@ -566,8 +566,8 @@ mlx5_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
 	qinfo->conf.offloads = dev->data->dev_conf.rxmode.offloads;
 	qinfo->scattered_rx = dev->data->scattered_rx;
 	qinfo->nb_desc = mlx5_rxq_mprq_enabled(rxq) ?
-		(1 << rxq->elts_n) * (1 << rxq->strd_num_n) :
-		(1 << rxq->elts_n);
+		RTE_BIT32(rxq->elts_n) * RTE_BIT32(rxq->log_strd_num) :
+		RTE_BIT32(rxq->elts_n);
 }
 
 /**
@@ -872,10 +872,10 @@ mlx5_rxq_initialize(struct mlx5_rxq_data *rxq)
 
 			scat = &((volatile struct mlx5_wqe_mprq *)
				rxq->wqes)[i].dseg;
-			addr = (uintptr_t)mlx5_mprq_buf_addr(buf,
-							 1 << rxq->strd_num_n);
-			byte_count = (1 << rxq->strd_sz_n) *
-					(1 << rxq->strd_num_n);
+			addr = (uintptr_t)mlx5_mprq_buf_addr
+					(buf, RTE_BIT32(rxq->log_strd_num));
+			byte_count = RTE_BIT32(rxq->log_strd_sz) *
+					RTE_BIT32(rxq->log_strd_num);
 		} else {
 			struct rte_mbuf *buf = (*rxq->elts)[i];
 
@@ -899,7 +899,7 @@ mlx5_rxq_initialize(struct mlx5_rxq_data *rxq)
 		.ai = 0,
 	};
 	rxq->elts_ci = mlx5_rxq_mprq_enabled(rxq) ?
-		(wqe_n >> rxq->sges_n) * (1 << rxq->strd_num_n) : 0;
+		(wqe_n >> rxq->sges_n) * RTE_BIT32(rxq->log_strd_num) : 0;
 	/* Update doorbell counter. */
 	rxq->rq_ci = wqe_n >> rxq->sges_n;
 	rte_io_wmb();
@@ -1004,7 +1004,7 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec)
 	const uint16_t cqe_n = 1 << rxq->cqe_n;
 	const uint16_t cqe_mask = cqe_n - 1;
 	const uint16_t wqe_n = 1 << rxq->elts_n;
-	const uint16_t strd_n = 1 << rxq->strd_num_n;
+	const uint16_t strd_n = RTE_BIT32(rxq->log_strd_num);
 	struct mlx5_rxq_ctrl *rxq_ctrl =
 			container_of(rxq, struct mlx5_rxq_ctrl, rxq);
 	union {
@@ -1651,8 +1651,8 @@ uint16_t
 mlx5_rx_burst_mprq(void *dpdk_rxq, struct rte_mbuf **pkts, uint16_t pkts_n)
 {
 	struct mlx5_rxq_data *rxq = dpdk_rxq;
-	const uint32_t strd_n = 1 << rxq->strd_num_n;
-	const uint32_t strd_sz = 1 << rxq->strd_sz_n;
+	const uint32_t strd_n = RTE_BIT32(rxq->log_strd_num);
+	const uint32_t strd_sz = RTE_BIT32(rxq->log_strd_sz);
 	const uint32_t cq_mask = (1 << rxq->cqe_n) - 1;
 	const uint32_t wq_mask = (1 << rxq->elts_n) - 1;
 	volatile struct mlx5_cqe *cqe = &(*rxq->cqes)[rxq->cq_ci & cq_mask];
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 7157233e45..237a7faa5c 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -118,8 +118,8 @@ struct mlx5_rxq_data {
 	unsigned int elts_n:4; /* Log 2 of Mbufs. */
 	unsigned int rss_hash:1; /* RSS hash result is enabled. */
 	unsigned int mark:1; /* Marked flow available on the queue. */
-	unsigned int strd_num_n:5; /* Log 2 of the number of stride. */
-	unsigned int strd_sz_n:4; /* Log 2 of stride size. */
+	unsigned int log_strd_num:5; /* Log 2 of the number of stride. */
+	unsigned int log_strd_sz:4; /* Log 2 of stride size. */
 	unsigned int strd_shift_en:1; /* Enable 2bytes shift on a stride. */
 	unsigned int err_state:2; /* enum mlx5_rxq_err_state. */
 	unsigned int strd_scatter_en:1; /* Scattered packets from a stride. */
@@ -747,7 +747,7 @@ mlx5_timestamp_set(struct rte_mbuf *mbuf, int offset,
 static __rte_always_inline void
 mprq_buf_replace(struct mlx5_rxq_data *rxq, uint16_t rq_idx)
 {
-	const uint32_t strd_n = 1 << rxq->strd_num_n;
+	const uint32_t strd_n = RTE_BIT32(rxq->log_strd_num);
 	struct mlx5_mprq_buf *rep = rxq->mprq_repl;
 	volatile struct mlx5_wqe_data_seg *wqe =
 		&((volatile struct mlx5_wqe_mprq *)rxq->wqes)[rq_idx].dseg;
@@ -805,8 +805,8 @@ static __rte_always_inline enum mlx5_rqx_code
 mprq_buf_to_pkt(struct mlx5_rxq_data *rxq, struct rte_mbuf *pkt, uint32_t len,
 		struct mlx5_mprq_buf *buf, uint16_t strd_idx, uint16_t strd_cnt)
 {
-	const uint32_t strd_n = 1 << rxq->strd_num_n;
-	const uint16_t strd_sz = 1 << rxq->strd_sz_n;
+	const uint32_t strd_n = RTE_BIT32(rxq->log_strd_num);
+	const uint16_t strd_sz = RTE_BIT32(rxq->log_strd_sz);
 	const uint16_t strd_shift = MLX5_MPRQ_STRIDE_SHIFT_BYTE *
 				    rxq->strd_shift_en;
 	const int32_t hdrm_overlap =
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index 1536a462dc..d156de4ec1 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -142,7 +142,7 @@ static inline void
 mlx5_rx_mprq_replenish_bulk_mbuf(struct mlx5_rxq_data *rxq)
 {
 	const uint16_t wqe_n = 1 << rxq->elts_n;
-	const uint32_t strd_n = 1 << rxq->strd_num_n;
+	const uint32_t strd_n = RTE_BIT32(rxq->log_strd_num);
 	const uint32_t elts_n = wqe_n * strd_n;
 	const uint32_t wqe_mask = elts_n - 1;
 	uint32_t n = elts_n - (rxq->elts_ci - rxq->rq_pi);
@@ -191,8 +191,8 @@ rxq_copy_mprq_mbuf_v(struct mlx5_rxq_data *rxq,
 {
 	const uint16_t wqe_n = 1 << rxq->elts_n;
 	const uint16_t wqe_mask = wqe_n - 1;
-	const uint16_t strd_sz = 1 << rxq->strd_sz_n;
-	const uint32_t strd_n = 1 << rxq->strd_num_n;
+	const uint16_t strd_sz = RTE_BIT32(rxq->log_strd_sz);
+	const uint32_t strd_n = RTE_BIT32(rxq->log_strd_num);
 	const uint32_t elts_n = wqe_n * strd_n;
 	const uint32_t elts_mask = elts_n - 1;
 	uint32_t elts_idx = rxq->rq_pi & elts_mask;
@@ -422,7 +422,7 @@ rxq_burst_mprq_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
 	const uint16_t q_n = 1 << rxq->cqe_n;
 	const uint16_t q_mask = q_n - 1;
 	const uint16_t wqe_n = 1 << rxq->elts_n;
-	const uint32_t strd_n = 1 << rxq->strd_num_n;
+	const uint32_t strd_n = RTE_BIT32(rxq->log_strd_num);
 	const uint32_t elts_n = wqe_n * strd_n;
 	const uint32_t elts_mask = elts_n - 1;
 	volatile struct mlx5_cqe *cq;
-- 
2.25.1
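
[Editor's note, not part of the patch] A minimal standalone sketch of the
convention the renamed fields encode: each log_* field stores log2 of the
real quantity, and the actual value is recovered with RTE_BIT32() from
rte_bitops.h (i.e. 1 << log). The struct below is a hypothetical stand-in,
not the driver's mlx5_dev_config; the default log values are the ones this
patch defines in mlx5_defs.h.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#include <rte_bitops.h> /* RTE_BIT32(n) expands to UINT32_C(1) << (n) */

/* Hypothetical stand-in for the renamed MPRQ config fields. */
struct mprq_log_params {
	uint32_t log_stride_num;  /* log2 of the number of strides per WQE */
	uint32_t log_stride_size; /* log2 of the stride size in bytes */
};

int
main(void)
{
	/* Defaults from mlx5_defs.h after this patch:
	 * MLX5_MPRQ_DEFAULT_LOG_STRIDE_NUM = 6U,
	 * MLX5_MPRQ_DEFAULT_LOG_STRIDE_SIZE = 11U.
	 */
	struct mprq_log_params p = {
		.log_stride_num = 6,
		.log_stride_size = 11,
	};
	uint32_t strd_n = RTE_BIT32(p.log_stride_num);   /* 2^6  = 64 strides */
	uint32_t strd_sz = RTE_BIT32(p.log_stride_size); /* 2^11 = 2048 bytes */

	printf("strides=%" PRIu32 ", stride size=%" PRIu32 " B, "
	       "WQE buffer=%" PRIu32 " B\n",
	       strd_n, strd_sz, strd_n * strd_sz); /* 64 * 2048 = 131072 */
	return 0;
}

Building the sketch only needs the DPDK headers on the include path
(e.g. via pkg-config --cflags libdpdk).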