From mboxrd@z Thu Jan 1 00:00:00 1970
From: Michael Baum
Cc: Matan Azrad, Thomas Monjalon, Michael Baum, Viacheslav Ovsiienko
Date: Wed, 3 Nov 2021 20:35:11 +0200
Message-ID: <20211103183513.104503-5-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20211103183513.104503-1-michaelba@nvidia.com>
References: <20211103183513.104503-1-michaelba@nvidia.com>
MIME-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
Subject: [dpdk-dev] [PATCH 4/6] common/mlx5: fix doorbell mapping configuration
List-Id: DPDK patches and discussions
Sender: "dev"

From: Michael Baum

The UAR mapping type can be affected by the tx_db_nc devarg, which may
cause the environment variable MLX5_SHUT_UP_BF to be set. Hence, both
the MLX5_SHUT_UP_BF value and the UAR mapping parameter determine the
UAR cache mode. Wrongly, the devarg was taken into account when setting
MLX5_SHUT_UP_BF, but not for the UAR mapping parameter, in all the
drivers except net.

Take the tx_db_nc devarg into account for all the drivers.
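For clarity, the selection-and-fallback behaviour the fix establishes can be sketched as below. This is a hedged illustration, not the driver code: the enum names and values, the `dbnc` parameter, and the mock allocator are invented stand-ins for the real rdma-core and mlx5 definitions.

```c
/* Illustrative stand-ins -- NOT the real rdma-core/mlx5 symbols. */
enum uar_alloc_type { UAR_ALLOC_TYPE_BF = 0, UAR_ALLOC_TYPE_NC = 1 };
enum txdb_mode { TXDB_CACHED = 0, TXDB_NCACHED = 1 };

/*
 * Mock allocator: pretend Write-Combining (BF) mappings are rejected,
 * as can happen in a virtual machine. Records which mapping type was
 * requested first so the selection order can be observed.
 */
static void *mock_devx_alloc_uar(int mapping, int *tried_first)
{
	if (*tried_first < 0)
		*tried_first = mapping;
	return mapping == UAR_ALLOC_TYPE_NC ? (void *)0x1 : (void *)0;
}

/*
 * Sketch of the fixed logic: the tx_db_nc devarg (dbnc) decides which
 * mapping type is tried first, and the other type is the fallback when
 * the first allocation fails.
 */
static void *alloc_uar_sketch(int dbnc, int *tried_first)
{
	int mapping = dbnc == TXDB_NCACHED ? UAR_ALLOC_TYPE_NC
					   : UAR_ALLOC_TYPE_BF;
	void *uar = mock_devx_alloc_uar(mapping, tried_first);

	if (uar == (void *)0) {
		/* First choice failed: retry with the other mapping type. */
		mapping = mapping == UAR_ALLOC_TYPE_BF ? UAR_ALLOC_TYPE_NC
						       : UAR_ALLOC_TYPE_BF;
		uar = mock_devx_alloc_uar(mapping, tried_first);
	}
	return uar;
}
```

With the mock above, a cached (default) configuration first requests a BF mapping and falls back to NC, while tx_db_nc=1 requests NC directly — which is exactly the asymmetry the buggy drivers lost by ignoring the devarg.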
Fixes: ca1418ce3910 ("common/mlx5: share device context object")
Cc: stable@dpdk.org

Signed-off-by: Michael Baum
Reviewed-by: Viacheslav Ovsiienko
Acked-by: Matan Azrad
---
 drivers/common/mlx5/mlx5_common.c     | 52 ++++++++++++++-------------
 drivers/common/mlx5/mlx5_common.h     |  5 +--
 drivers/compress/mlx5/mlx5_compress.c |  2 +-
 drivers/crypto/mlx5/mlx5_crypto.c     |  2 +-
 drivers/regex/mlx5/mlx5_regex.c       |  2 +-
 drivers/vdpa/mlx5/mlx5_vdpa_event.c   |  2 +-
 6 files changed, 35 insertions(+), 30 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common.c b/drivers/common/mlx5/mlx5_common.c
index 7f92e3b2cc..7bdc550b36 100644
--- a/drivers/common/mlx5/mlx5_common.c
+++ b/drivers/common/mlx5/mlx5_common.c
@@ -934,30 +934,25 @@ RTE_INIT_PRIO(mlx5_is_haswell_broadwell_cpu, LOG)
 
 /**
  * Allocate the User Access Region with DevX on specified device.
+ * This routine handles the following UAR allocation issues:
  *
- * @param [in] ctx
- *   Infiniband device context to perform allocation on.
- * @param [in] mapping
- *   MLX5DV_UAR_ALLOC_TYPE_BF - allocate as cached memory with write-combining
- *                              attributes (if supported by the host), the
- *                              writes to the UAR registers must be followed
- *                              by write memory barrier.
- *   MLX5DV_UAR_ALLOC_TYPE_NC - allocate as non-cached memory, all writes are
- *                              promoted to the registers immediately, no
- *                              memory barriers needed.
- *   mapping < 0 - the first attempt is performed with MLX5DV_UAR_ALLOC_TYPE_NC,
- *                 if this fails the next attempt with MLX5DV_UAR_ALLOC_TYPE_BF
- *                 is performed. The drivers specifying negative values should
- *                 always provide the write memory barrier operation after UAR
- *                 register writings.
- *   If there is no definitions for the MLX5DV_UAR_ALLOC_TYPE_xx (older rdma
- *   library headers), the caller can specify 0.
+ *  - tries to allocate the UAR with the most appropriate memory mapping
+ *    type from the ones supported by the host.
+ *
+ *  - tries to allocate the UAR with non-NULL base address. OFED 5.0.x and
+ *    Upstream rdma_core before v29 returned the NULL as UAR base address
+ *    if UAR was not the first object in the UAR page.
+ *    It caused the PMD failure and we should try to get another UAR till
+ *    we get the first one with non-NULL base address returned.
+ *
+ * @param [in] cdev
+ *   Pointer to mlx5 device structure to perform allocation on its context.
  *
  * @return
  *   UAR object pointer on success, NULL otherwise and rte_errno is set.
  */
 void *
-mlx5_devx_alloc_uar(void *ctx, int mapping)
+mlx5_devx_alloc_uar(struct mlx5_common_device *cdev)
 {
 	void *uar;
 	uint32_t retry, uar_mapping;
@@ -966,26 +961,35 @@ mlx5_devx_alloc_uar(void *ctx, int mapping)
 	for (retry = 0; retry < MLX5_ALLOC_UAR_RETRY; ++retry) {
 #ifdef MLX5DV_UAR_ALLOC_TYPE_NC
 		/* Control the mapping type according to the settings. */
-		uar_mapping = (mapping < 0) ?
-			      MLX5DV_UAR_ALLOC_TYPE_NC : mapping;
+		uar_mapping = (cdev->config.dbnc == MLX5_TXDB_NCACHED) ?
+			      MLX5DV_UAR_ALLOC_TYPE_NC :
+			      MLX5DV_UAR_ALLOC_TYPE_BF;
 #else
 		/*
 		 * It seems we have no way to control the memory mapping type
 		 * for the UAR, the default "Write-Combining" type is supposed.
 		 */
 		uar_mapping = 0;
-		RTE_SET_USED(mapping);
 #endif
-		uar = mlx5_glue->devx_alloc_uar(ctx, uar_mapping);
+		uar = mlx5_glue->devx_alloc_uar(cdev->ctx, uar_mapping);
 #ifdef MLX5DV_UAR_ALLOC_TYPE_NC
-		if (!uar && mapping < 0) {
+		if (!uar && uar_mapping == MLX5DV_UAR_ALLOC_TYPE_BF) {
+			/*
+			 * In some environments like virtual machine the
+			 * Write Combining mapped might be not supported and
+			 * UAR allocation fails. We tried "Non-Cached" mapping
+			 * for the case.
+			 */
+			DRV_LOG(DEBUG, "Failed to allocate DevX UAR (BF)");
+			uar_mapping = MLX5DV_UAR_ALLOC_TYPE_NC;
+			uar = mlx5_glue->devx_alloc_uar(cdev->ctx, uar_mapping);
+		} else if (!uar && uar_mapping == MLX5DV_UAR_ALLOC_TYPE_NC) {
 			/*
 			 * If Verbs/kernel does not support "Non-Cached"
 			 * try the "Write-Combining".
 			 */
 			DRV_LOG(DEBUG, "Failed to allocate DevX UAR (NC)");
 			uar_mapping = MLX5DV_UAR_ALLOC_TYPE_BF;
-			uar = mlx5_glue->devx_alloc_uar(ctx, uar_mapping);
+			uar = mlx5_glue->devx_alloc_uar(cdev->ctx, uar_mapping);
 		}
 #endif
 		if (!uar) {
diff --git a/drivers/common/mlx5/mlx5_common.h b/drivers/common/mlx5/mlx5_common.h
index 744c6a72b3..7febae9cdf 100644
--- a/drivers/common/mlx5/mlx5_common.h
+++ b/drivers/common/mlx5/mlx5_common.h
@@ -284,8 +284,6 @@ __rte_internal
 void mlx5_translate_port_name(const char *port_name_in,
 			      struct mlx5_switch_info *port_info_out);
 void mlx5_glue_constructor(void);
-__rte_internal
-void *mlx5_devx_alloc_uar(void *ctx, int mapping);
 extern uint8_t haswell_broadwell_cpu;
 
 __rte_internal
@@ -417,6 +415,9 @@ void
 mlx5_dev_mempool_unregister(struct mlx5_common_device *cdev,
 			    struct rte_mempool *mp);
 
+__rte_internal
+void *mlx5_devx_alloc_uar(struct mlx5_common_device *cdev);
+
 /* mlx5_common_mr.c */
 
 __rte_internal
diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c
index c4081c5f7d..df60b05ab3 100644
--- a/drivers/compress/mlx5/mlx5_compress.c
+++ b/drivers/compress/mlx5/mlx5_compress.c
@@ -690,7 +690,7 @@ mlx5_compress_uar_release(struct mlx5_compress_priv *priv)
 static int
 mlx5_compress_uar_prepare(struct mlx5_compress_priv *priv)
 {
-	priv->uar = mlx5_devx_alloc_uar(priv->cdev->ctx, -1);
+	priv->uar = mlx5_devx_alloc_uar(priv->cdev);
 	if (priv->uar == NULL ||
 	    mlx5_os_get_devx_uar_reg_addr(priv->uar) == NULL) {
 		rte_errno = errno;
diff --git a/drivers/crypto/mlx5/mlx5_crypto.c b/drivers/crypto/mlx5/mlx5_crypto.c
index f9fd0d498e..33d797a6a0 100644
--- a/drivers/crypto/mlx5/mlx5_crypto.c
+++ b/drivers/crypto/mlx5/mlx5_crypto.c
@@ -731,7 +731,7 @@ mlx5_crypto_uar_release(struct mlx5_crypto_priv *priv)
 static int
 mlx5_crypto_uar_prepare(struct mlx5_crypto_priv *priv)
 {
-	priv->uar = mlx5_devx_alloc_uar(priv->cdev->ctx, -1);
+	priv->uar = mlx5_devx_alloc_uar(priv->cdev);
 	if (priv->uar)
 		priv->uar_addr = mlx5_os_get_devx_uar_reg_addr(priv->uar);
 	if (priv->uar == NULL || priv->uar_addr == NULL) {
diff --git a/drivers/regex/mlx5/mlx5_regex.c b/drivers/regex/mlx5/mlx5_regex.c
index b8a513e1fa..d632252794 100644
--- a/drivers/regex/mlx5/mlx5_regex.c
+++ b/drivers/regex/mlx5/mlx5_regex.c
@@ -138,7 +138,7 @@ mlx5_regex_dev_probe(struct mlx5_common_device *cdev)
 	 * registers writings, it is safe to allocate UAR with any
 	 * memory mapping type.
 	 */
-	priv->uar = mlx5_devx_alloc_uar(priv->cdev->ctx, -1);
+	priv->uar = mlx5_devx_alloc_uar(priv->cdev);
 	if (!priv->uar) {
 		DRV_LOG(ERR, "can't allocate uar.");
 		rte_errno = ENOMEM;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 042d22777f..21738bdfff 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -61,7 +61,7 @@ mlx5_vdpa_event_qp_global_prepare(struct mlx5_vdpa_priv *priv)
 	 * registers writings, it is safe to allocate UAR with any
 	 * memory mapping type.
 	 */
-	priv->uar = mlx5_devx_alloc_uar(priv->cdev->ctx, -1);
+	priv->uar = mlx5_devx_alloc_uar(priv->cdev);
 	if (!priv->uar) {
 		rte_errno = errno;
 		DRV_LOG(ERR, "Failed to allocate UAR.");
-- 
2.25.1