From mboxrd@z Thu Jan 1 00:00:00 1970
From: Ophir Munk
To: dev@dpdk.org, Matan Azrad, Raslan Darawsheh
Cc: Ophir Munk
Date: Wed, 3 Jun 2020 15:05:58 +0000
Message-Id: <20200603150602.4686-5-ophirmu@mellanox.com>
X-Mailer: git-send-email 2.8.4
In-Reply-To: <20200603150602.4686-1-ophirmu@mellanox.com>
References: <20200603150602.4686-1-ophirmu@mellanox.com>
Content-Type: text/plain
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH v1 4/8] net/mlx5: remove attributes dependency on ibv and dv
List-Id: DPDK patches and discussions

Define 'struct mlx5_dev_attr', which is ibv and dv independent. It contains
the attributes that were originally contained in 'struct ibv_device_attr_ex'
and 'struct mlx5dv_context dv_attr'. Add a new API mlx5_os_get_dev_attr()
which fills in the newly defined struct.
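For reference, a caller is expected to use the new API roughly as follows.
This is an illustrative sketch only (not part of the patch): it borrows 'sh'
(a 'struct mlx5_dev_ctx_shared *') and the field names from the diff below,
and abbreviates error handling.

	int err;

	/* Fill the PMD-private attributes; no ibv/dv structs are exposed here. */
	err = mlx5_os_get_dev_attr(sh->ctx, &sh->device_attr);
	if (err)
		goto error;
	/* Consumers now read plain fields instead of Verbs sub-structs. */
	DRV_LOG(DEBUG, "max_qp_wr is %d", sh->device_attr.max_qp_wr);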
Signed-off-by: Ophir Munk
Acked-by: Matan Azrad
---
 drivers/net/mlx5/linux/mlx5_os.c | 63 ++++++++++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5.c          | 12 ++++----
 drivers/net/mlx5/mlx5.h          | 27 +++++++++++++++--
 drivers/net/mlx5/mlx5_ethdev.c   |  6 ++--
 drivers/net/mlx5/mlx5_rxq.c      |  4 +--
 drivers/net/mlx5/mlx5_txq.c      | 18 ++++++------
 6 files changed, 108 insertions(+), 22 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 9443239..85dcf49 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -85,3 +85,66 @@ mlx5_os_get_ctx_device_path(void *ctx)
 
 	return ((struct ibv_context *)ctx)->device->ibdev_path;
 }
+
+/**
+ * Get mlx5 device attributes. The glue function query_device_ex() is called
+ * with an out parameter of type 'struct ibv_device_attr_ex *'. The mlx5
+ * device attributes are then filled in from the glue out parameter.
+ *
+ * @param ctx
+ *   Pointer to ibv context.
+ *
+ * @param device_attr
+ *   Pointer to mlx5 device attributes.
+ *
+ * @return
+ *   0 on success, a non-zero error number otherwise.
+ */
+int
+mlx5_os_get_dev_attr(void *ctx, struct mlx5_dev_attr *device_attr)
+{
+	int err;
+	struct ibv_device_attr_ex attr_ex;
+	memset(device_attr, 0, sizeof(*device_attr));
+	err = mlx5_glue->query_device_ex(ctx, NULL, &attr_ex);
+	if (err)
+		return err;
+
+	device_attr->device_cap_flags_ex = attr_ex.device_cap_flags_ex;
+	device_attr->max_qp_wr = attr_ex.orig_attr.max_qp_wr;
+	device_attr->max_sge = attr_ex.orig_attr.max_sge;
+	device_attr->max_cq = attr_ex.orig_attr.max_cq;
+	device_attr->max_qp = attr_ex.orig_attr.max_qp;
+	device_attr->raw_packet_caps = attr_ex.raw_packet_caps;
+	device_attr->max_rwq_indirection_table_size =
+		attr_ex.rss_caps.max_rwq_indirection_table_size;
+	device_attr->max_tso = attr_ex.tso_caps.max_tso;
+	device_attr->tso_supported_qpts = attr_ex.tso_caps.supported_qpts;
+
+	struct mlx5dv_context dv_attr = { .comp_mask = 0 };
+	err = mlx5_glue->dv_query_device(ctx, &dv_attr);
+	if (err)
+		return err;
+
+	device_attr->flags = dv_attr.flags;
+	device_attr->comp_mask = dv_attr.comp_mask;
+#ifdef HAVE_IBV_MLX5_MOD_SWP
+	device_attr->sw_parsing_offloads =
+		dv_attr.sw_parsing_caps.sw_parsing_offloads;
+#endif
+	device_attr->min_single_stride_log_num_of_bytes =
+		dv_attr.striding_rq_caps.min_single_stride_log_num_of_bytes;
+	device_attr->max_single_stride_log_num_of_bytes =
+		dv_attr.striding_rq_caps.max_single_stride_log_num_of_bytes;
+	device_attr->min_single_wqe_log_num_of_strides =
+		dv_attr.striding_rq_caps.min_single_wqe_log_num_of_strides;
+	device_attr->max_single_wqe_log_num_of_strides =
+		dv_attr.striding_rq_caps.max_single_wqe_log_num_of_strides;
+	device_attr->stride_supported_qpts =
+		dv_attr.striding_rq_caps.supported_qpts;
+#ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
+	device_attr->tunnel_offloads_caps = dv_attr.tunnel_offloads_caps;
+#endif
+
+	return err;
+}
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 95a34d1..0fa8742 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -825,9 +825,9 @@ mlx5_alloc_shared_ibctx(const struct mlx5_dev_spawn_data *spawn,
 			goto error;
 		DRV_LOG(DEBUG, "DevX is NOT supported");
 	}
-	err = mlx5_glue->query_device_ex(sh->ctx, NULL, &sh->device_attr);
+	err = mlx5_os_get_dev_attr(sh->ctx, &sh->device_attr);
 	if (err) {
-		DRV_LOG(DEBUG, "ibv_query_device_ex() failed");
+		DRV_LOG(DEBUG, "mlx5_os_get_dev_attr() failed");
 		goto error;
 	}
 	sh->refcnt = 1;
@@ -2799,7 +2799,7 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	}
 #endif
 	config.ind_table_max_size =
-		sh->device_attr.rss_caps.max_rwq_indirection_table_size;
+		sh->device_attr.max_rwq_indirection_table_size;
 	/*
 	 * Remove this check once DPDK supports larger/variable
 	 * indirection tables.
 	 */
@@ -2828,11 +2828,11 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 	} else if (config.hw_padding) {
 		DRV_LOG(DEBUG, "Rx end alignment padding is enabled");
 	}
-	config.tso = (sh->device_attr.tso_caps.max_tso > 0 &&
-		      (sh->device_attr.tso_caps.supported_qpts &
+	config.tso = (sh->device_attr.max_tso > 0 &&
+		      (sh->device_attr.tso_supported_qpts &
 		       (1 << IBV_QPT_RAW_PACKET)));
 	if (config.tso)
-		config.tso_max_payload_sz = sh->device_attr.tso_caps.max_tso;
+		config.tso_max_payload_sz = sh->device_attr.max_tso;
 	/*
 	 * MPW is disabled by default, while the Enhanced MPW is enabled
 	 * by default.
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 30678aa..478ebef 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -43,7 +43,6 @@
 #include "mlx5_utils.h"
 #include "mlx5_autoconf.h"
 
-
 enum mlx5_ipool_index {
 #ifdef HAVE_IBV_FLOW_DV_SUPPORT
 	MLX5_IPOOL_DECAP_ENCAP = 0, /* Pool for encap/decap resource. */
@@ -72,6 +71,29 @@ enum mlx5_reclaim_mem_mode {
 	MLX5_RCM_AGGR, /* Reclaim PMD and rdma-core level. */
 };
 
+/* Device attributes used in mlx5 PMD */
+struct mlx5_dev_attr {
+	uint64_t device_cap_flags_ex;
+	int max_qp_wr;
+	int max_sge;
+	int max_cq;
+	int max_qp;
+	uint32_t raw_packet_caps;
+	uint32_t max_rwq_indirection_table_size;
+	uint32_t max_tso;
+	uint32_t tso_supported_qpts;
+	uint64_t flags;
+	uint64_t comp_mask;
+	uint32_t sw_parsing_offloads;
+	uint32_t min_single_stride_log_num_of_bytes;
+	uint32_t max_single_stride_log_num_of_bytes;
+	uint32_t min_single_wqe_log_num_of_strides;
+	uint32_t max_single_wqe_log_num_of_strides;
+	uint32_t stride_supported_qpts;
+	uint32_t tunnel_offloads_caps;
+	char fw_ver[64];
+};
+
 /** Key string for IPC. */
 #define MLX5_MP_NAME "net_mlx5_mp"
 
@@ -499,7 +521,7 @@ struct mlx5_dev_ctx_shared {
 	uint32_t tdn; /* Transport Domain number. */
 	char ibdev_name[IBV_SYSFS_NAME_MAX]; /* IB device name. */
 	char ibdev_path[IBV_SYSFS_PATH_MAX]; /* IB device path for secondary */
-	struct ibv_device_attr_ex device_attr; /* Device properties. */
+	struct mlx5_dev_attr device_attr; /* Device properties. */
 	LIST_ENTRY(mlx5_dev_ctx_shared) mem_event_cb;
 	/**< Called by memory event callback. */
 	struct mlx5_mr_share_cache share_cache;
@@ -856,5 +878,6 @@ void mlx5_flow_meter_detach(struct mlx5_flow_meter *fm);
 /* mlx5_os.c */
 const char *mlx5_os_get_ctx_device_name(void *ctx);
 const char *mlx5_os_get_ctx_device_path(void *ctx);
+int mlx5_os_get_dev_attr(void *ctx, struct mlx5_dev_attr *dev_attr);
 
 #endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index 6919911..6b8b303 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -626,8 +626,8 @@ mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
 	 * Since we need one CQ per QP, the limit is the minimum number
 	 * between the two values.
 	 */
-	max = RTE_MIN(priv->sh->device_attr.orig_attr.max_cq,
-		      priv->sh->device_attr.orig_attr.max_qp);
+	max = RTE_MIN(priv->sh->device_attr.max_cq,
+		      priv->sh->device_attr.max_qp);
 	/* max_rx_queues is uint16_t. */
 	max = RTE_MIN(max, (unsigned int)UINT16_MAX);
 	info->max_rx_queues = max;
@@ -736,7 +736,7 @@ mlx5_read_clock(struct rte_eth_dev *dev, uint64_t *clock)
 int
 mlx5_fw_version_get(struct rte_eth_dev *dev, char *fw_ver, size_t fw_size)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
-	struct ibv_device_attr *attr = &priv->sh->device_attr.orig_attr;
+	struct mlx5_dev_attr *attr = &priv->sh->device_attr;
 	size_t size = strnlen(attr->fw_ver, sizeof(attr->fw_ver)) + 1;
 
 	if (fw_size < size)
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 0b0abe1..f018553 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1405,9 +1405,9 @@ mlx5_rxq_obj_new(struct rte_eth_dev *dev, uint16_t idx,
 		goto error;
 	}
 	DRV_LOG(DEBUG, "port %u device_attr.max_qp_wr is %d",
-		dev->data->port_id, priv->sh->device_attr.orig_attr.max_qp_wr);
+		dev->data->port_id, priv->sh->device_attr.max_qp_wr);
 	DRV_LOG(DEBUG, "port %u device_attr.max_sge is %d",
-		dev->data->port_id, priv->sh->device_attr.orig_attr.max_sge);
+		dev->data->port_id, priv->sh->device_attr.max_sge);
 	/* Allocate door-bell for types created with DevX. */
 	if (tmpl->type != MLX5_RXQ_OBJ_TYPE_IBV) {
 		struct mlx5_devx_dbr_page *dbr_page;
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 2047a9a..f7b548f 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -645,9 +645,9 @@ mlx5_txq_obj_new(struct rte_eth_dev *dev, uint16_t idx,
 		.cap = {
 			/* Max number of outstanding WRs. */
 			.max_send_wr =
-				((priv->sh->device_attr.orig_attr.max_qp_wr <
+				((priv->sh->device_attr.max_qp_wr <
 				  desc) ?
-				 priv->sh->device_attr.orig_attr.max_qp_wr :
+				 priv->sh->device_attr.max_qp_wr :
 				 desc),
 			/*
 			 * Max number of scatter/gather elements in a WR,
@@ -948,7 +948,7 @@ txq_calc_inline_max(struct mlx5_txq_ctrl *txq_ctrl)
 	struct mlx5_priv *priv = txq_ctrl->priv;
 	unsigned int wqe_size;
 
-	wqe_size = priv->sh->device_attr.orig_attr.max_qp_wr / desc;
+	wqe_size = priv->sh->device_attr.max_qp_wr / desc;
 	if (!wqe_size)
 		return 0;
 	/*
@@ -1203,7 +1203,7 @@ txq_adjust_params(struct mlx5_txq_ctrl *txq_ctrl)
 			" Tx queue size (%d)",
 			txq_ctrl->txq.inlen_mode, max_inline,
 			priv->dev_data->port_id,
-			priv->sh->device_attr.orig_attr.max_qp_wr);
+			priv->sh->device_attr.max_qp_wr);
 		goto error;
 	}
 	if (txq_ctrl->txq.inlen_send > max_inline &&
@@ -1215,7 +1215,7 @@ txq_adjust_params(struct mlx5_txq_ctrl *txq_ctrl)
 			" Tx queue size (%d)",
 			txq_ctrl->txq.inlen_send, max_inline,
 			priv->dev_data->port_id,
-			priv->sh->device_attr.orig_attr.max_qp_wr);
+			priv->sh->device_attr.max_qp_wr);
 		goto error;
 	}
 	if (txq_ctrl->txq.inlen_empw > max_inline &&
@@ -1227,7 +1227,7 @@ txq_adjust_params(struct mlx5_txq_ctrl *txq_ctrl)
 			" Tx queue size (%d)",
 			txq_ctrl->txq.inlen_empw, max_inline,
 			priv->dev_data->port_id,
-			priv->sh->device_attr.orig_attr.max_qp_wr);
+			priv->sh->device_attr.max_qp_wr);
 		goto error;
 	}
 	if (txq_ctrl->txq.tso_en && max_inline < MLX5_MAX_TSO_HEADER) {
@@ -1237,7 +1237,7 @@ txq_adjust_params(struct mlx5_txq_ctrl *txq_ctrl)
 			" Tx queue size (%d)",
 			MLX5_MAX_TSO_HEADER, max_inline,
 			priv->dev_data->port_id,
-			priv->sh->device_attr.orig_attr.max_qp_wr);
+			priv->sh->device_attr.max_qp_wr);
 		goto error;
 	}
 	if (txq_ctrl->txq.inlen_send > max_inline) {
@@ -1322,12 +1322,12 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	if (txq_adjust_params(tmpl))
 		goto error;
 	if (txq_calc_wqebb_cnt(tmpl) >
-	    priv->sh->device_attr.orig_attr.max_qp_wr) {
+	    priv->sh->device_attr.max_qp_wr) {
 		DRV_LOG(ERR,
 			"port %u Tx WQEBB count (%d) exceeds the limit (%d),"
 			" try smaller queue size",
 			dev->data->port_id, txq_calc_wqebb_cnt(tmpl),
-			priv->sh->device_attr.orig_attr.max_qp_wr);
+			priv->sh->device_attr.max_qp_wr);
 		rte_errno = ENOMEM;
 		goto error;
 	}
-- 
2.8.4