From: Jiawei Wang
To: , , , "Matan Azrad"
CC: ,
Subject: [RFC 5/5] drivers: enhance the Tx queue affinity
Date: Wed, 21 Dec 2022 12:29:34 +0200
Message-ID: <20221221102934.13822-6-jiaweiw@nvidia.com>
X-Mailer: git-send-email 2.18.1
In-Reply-To: <20221221102934.13822-1-jiaweiw@nvidia.com>
References: <20221221102934.13822-1-jiaweiw@nvidia.com>
List-Id: DPDK patches and discussions

The previous patch added Tx affinity configuration to the Tx queue setup
API, allowing an affinity value to be set on each queue.

This patch updates TIS creation to use the tx_affinity value of the Tx
queue: TIS index 1 goes to port 1, TIS index 2 goes to port 2, and
TIS index 0 is reserved for the default HWS hash mode.

Signed-off-by: Jiawei Wang
---
 drivers/common/mlx5/mlx5_prm.h |  8 -------
 drivers/net/mlx5/mlx5.c        | 43 +++++++++++++++-------------------
 drivers/net/mlx5/mlx5_devx.c   | 21 ++++++++++-------
 drivers/net/mlx5/mlx5_tx.h     |  1 +
 drivers/net/mlx5/mlx5_txq.c    |  9 +++++++
 5 files changed, 42 insertions(+), 40 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 9098b0fe0b..778c97b059 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -2362,14 +2362,6 @@ struct mlx5_ifc_query_nic_vport_context_in_bits {
 	u8 reserved_at_68[0x18];
 };
 
-/*
- * lag_tx_port_affinity: 0 auto-selection, 1 PF1, 2 PF2 vice versa.
- * Each TIS binds to one PF by setting lag_tx_port_affinity (>0).
- * Once LAG enabled, we create multiple TISs and bind each one to
- * different PFs, then TIS[i] gets affinity i+1 and goes to PF i+1.
- */
-#define MLX5_IFC_LAG_MAP_TIS_AFFINITY(index, num) ((num) ? \
-	(index) % (num) + 1 : 0)
 struct mlx5_ifc_tisc_bits {
 	u8 strict_lag_tx_port_affinity[0x1];
 	u8 reserved_at_1[0x3];
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index fe9897f83d..e547fa0219 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1172,9 +1172,9 @@ mlx5_dev_ctx_shared_mempool_subscribe(struct rte_eth_dev *dev)
 static int
 mlx5_setup_tis(struct mlx5_dev_ctx_shared *sh)
 {
-	int i;
 	struct mlx5_devx_lag_context lag_ctx = { 0 };
 	struct mlx5_devx_tis_attr tis_attr = { 0 };
+	int i;
 
 	tis_attr.transport_domain = sh->td->id;
 	if (sh->bond.n_port) {
@@ -1188,35 +1188,30 @@ mlx5_setup_tis(struct mlx5_dev_ctx_shared *sh)
 			DRV_LOG(ERR, "Failed to query lag affinity.");
 			return -1;
 		}
-		if (sh->lag.affinity_mode == MLX5_LAG_MODE_TIS) {
-			for (i = 0; i < sh->bond.n_port; i++) {
-				tis_attr.lag_tx_port_affinity =
-					MLX5_IFC_LAG_MAP_TIS_AFFINITY(i,
-						sh->bond.n_port);
-				sh->tis[i] = mlx5_devx_cmd_create_tis(sh->cdev->ctx,
-						&tis_attr);
-				if (!sh->tis[i]) {
-					DRV_LOG(ERR, "Failed to TIS %d/%d for bonding device"
-						" %s.", i, sh->bond.n_port,
-						sh->ibdev_name);
-					return -1;
-				}
-			}
+		if (sh->lag.affinity_mode == MLX5_LAG_MODE_TIS)
 			DRV_LOG(DEBUG, "LAG number of ports : %d, affinity_1 & 2 : pf%d & %d.\n",
 				sh->bond.n_port, lag_ctx.tx_remap_affinity_1,
 				lag_ctx.tx_remap_affinity_2);
-			return 0;
-		}
-		if (sh->lag.affinity_mode == MLX5_LAG_MODE_HASH)
+		else if (sh->lag.affinity_mode == MLX5_LAG_MODE_HASH)
 			DRV_LOG(INFO, "Device %s enabled HW hash based LAG.",
 					sh->ibdev_name);
 	}
-	tis_attr.lag_tx_port_affinity = 0;
-	sh->tis[0] = mlx5_devx_cmd_create_tis(sh->cdev->ctx, &tis_attr);
-	if (!sh->tis[0]) {
-		DRV_LOG(ERR, "Failed to TIS 0 for bonding device"
-			" %s.", sh->ibdev_name);
-		return -1;
+	for (i = 0; i <= sh->bond.n_port; i++) {
+		/*
+		 * lag_tx_port_affinity: 0 auto-selection, 1 PF1, 2 PF2 vice versa.
+		 * Each TIS binds to one PF by setting lag_tx_port_affinity (> 0).
+		 * Once LAG enabled, we create multiple TISs and bind each one to
+		 * different PFs, then TIS[i+1] gets affinity i+1 and goes to PF i+1.
+		 * TIS[0] is reserved for HW Hash mode.
+		 */
+		tis_attr.lag_tx_port_affinity = i;
+		sh->tis[i] = mlx5_devx_cmd_create_tis(sh->cdev->ctx, &tis_attr);
+		if (!sh->tis[i]) {
+			DRV_LOG(ERR, "Failed to create TIS %d/%d for [bonding] device"
+				" %s.", i, sh->bond.n_port,
+				sh->ibdev_name);
+			return -1;
+		}
 	}
 	return 0;
 }
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index f6e1943fd7..6da6e9c2ee 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -1191,16 +1191,21 @@ mlx5_get_txq_tis_num(struct rte_eth_dev *dev, uint16_t queue_idx)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	int tis_idx;
+	struct mlx5_txq_data *txq_data = (*priv->txqs)[queue_idx];
 
-	if (priv->sh->bond.n_port && priv->sh->lag.affinity_mode ==
-			MLX5_LAG_MODE_TIS) {
-		tis_idx = (priv->lag_affinity_idx + queue_idx) %
-			priv->sh->bond.n_port;
-		DRV_LOG(INFO, "port %d txq %d gets affinity %d and maps to PF %d.",
-			dev->data->port_id, queue_idx, tis_idx + 1,
-			priv->sh->lag.tx_remap_affinity[tis_idx]);
+	if (txq_data->tx_affinity) {
+		tis_idx = txq_data->tx_affinity;
 	} else {
-		tis_idx = 0;
+		if (priv->sh->bond.n_port && priv->sh->lag.affinity_mode ==
+			MLX5_LAG_MODE_TIS) {
+			tis_idx = (priv->lag_affinity_idx + queue_idx) %
+				priv->sh->bond.n_port + 1;
+			DRV_LOG(INFO, "port %d txq %d gets affinity %d and maps to PF %d.",
+				dev->data->port_id, queue_idx, tis_idx,
+				priv->sh->lag.tx_remap_affinity[tis_idx - 1]);
+		} else {
+			tis_idx = 0;
+		}
 	}
 	MLX5_ASSERT(priv->sh->tis[tis_idx]);
 	return priv->sh->tis[tis_idx]->id;
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index a44050a1ce..394e9b8d4f 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -144,6 +144,7 @@ struct mlx5_txq_data {
 	uint16_t inlen_send; /* Ordinary send data inline size. */
 	uint16_t inlen_empw; /* eMPW max packet size to inline. */
 	uint16_t inlen_mode; /* Minimal data length to inline. */
+	uint8_t tx_affinity; /* TXQ affinity configuration. */
 	uint32_t qp_num_8s; /* QP number shifted by 8. */
 	uint64_t offloads; /* Offloads for Tx Queue. */
 	struct mlx5_mr_ctrl mr_ctrl; /* MR control descriptor. */
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index 7ef7c5f43e..b96a45060f 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -392,9 +392,17 @@ mlx5_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 		container_of(txq, struct mlx5_txq_ctrl, txq);
 	int res;
 
+	if (conf->tx_affinity > priv->num_lag_ports) {
+		rte_errno = EINVAL;
+		DRV_LOG(ERR, "port %u unable to setup Tx queue index %u"
+			" affinity is %u exceed the maximum %u", dev->data->port_id,
+			idx, conf->tx_affinity, priv->num_lag_ports);
+		return -rte_errno;
+	}
 	res = mlx5_tx_queue_pre_setup(dev, idx, &desc);
 	if (res)
 		return res;
+
 	txq_ctrl = mlx5_txq_new(dev, idx, desc, socket, conf);
 	if (!txq_ctrl) {
 		DRV_LOG(ERR, "port %u unable to allocate queue index %u",
@@ -1095,6 +1103,7 @@ mlx5_txq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
 	tmpl->txq.elts_m = desc - 1;
 	tmpl->txq.port_id = dev->data->port_id;
 	tmpl->txq.idx = idx;
+	tmpl->txq.tx_affinity = conf->tx_affinity;
 	txq_set_params(tmpl);
 	if (txq_adjust_params(tmpl))
 		goto error;
-- 
2.18.1
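
For context, here is a minimal usage sketch of how an application could consume
the per-queue affinity that this series exposes. It assumes the tx_affinity
field added to struct rte_eth_txconf by the earlier patches of this RFC; the
helper name and the round-robin spreading policy are illustrative only and are
not part of this patch.

#include <rte_ethdev.h>

/*
 * Sketch: spread the Tx queues of a bonding port over its LAG member
 * ports. Affinity 1..n_lag_ports pins a queue to one member port;
 * 0 keeps the default HW hash behaviour (TIS index 0).
 */
static int
setup_tx_queues_with_affinity(uint16_t port_id, uint16_t nb_txq,
			      uint16_t nb_desc, uint8_t n_lag_ports)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_txconf txconf;
	uint16_t q;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;
	/* Start from the driver's default Tx queue configuration. */
	txconf = dev_info.default_txconf;
	for (q = 0; q < nb_txq; q++) {
		/* Round-robin queues over affinities 1..n_lag_ports. */
		txconf.tx_affinity = n_lag_ports ? (q % n_lag_ports) + 1 : 0;
		ret = rte_eth_tx_queue_setup(port_id, q, nb_desc,
					     rte_eth_dev_socket_id(port_id),
					     &txconf);
		if (ret != 0)
			return ret;
	}
	return 0;
}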