From mboxrd@z Thu Jan  1 00:00:00 1970
From: Jiawei Wang <jiaweiw@nvidia.com>
To: Aman Singh, Yuying Zhang, Ferruh Yigit, Andrew Rybchenko
Cc: dev@dpdk.org
Subject: [PATCH v3 1/2] ethdev: introduce the PHY affinity field in Tx queue API
Date: Fri, 3 Feb 2023 07:07:15 +0200
Message-ID: <20230203050717.46914-2-jiaweiw@nvidia.com>
In-Reply-To: <20230203050717.46914-1-jiaweiw@nvidia.com>
References: <20221221102934.13822-1-jiaweiw@nvidia.com> <20230203050717.46914-1-jiaweiw@nvidia.com>
X-Mailer: git-send-email 2.18.1
MIME-Version: 1.0
Content-Type: text/plain
List-Id: DPDK patches and discussions

For the case of multiple hardware ports connected to a single DPDK port (mhpsdp), the previous patch introduced a new rte_flow item to match the physical affinity of received packets. This patch adds a tx_phy_affinity setting to the Tx queue API; the affinity value selects the hardware port that packets are sent through. Value 0 means no affinity: traffic may be routed between the different physical ports.
Also add nb_phy_ports to the device info; a value greater than 0 reports the number of physical ports connected to the DPDK port. The new tx_phy_affinity field goes into a padding hole of struct rte_eth_txconf, so the size of rte_eth_txconf is unchanged; a suppression rule for this structure change is added to the ABI check file.

Add the testpmd command line:

	testpmd> port config (port_id) txq (queue_id) phy_affinity (value)

For example, with two hardware ports 0 and 1 connected to a single DPDK port (port id 0), where phy_affinity 1 stands for hardware port 0 and phy_affinity 2 stands for hardware port 1, the commands below configure the Tx physical affinity per Tx queue:

	port config 0 txq 0 phy_affinity 1
	port config 0 txq 1 phy_affinity 1
	port config 0 txq 2 phy_affinity 2
	port config 0 txq 3 phy_affinity 2

These commands set Tx queues 0 and 1 to physical affinity 1, so packets sent on those queues leave through hardware port 0; similarly, packets sent on Tx queues 2 and 3 (affinity 2) leave through hardware port 1.
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
---
 app/test-pmd/cmdline.c                      | 100 ++++++++++++++++++++
 app/test-pmd/config.c                       |   1 +
 devtools/libabigail.abignore                |   5 +
 doc/guides/testpmd_app_ug/testpmd_funcs.rst |  13 +++
 lib/ethdev/rte_ethdev.h                     |  13 ++-
 5 files changed, 131 insertions(+), 1 deletion(-)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index b32dc8bfd4..3450b1be36 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -764,6 +764,10 @@ static void cmd_help_long_parsed(void *parsed_result,
 			"port cleanup (port_id) txq (queue_id) (free_cnt)\n"
 			"    Cleanup txq mbufs for a specific Tx queue\n\n"
+
+			"port config (port_id) txq (queue_id) phy_affinity (value)\n"
+			"    Set the physical affinity value "
+			"on a specific Tx queue\n\n"
 		);
 	}
 
@@ -12621,6 +12625,101 @@ static cmdline_parse_inst_t cmd_show_port_flow_transfer_proxy = {
 	}
 };
 
+/* *** configure port txq phy_affinity value *** */
+struct cmd_config_tx_phy_affinity {
+	cmdline_fixed_string_t port;
+	cmdline_fixed_string_t config;
+	portid_t portid;
+	cmdline_fixed_string_t txq;
+	uint16_t qid;
+	cmdline_fixed_string_t phy_affinity;
+	uint8_t value;
+};
+
+static void
+cmd_config_tx_phy_affinity_parsed(void *parsed_result,
+				  __rte_unused struct cmdline *cl,
+				  __rte_unused void *data)
+{
+	struct cmd_config_tx_phy_affinity *res = parsed_result;
+	struct rte_eth_dev_info dev_info;
+	struct rte_port *port;
+	int ret;
+
+	if (port_id_is_invalid(res->portid, ENABLED_WARN))
+		return;
+
+	if (res->portid == (portid_t)RTE_PORT_ALL) {
+		printf("Invalid port id\n");
+		return;
+	}
+
+	port = &ports[res->portid];
+
+	if (strcmp(res->txq, "txq")) {
+		printf("Unknown parameter\n");
+		return;
+	}
+	if (tx_queue_id_is_invalid(res->qid))
+		return;
+
+	ret = eth_dev_info_get_print_err(res->portid, &dev_info);
+	if (ret != 0)
+		return;
+
+	if (dev_info.nb_phy_ports == 0) {
+		printf("Number of physical ports is 0 which is invalid for PHY affinity\n");
+		return;
+	}
+	printf("The number of physical ports is %u\n", dev_info.nb_phy_ports);
+	if (dev_info.nb_phy_ports < res->value) {
+		printf("The PHY affinity value %u is invalid, it exceeds the "
+		       "number of physical ports\n", res->value);
+		return;
+	}
+	port->txq[res->qid].conf.tx_phy_affinity = res->value;
+
+	cmd_reconfig_device_queue(res->portid, 0, 1);
+}
+
+cmdline_parse_token_string_t cmd_config_tx_phy_affinity_port =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
+				 port, "port");
+cmdline_parse_token_string_t cmd_config_tx_phy_affinity_config =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
+				 config, "config");
+cmdline_parse_token_num_t cmd_config_tx_phy_affinity_portid =
+	TOKEN_NUM_INITIALIZER(struct cmd_config_tx_phy_affinity,
+			      portid, RTE_UINT16);
+cmdline_parse_token_string_t cmd_config_tx_phy_affinity_txq =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
+				 txq, "txq");
+cmdline_parse_token_num_t cmd_config_tx_phy_affinity_qid =
+	TOKEN_NUM_INITIALIZER(struct cmd_config_tx_phy_affinity,
+			      qid, RTE_UINT16);
+cmdline_parse_token_string_t cmd_config_tx_phy_affinity_hwport =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
+				 phy_affinity, "phy_affinity");
+cmdline_parse_token_num_t cmd_config_tx_phy_affinity_value =
+	TOKEN_NUM_INITIALIZER(struct cmd_config_tx_phy_affinity,
+			      value, RTE_UINT8);
+
+static cmdline_parse_inst_t cmd_config_tx_phy_affinity = {
+	.f = cmd_config_tx_phy_affinity_parsed,
+	.data = (void *)0,
+	.help_str = "port config <port_id> txq <queue_id> phy_affinity <value>",
+	.tokens = {
+		(void *)&cmd_config_tx_phy_affinity_port,
+		(void *)&cmd_config_tx_phy_affinity_config,
+		(void *)&cmd_config_tx_phy_affinity_portid,
+		(void *)&cmd_config_tx_phy_affinity_txq,
+		(void *)&cmd_config_tx_phy_affinity_qid,
+		(void *)&cmd_config_tx_phy_affinity_hwport,
+		(void *)&cmd_config_tx_phy_affinity_value,
+		NULL,
+	},
+};
+
 /* ******************************************************************************** */
 
 /* list of instructions */
@@ -12851,6 +12950,7 @@ static cmdline_parse_ctx_t builtin_ctx[] = {
 	(cmdline_parse_inst_t *)&cmd_show_capability,
 	(cmdline_parse_inst_t *)&cmd_set_flex_is_pattern,
 	(cmdline_parse_inst_t *)&cmd_set_flex_spec_pattern,
+	(cmdline_parse_inst_t *)&cmd_config_tx_phy_affinity,
 	NULL,
 };
 
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index acccb6b035..b83fb17cfa 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -936,6 +936,7 @@ port_infos_display(portid_t port_id)
 		printf("unknown\n");
 		break;
 	}
+	printf("Current number of physical ports: %u\n", dev_info.nb_phy_ports);
 }
 
 void
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 7a93de3ba1..0f4b5ec74b 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -34,3 +34,8 @@
 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
 ; Temporary exceptions till next major ABI version ;
 ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
+
+; Ignore fields inserted in middle padding of rte_eth_txconf
+[suppress_type]
+	name = rte_eth_txconf
+	has_data_member_inserted_between = {offset_of(tx_deferred_start), offset_of(offloads)}
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 0037506a79..856fb55005 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -1605,6 +1605,19 @@ Enable or disable a per queue Tx offloading only on a specific Tx queue::
 
 This command should be run when the port is stopped, or else it will fail.
 
+Config per queue Tx physical affinity
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure a per queue physical affinity value only on a specific Tx queue::
+
+   testpmd> port config (port_id) txq (queue_id) phy_affinity (value)
+
+* ``phy_affinity``: the hardware port that packets sent on this queue
+  go out through; use it when multiple hardware ports are connected to
+  a single DPDK port (mhpsdp).
+
+This command should be run when the port is stopped, or else it will fail.
+
 Config VXLAN Encap outer layers
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index c129ca1eaf..ecfa2c6781 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -1138,6 +1138,16 @@ struct rte_eth_txconf {
 				      less free descriptors than this value. */
 
 	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
+	/**
+	 * Physical affinity to be set.
+	 * Value 0 means no affinity: traffic may be routed between different
+	 * physical ports. If value 0 is disabled by the device, trying to
+	 * match on phy_affinity 0 results in an error.
+	 *
+	 * Values starting from 1 select a specific physical port;
+	 * value 1 is the first physical port.
+	 */
+	uint8_t tx_phy_affinity;
 	/**
 	 * Per-queue Tx offloads to be set using RTE_ETH_TX_OFFLOAD_* flags.
 	 * Only offloads set on tx_queue_offload_capa or tx_offload_capa
@@ -1777,7 +1787,8 @@ struct rte_eth_dev_info {
 	struct rte_eth_switch_info switch_info;
 	/** Supported error handling mode. */
 	enum rte_eth_err_handle_mode err_handle_mode;
-
+	/** Number of physical ports connected to this single DPDK port. */
+	uint8_t nb_phy_ports;
 	uint64_t reserved_64s[2]; /**< Reserved for future fields */
 	void *reserved_ptrs[2];   /**< Reserved for future fields */
 };
-- 
2.18.1