From mboxrd@z Thu Jan  1 00:00:00 1970
Message-ID: <8bf62f42-c40b-536b-1946-f1158dbb31b0@amd.com>
Date: Tue, 14 Feb 2023 10:01:31 +0000
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
 Thunderbird/102.7.2
Subject: Re: [PATCH v4 1/2] ethdev: introduce the PHY affinity field in Tx
 queue API
Content-Language: en-US
To: "Jiawei(Jonny) Wang" <jiaweiw@nvidia.com>,
 Slava Ovsiienko <viacheslavo@nvidia.com>, Ori Kam <orika@nvidia.com>,
 "NBU-Contact-Thomas Monjalon (EXTERNAL)" <thomas@monjalon.net>,
 "andrew.rybchenko@oktetlabs.ru" <andrew.rybchenko@oktetlabs.ru>,
 Aman Singh <aman.deep.singh@intel.com>, Yuying Zhang <yuying.zhang@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, Raslan Darawsheh <rasland@nvidia.com>
References: <20230203050717.46914-1-jiaweiw@nvidia.com>
 <20230203133339.30271-1-jiaweiw@nvidia.com>
 <20230203133339.30271-2-jiaweiw@nvidia.com>
 <562e77e6-ea8a-7a56-6fbd-8aba70b295d4@amd.com>
 <PH0PR12MB5451BED719E2C4C890EF96BEC6A29@PH0PR12MB5451.namprd12.prod.outlook.com>
From: Ferruh Yigit <ferruh.yigit@amd.com>
In-Reply-To: <PH0PR12MB5451BED719E2C4C890EF96BEC6A29@PH0PR12MB5451.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 2/14/2023 9:38 AM, Jiawei(Jonny) Wang wrote:
> Hi,
> 
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@amd.com>
>> Sent: Friday, February 10, 2023 3:45 AM
>> To: Jiawei(Jonny) Wang <jiaweiw@nvidia.com>; Slava Ovsiienko
>> <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; NBU-Contact-
>> Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>;
>> andrew.rybchenko@oktetlabs.ru; Aman Singh <aman.deep.singh@intel.com>;
>> Yuying Zhang <yuying.zhang@intel.com>
>> Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>
>> Subject: Re: [PATCH v4 1/2] ethdev: introduce the PHY affinity field in Tx queue
>> API
>>
>> On 2/3/2023 1:33 PM, Jiawei Wang wrote:
>>> When multiple physical ports are connected to a single DPDK port
>>> (for example: kernel bonding, DPDK bonding, failsafe, etc.), we want to
>>> know which physical port is used for Rx and Tx.
>>>
>>
>> I assume "kernel bonding" is out of context, but this patch concerns DPDK
>> bonding, failsafe or softnic. (I will refer to them as virtual bonding
>> devices.)
>>
>> Using specific queues of the virtual bonding device may interfere with the
>> logic of these devices, like bonding modes or RSS of the underlying devices.
>> I can see the feature focuses on a very specific use case, but I am not sure
>> all possible side effects have been taken into consideration.
>>
>>
>> And although the feature is only relevant to virtual bonding devices, core
>> ethdev structures are updated for it. Most use cases won't need these, so is
>> there a way to reduce the scope of the changes to virtual bonding devices?
>>
>>
>> There are a few very core ethdev APIs, like:
>> rte_eth_dev_configure()
>> rte_eth_tx_queue_setup()
>> rte_eth_rx_queue_setup()
>> rte_eth_dev_start()
>> rte_eth_dev_info_get()
>>
>> Almost every user of ethdev uses these APIs; since they are so fundamental,
>> I am for being a little more conservative with them.
>>
>> Every eccentric feature targets these APIs first, because they are common
>> and extending them gives an easy solution, but in the long run this makes
>> these APIs more complex, harder to maintain and harder for PMDs to support
>> correctly. So I am for not updating them unless it is a generic use case.
>>
>>
>> Also, as we talked about PMDs supporting them, I assume your upcoming PMD
>> patch will implement the 'tx_phy_affinity' config option only for mlx
>> drivers. What will happen with other NICs? Will they silently ignore the
>> config option from the user? That would be a problem for DPDK application
>> portability.
>>
>>
>>
>> As far as I understand, the target is the application controlling which
>> sub-device is used under the virtual bonding device. Can you please give
>> more information on why this is required? Perhaps it can help us find a
>> better/different solution, like adding the ability to use both the bonding
>> device and the sub-devices for the data path, so the application can use
>> whichever it wants. (This is just the first solution I came up with; I am
>> not suggesting it as a replacement, but if you can describe the problem
>> more, I am sure other people can come up with better solutions.)
>>
>> And isn't this against keeping the application transparent to whether the
>> underlying device is a bonding device or an actual device?
>>
>>
> 
> OK, I will send a new version with separate functions in the ethdev layer
> to support mapping a Tx queue to a port and getting the number of ports.
> These functions work through device ops callbacks; other NICs will report
> the operation as unsupported when the ops callback is NULL.
> 

OK, thanks Jonny, at least this separates the feature into its own APIs,
which reduces the impact on applications and drivers that are not using
this feature.
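
Just to confirm we understood each other, below is a rough sketch of what I
expect such a dedicated function to look like. The names here
(rte_eth_dev_map_txq_phy_port and the map_txq_phy_port dev_ops callback) are
made up for illustration only, not the actual API of your coming version:

/* Illustrative sketch only; names are placeholders, not the real API. */
int
rte_eth_dev_map_txq_phy_port(uint16_t port_id, uint16_t tx_queue_id,
			     uint8_t phy_port)
{
	struct rte_eth_dev *dev;

	if (!rte_eth_dev_is_valid_port(port_id))
		return -ENODEV;
	dev = &rte_eth_devices[port_id];

	/* Drivers that leave the callback NULL report unsupported. */
	if (dev->dev_ops->map_txq_phy_port == NULL)
		return -ENOTSUP;

	return dev->dev_ops->map_txq_phy_port(dev, tx_queue_id, phy_port);
}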


>>> This patch maps a DPDK Tx queue to a physical port, by adding a
>>> tx_phy_affinity setting in the Tx queue.
>>> The affinity number is the physical port ID where packets will be
>>> sent.
>>> Value 0 means no affinity, so traffic can be routed to any connected
>>> physical port; this is the current default behavior.
>>>
>>> The number of physical ports is reported with rte_eth_dev_info_get().
>>>
>>> The new tx_phy_affinity field is added in the padding hole of the
>>> rte_eth_txconf structure, so the size of rte_eth_txconf stays the same.
>>> An ABI check rule needs to be added to avoid a false warning.
>>>
>>> Add the testpmd command line:
>>> testpmd> port config (port_id) txq (queue_id) phy_affinity (value)
>>>
>>> For example, with two physical ports connected to a single DPDK
>>> port (port id 0), phy_affinity 1 stands for the first physical port
>>> and phy_affinity 2 stands for the second physical port.
>>> Use the below commands to configure Tx PHY affinity per Tx queue:
>>>         port config 0 txq 0 phy_affinity 1
>>>         port config 0 txq 1 phy_affinity 1
>>>         port config 0 txq 2 phy_affinity 2
>>>         port config 0 txq 3 phy_affinity 2
>>>
>>> These commands configure Tx queue index 0 and Tx queue index 1 with
>>> PHY affinity 1, so packets sent with Tx queue 0 or Tx queue 1 will
>>> leave through the first physical port; similarly, packets sent with
>>> Tx queue 2 or Tx queue 3 will leave through the second physical port.
>>>
>>> Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
>>> ---
>>>  app/test-pmd/cmdline.c                      | 100 ++++++++++++++++++++
>>>  app/test-pmd/config.c                       |   1 +
>>>  devtools/libabigail.abignore                |   5 +
>>>  doc/guides/rel_notes/release_23_03.rst      |   4 +
>>>  doc/guides/testpmd_app_ug/testpmd_funcs.rst |  13 +++
>>>  lib/ethdev/rte_ethdev.h                     |  10 ++
>>>  6 files changed, 133 insertions(+)
>>>
>>> diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
>>> index cb8c174020..f771fcf8ac 100644
>>> --- a/app/test-pmd/cmdline.c
>>> +++ b/app/test-pmd/cmdline.c
>>> @@ -776,6 +776,10 @@ static void cmd_help_long_parsed(void *parsed_result,
>>>
>>>  			"port cleanup (port_id) txq (queue_id) (free_cnt)\n"
>>>  			"    Cleanup txq mbufs for a specific Tx queue\n\n"
>>> +
>>> +			"port config (port_id) txq (queue_id) phy_affinity (value)\n"
>>> +			"    Set the physical affinity value "
>>> +			"on a specific Tx queue\n\n"
>>>  		);
>>>  	}
>>>
>>> @@ -12633,6 +12637,101 @@ static cmdline_parse_inst_t cmd_show_port_flow_transfer_proxy = {
>>>  	}
>>>  };
>>>
>>> +/* *** configure port txq phy_affinity value *** */
>>> +struct cmd_config_tx_phy_affinity {
>>> +	cmdline_fixed_string_t port;
>>> +	cmdline_fixed_string_t config;
>>> +	portid_t portid;
>>> +	cmdline_fixed_string_t txq;
>>> +	uint16_t qid;
>>> +	cmdline_fixed_string_t phy_affinity;
>>> +	uint8_t value;
>>> +};
>>> +
>>> +static void
>>> +cmd_config_tx_phy_affinity_parsed(void *parsed_result,
>>> +				  __rte_unused struct cmdline *cl,
>>> +				  __rte_unused void *data)
>>> +{
>>> +	struct cmd_config_tx_phy_affinity *res = parsed_result;
>>> +	struct rte_eth_dev_info dev_info;
>>> +	struct rte_port *port;
>>> +	int ret;
>>> +
>>> +	if (port_id_is_invalid(res->portid, ENABLED_WARN))
>>> +		return;
>>> +
>>> +	if (res->portid == (portid_t)RTE_PORT_ALL) {
>>> +		printf("Invalid port id\n");
>>> +		return;
>>> +	}
>>> +
>>> +	port = &ports[res->portid];
>>> +
>>> +	if (strcmp(res->txq, "txq")) {
>>> +		printf("Unknown parameter\n");
>>> +		return;
>>> +	}
>>> +	if (tx_queue_id_is_invalid(res->qid))
>>> +		return;
>>> +
>>> +	ret = eth_dev_info_get_print_err(res->portid, &dev_info);
>>> +	if (ret != 0)
>>> +		return;
>>> +
>>> +	if (dev_info.nb_phy_ports == 0) {
>>> +		printf("Number of physical ports is 0 which is invalid for PHY Affinity\n");
>>> +		return;
>>> +	}
>>> +	printf("The number of physical ports is %u\n", dev_info.nb_phy_ports);
>>> +	if (dev_info.nb_phy_ports < res->value) {
>>> +		printf("The PHY affinity value %u is Invalid, exceeds the "
>>> +		       "number of physical ports\n", res->value);
>>> +		return;
>>> +	}
>>> +	port->txq[res->qid].conf.tx_phy_affinity = res->value;
>>> +
>>> +	cmd_reconfig_device_queue(res->portid, 0, 1);
>>> +}
>>> +
>>> +cmdline_parse_token_string_t cmd_config_tx_phy_affinity_port =
>>> +	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
>>> +				 port, "port");
>>> +cmdline_parse_token_string_t cmd_config_tx_phy_affinity_config =
>>> +	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
>>> +				 config, "config");
>>> +cmdline_parse_token_num_t cmd_config_tx_phy_affinity_portid =
>>> +	TOKEN_NUM_INITIALIZER(struct cmd_config_tx_phy_affinity,
>>> +				 portid, RTE_UINT16);
>>> +cmdline_parse_token_string_t cmd_config_tx_phy_affinity_txq =
>>> +	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
>>> +				 txq, "txq");
>>> +cmdline_parse_token_num_t cmd_config_tx_phy_affinity_qid =
>>> +	TOKEN_NUM_INITIALIZER(struct cmd_config_tx_phy_affinity,
>>> +			      qid, RTE_UINT16);
>>> +cmdline_parse_token_string_t cmd_config_tx_phy_affinity_hwport =
>>> +	TOKEN_STRING_INITIALIZER(struct cmd_config_tx_phy_affinity,
>>> +				 phy_affinity, "phy_affinity");
>>> +cmdline_parse_token_num_t cmd_config_tx_phy_affinity_value =
>>> +	TOKEN_NUM_INITIALIZER(struct cmd_config_tx_phy_affinity,
>>> +			      value, RTE_UINT8);
>>> +
>>> +static cmdline_parse_inst_t cmd_config_tx_phy_affinity = {
>>> +	.f = cmd_config_tx_phy_affinity_parsed,
>>> +	.data = (void *)0,
>>> +	.help_str = "port config <port_id> txq <queue_id> phy_affinity <value>",
>>> +	.tokens = {
>>> +		(void *)&cmd_config_tx_phy_affinity_port,
>>> +		(void *)&cmd_config_tx_phy_affinity_config,
>>> +		(void *)&cmd_config_tx_phy_affinity_portid,
>>> +		(void *)&cmd_config_tx_phy_affinity_txq,
>>> +		(void *)&cmd_config_tx_phy_affinity_qid,
>>> +		(void *)&cmd_config_tx_phy_affinity_hwport,
>>> +		(void *)&cmd_config_tx_phy_affinity_value,
>>> +		NULL,
>>> +	},
>>> +};
>>> +
>>>  /* ******************************************************************** */
>>>
>>>  /* list of instructions */
>>> @@ -12866,6 +12965,7 @@ static cmdline_parse_ctx_t builtin_ctx[] = {
>>>  	(cmdline_parse_inst_t *)&cmd_show_port_cman_capa,
>>>  	(cmdline_parse_inst_t *)&cmd_show_port_cman_config,
>>>  	(cmdline_parse_inst_t *)&cmd_set_port_cman_config,
>>> +	(cmdline_parse_inst_t *)&cmd_config_tx_phy_affinity,
>>>  	NULL,
>>>  };
>>>
>>> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
>>> index acccb6b035..b83fb17cfa 100644
>>> --- a/app/test-pmd/config.c
>>> +++ b/app/test-pmd/config.c
>>> @@ -936,6 +936,7 @@ port_infos_display(portid_t port_id)
>>>  		printf("unknown\n");
>>>  		break;
>>>  	}
>>> +	printf("Current number of physical ports: %u\n", dev_info.nb_phy_ports);
>>>  }
>>>
>>>  void
>>> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
>>> index 7a93de3ba1..ac7d3fb2da 100644
>>> --- a/devtools/libabigail.abignore
>>> +++ b/devtools/libabigail.abignore
>>> @@ -34,3 +34,8 @@
>>>  ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
>>>  ; Temporary exceptions till next major ABI version ;
>>> ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
>>> +
>>> +; Ignore fields inserted in padding hole of rte_eth_txconf
>>> +[suppress_type]
>>> +        name = rte_eth_txconf
>>> +        has_data_member_inserted_between = {offset_of(tx_deferred_start), offset_of(offloads)}
>>> diff --git a/doc/guides/rel_notes/release_23_03.rst b/doc/guides/rel_notes/release_23_03.rst
>>> index 73f5d94e14..e99bd2dcb6 100644
>>> --- a/doc/guides/rel_notes/release_23_03.rst
>>> +++ b/doc/guides/rel_notes/release_23_03.rst
>>> @@ -55,6 +55,10 @@ New Features
>>>       Also, make sure to start the actual text at the margin.
>>>       =======================================================
>>>
>>> +* **Added affinity for multiple physical ports connected to a single DPDK port.**
>>> +
>>> +  * Added Tx affinity in queue setup to map a physical port.
>>> +
>>>  * **Updated AMD axgbe driver.**
>>>
>>>    * Added multi-process support.
>>> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
>>> index 79a1fa9cb7..5c716f7679 100644
>>> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
>>> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
>>> @@ -1605,6 +1605,19 @@ Enable or disable a per queue Tx offloading only on a specific Tx queue::
>>>
>>>  This command should be run when the port is stopped, or else it will fail.
>>>
>>> +Config per queue Tx physical affinity
>>> +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>> +
>>> +Configure a per queue physical affinity value only on a specific Tx queue::
>>> +
>>> +   testpmd> port config (port_id) txq (queue_id) phy_affinity (value)
>>> +
>>> +* ``phy_affinity``: physical port to use for sending,
>>> +                    when multiple physical ports are connected to
>>> +                    a single DPDK port.
>>> +
>>> +This command should be run when the port is stopped, otherwise it fails.
>>> +
>>>  Config VXLAN Encap outer layers
>>>  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>
>>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
>>> index c129ca1eaf..2fd971b7b5 100644
>>> --- a/lib/ethdev/rte_ethdev.h
>>> +++ b/lib/ethdev/rte_ethdev.h
>>> @@ -1138,6 +1138,14 @@ struct rte_eth_txconf {
>>>  				      less free descriptors than this value. */
>>>
>>>  	uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
>>> +	/**
>>> +	 * Affinity with one of the multiple physical ports connected to the DPDK port.
>>> +	 * Value 0 means no affinity and traffic could be routed to any connected
>>> +	 * physical port.
>>> +	 * The first physical port is number 1 and so on.
>>> +	 * Number of physical ports is reported by nb_phy_ports in rte_eth_dev_info.
>>> +	 */
>>> +	uint8_t tx_phy_affinity;
>>>  	/**
>>>  	 * Per-queue Tx offloads to be set using RTE_ETH_TX_OFFLOAD_* flags.
>>>  	 * Only offloads set on tx_queue_offload_capa or tx_offload_capa
>>> @@ -1744,6 +1752,8 @@ struct rte_eth_dev_info {
>>>  	/** Device redirection table size, the total number of entries. */
>>>  	uint16_t reta_size;
>>>  	uint8_t hash_key_size; /**< Hash key size in bytes */
>>> +	/** Number of physical ports connected to the DPDK port. */
>>> +	uint8_t nb_phy_ports;
>>>  	/** Bit mask of RSS offloads, the bit offset also means flow type */
>>>  	uint64_t flow_type_rss_offloads;
>>>  	struct rte_eth_rxconf default_rxconf; /**< Default Rx configuration */
>
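
For reference, a minimal sketch of how an application would consume the v4
API quoted above (assuming this patch is applied; setup_tx_affinity, port_id
and nb_txd are illustrative placeholders, and error handling is kept short):

#include <errno.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>

/* Sketch: pin Tx queue 0 to the first physical port and Tx queue 1 to the
 * second one, on a DPDK port backed by two physical ports. */
static int
setup_tx_affinity(uint16_t port_id, uint16_t nb_txd)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_txconf txconf;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;
	/* nb_phy_ports is the new field reported by the driver. */
	if (dev_info.nb_phy_ports < 2)
		return -ENOTSUP;

	txconf = dev_info.default_txconf;
	txconf.tx_phy_affinity = 1; /* queue 0 -> first physical port */
	ret = rte_eth_tx_queue_setup(port_id, 0, nb_txd, rte_socket_id(), &txconf);
	if (ret != 0)
		return ret;

	txconf.tx_phy_affinity = 2; /* queue 1 -> second physical port */
	return rte_eth_tx_queue_setup(port_id, 1, nb_txd, rte_socket_id(), &txconf);
}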