From: "Jiawei(Jonny) Wang" <jiaweiw@nvidia.com>
To: Ferruh Yigit <ferruh.yigit@amd.com>,
Slava Ovsiienko <viacheslavo@nvidia.com>,
Ori Kam <orika@nvidia.com>,
"NBU-Contact-Thomas Monjalon (EXTERNAL)" <thomas@monjalon.net>,
"andrew.rybchenko@oktetlabs.ru" <andrew.rybchenko@oktetlabs.ru>,
Aman Singh <aman.deep.singh@intel.com>,
Yuying Zhang <yuying.zhang@intel.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: [PATCH v4 1/2] ethdev: introduce the PHY affinity field in Tx queue API
Date: Fri, 10 Feb 2023 14:06:11 +0000 [thread overview]
Message-ID: <PH0PR12MB5451AC724695D9B36F09C7B4C6DE9@PH0PR12MB5451.namprd12.prod.outlook.com> (raw)
In-Reply-To: <562e77e6-ea8a-7a56-6fbd-8aba70b295d4@amd.com>
Hi,
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@amd.com>
> Sent: Friday, February 10, 2023 3:45 AM
> To: Jiawei(Jonny) Wang <jiaweiw@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Ori Kam <orika@nvidia.com>; NBU-Contact-
> Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>;
> andrew.rybchenko@oktetlabs.ru; Aman Singh <aman.deep.singh@intel.com>;
> Yuying Zhang <yuying.zhang@intel.com>
> Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>
> Subject: Re: [PATCH v4 1/2] ethdev: introduce the PHY affinity field in Tx queue
> API
>
> On 2/3/2023 1:33 PM, Jiawei Wang wrote:
> > When multiple physical ports are connected to a single DPDK port,
> > (example: kernel bonding, DPDK bonding, failsafe, etc.), we want to
> > know which physical port is used for Rx and Tx.
> >
>
> I assume "kernel bonding" is out of context, but this patch concerns DPDK
> bonding, failsafe or softnic. (I will refer to them as virtual bonding
> devices.)
>
"Kernel bonding" here refers to Linux kernel bonding.
> Using specific queues of the virtual bonding device may interfere with the
> logic of these devices, like bonding modes or RSS of the underlying devices. I
> can see the feature focuses on a very specific use case, but I am not sure all
> possible side effects have been taken into consideration.
>
>
> And although the feature is only relevant to virtual bonding devices, core
> ethdev structures are updated for it. Most use cases won't need these, so is
> there a way to reduce the scope of the changes to virtual bonding devices?
>
>
> There are a few very core ethdev APIs, like:
> rte_eth_dev_configure()
> rte_eth_tx_queue_setup()
> rte_eth_rx_queue_setup()
> rte_eth_dev_start()
> rte_eth_dev_info_get()
>
> Almost every user of ethdev uses these APIs; since they are so fundamental, I
> am for being a little more conservative with them.
>
> Every eccentric feature targets these APIs first, because they are
> common and extending them gives an easy solution, but in the long run this makes
> these APIs more complex, harder to maintain, and harder for PMDs to support
> correctly. So I am for not updating them unless it is a generic use case.
>
>
> Also, as we talked about PMDs supporting them, I assume your upcoming PMD
> patch will implement the 'tx_phy_affinity' config option only for mlx drivers.
> What will happen for other NICs? Will they silently ignore the config option
> from the user? That is a problem for DPDK application portability.
>
Yes, the PMD patch is for net/mlx5 only; 'tx_phy_affinity' lets the HW
map a queue to a physical port.
Other NICs ignore this new configuration for now; alternatively, should we add a check in queue setup?
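A check in queue setup could look like the sketch below. The struct and field names here are mock stand-ins for the proposed `tx_phy_affinity` field in `rte_eth_txconf` and a physical-port count reported via `rte_eth_dev_info_get()`; this is an illustration of the validation logic, not the actual patch code.

```c
#include <errno.h>
#include <stdint.h>

/* Mock stand-ins for the relevant ethdev structures; the real code
 * would use struct rte_eth_txconf and struct rte_eth_dev_info. */
struct mock_txconf {
	uint8_t tx_phy_affinity;   /* 0 = no affinity (default) */
};

struct mock_dev_info {
	uint8_t nb_phy_ports;      /* physical ports behind this DPDK port */
};

/* Validation a driver (or the generic tx_queue_setup path) could run so
 * that unsupported configurations are rejected instead of silently
 * ignored. */
static int
check_tx_phy_affinity(const struct mock_txconf *conf,
		      const struct mock_dev_info *info)
{
	if (conf->tx_phy_affinity == 0)
		return 0;          /* default: any port, always OK */
	if (info->nb_phy_ports == 0)
		return -ENOTSUP;   /* no multi-port backing device */
	if (conf->tx_phy_affinity > info->nb_phy_ports)
		return -EINVAL;    /* affinity value out of range */
	return 0;
}
```

With a check like this, a NIC that does not report any physical ports would return -ENOTSUP rather than silently dropping the option.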
>
>
> As far as I understand, the target is the application controlling which sub-device is used
> under the virtual bonding device. Can you please give more information on why
> this is required? Perhaps it can help us provide a better/different solution.
> Like adding the ability to use both the bonding device and the sub-devices for the data path,
> this way the application can use whichever it wants. (This is just the first solution I
> came up with; I am not suggesting it as a replacement, but if you can describe
> the problem more, I am sure other people can come up with better solutions.)
>
For example:
There are two physical ports (assume device interfaces eth2 and eth3), bonded
into one interface (assume bond0).
The DPDK application probed/attached only bond0 (DPDK port id 0). When sending traffic from the DPDK port,
we want to control which physical port (eth2 or eth3) each packet is sent on.
With the new configuration, a queue can be associated with an underlay device,
so the DPDK application can send traffic on the desired physical port by choosing the queue.
Attaching all devices to DPDK instead would mean creating multiple Rx/Tx queue resources on each of them.
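The queue-to-port mapping described above can be sketched as a small table; the names and layout here are illustrative (four Tx queues split between the two bonded physical ports), not part of the proposed API.

```c
#include <stdint.h>

/* Per-queue affinity table for the example: DPDK port 0 bonds two
 * physical ports (eth2, eth3). Affinity values are 1-based; 0 would
 * mean "no affinity". */
enum { NB_TXQ = 4 };

static const uint8_t txq_phy_affinity[NB_TXQ] = {
	1, /* txq 0 -> first physical port (eth2) */
	1, /* txq 1 -> first physical port (eth2) */
	2, /* txq 2 -> second physical port (eth3) */
	2, /* txq 3 -> second physical port (eth3) */
};

/* The application picks a Tx queue whose affinity matches the physical
 * port it wants the packet to egress on; returns -1 when no queue is
 * mapped to that port. */
static int
queue_for_phy_port(uint8_t phy_port)
{
	for (int q = 0; q < NB_TXQ; q++)
		if (txq_phy_affinity[q] == phy_port)
			return q;
	return -1;
}
```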
> And isn't this against the application being transparent to whether the underlying
> device is a bonding device or an actual device?
>
>
> > This patch maps a DPDK Tx queue with a physical port, by adding
> > tx_phy_affinity setting in Tx queue.
> > The affinity number is the physical port ID where packets will be
> > sent.
> > Value 0 means no affinity: traffic can be routed to any connected
> > physical port. This is the current default behavior.
> >
> > The number of physical ports is reported with rte_eth_dev_info_get().
> >
> > The new tx_phy_affinity field is added into a padding hole of the
> > rte_eth_txconf structure, so the size of rte_eth_txconf stays the same.
> > An ABI check rule needs to be added to avoid a false warning.
> >
> > Add the testpmd command line:
> > testpmd> port config (port_id) txq (queue_id) phy_affinity (value)
> >
> > For example, there are two physical ports connected to a single DPDK
> > port (port id 0); phy_affinity 1 stands for the first physical port
> > and phy_affinity 2 stands for the second physical port.
> > Use the commands below to configure Tx phy affinity per Tx queue:
> > port config 0 txq 0 phy_affinity 1
> > port config 0 txq 1 phy_affinity 1
> > port config 0 txq 2 phy_affinity 2
> > port config 0 txq 3 phy_affinity 2
> >
> > These commands configure Tx queues 0 and 1 with phy affinity 1, so
> > packets sent on Tx queue 0 or Tx queue 1 will go out on the first
> > physical port; similarly, packets sent on Tx queue 2 or Tx queue 3
> > will go out on the second physical port.
> >
> > Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
> > ---
snip
Thread overview: 40+ messages
[not found] <20221221102934.13822-1-jiaweiw@nvidia.com/>
2023-02-03 5:07 ` [PATCH v3 0/2] add new PHY affinity in the flow item and " Jiawei Wang
2023-02-03 5:07 ` [PATCH v3 1/2] ethdev: introduce the PHY affinity field in " Jiawei Wang
2023-02-03 5:07 ` [PATCH v3 2/2] ethdev: add PHY affinity match item Jiawei Wang
2023-02-03 13:33 ` [PATCH v4 0/2] add new PHY affinity in the flow item and Tx queue API Jiawei Wang
2023-02-03 13:33 ` [PATCH v4 1/2] ethdev: introduce the PHY affinity field in " Jiawei Wang
2023-02-06 15:29 ` Jiawei(Jonny) Wang
2023-02-07 9:40 ` Ori Kam
2023-02-09 19:44 ` Ferruh Yigit
2023-02-10 14:06 ` Jiawei(Jonny) Wang [this message]
2023-02-14 9:38 ` Jiawei(Jonny) Wang
2023-02-14 10:01 ` Ferruh Yigit
2023-02-03 13:33 ` [PATCH v4 2/2] ethdev: add PHY affinity match item Jiawei Wang
2023-02-14 15:48 ` [PATCH v5 0/2] Added support for Tx queue mapping with an aggregated port Jiawei Wang
2023-02-14 15:48 ` [PATCH v5 1/2] ethdev: introduce the Tx map API for aggregated ports Jiawei Wang
2023-02-15 11:41 ` Jiawei(Jonny) Wang
2023-02-16 17:42 ` Thomas Monjalon
2023-02-17 6:45 ` Jiawei(Jonny) Wang
2023-02-16 17:58 ` Ferruh Yigit
2023-02-17 6:44 ` Jiawei(Jonny) Wang
2023-02-17 8:24 ` Andrew Rybchenko
2023-02-17 9:50 ` Jiawei(Jonny) Wang
2023-02-14 15:48 ` [PATCH v5 2/2] ethdev: add Aggregated affinity match item Jiawei Wang
2023-02-16 17:46 ` Thomas Monjalon
2023-02-17 6:45 ` Jiawei(Jonny) Wang
2023-02-17 10:50 ` [PATCH v6 0/2] Add Tx queue mapping of aggregated ports Jiawei Wang
2023-02-17 10:50 ` [PATCH v6 1/2] ethdev: add " Jiawei Wang
2023-02-17 12:53 ` Ferruh Yigit
2023-02-17 12:56 ` Andrew Rybchenko
2023-02-17 12:59 ` Ferruh Yigit
2023-02-17 13:05 ` Jiawei(Jonny) Wang
2023-02-17 13:41 ` Jiawei(Jonny) Wang
2023-02-17 15:03 ` Andrew Rybchenko
2023-02-17 15:32 ` Ferruh Yigit
2023-02-17 10:50 ` [PATCH v6 2/2] ethdev: add flow matching of aggregated port Jiawei Wang
2023-02-17 13:01 ` [PATCH v6 0/2] Add Tx queue mapping of aggregated ports Ferruh Yigit
2023-02-17 13:07 ` Jiawei(Jonny) Wang
2023-02-17 15:47 ` [PATCH v7 " Jiawei Wang
2023-02-17 15:47 ` [PATCH v7 1/2] ethdev: add " Jiawei Wang
2023-02-17 15:47 ` [PATCH v7 2/2] ethdev: add flow matching of aggregated port Jiawei Wang
2023-02-17 16:45 ` [PATCH v7 0/2] Add Tx queue mapping of aggregated ports Ferruh Yigit