DPDK patches and discussions
From: "Jiawei(Jonny) Wang" <jiaweiw@nvidia.com>
To: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>,
	Slava Ovsiienko <viacheslavo@nvidia.com>,
	Ori Kam <orika@nvidia.com>,
	"NBU-Contact-Thomas Monjalon (EXTERNAL)" <thomas@monjalon.net>,
	Aman Singh <aman.deep.singh@intel.com>,
	Yuying Zhang <yuying.zhang@intel.com>,
	Ferruh Yigit <ferruh.yigit@amd.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: [PATCH v2 1/2] ethdev: add PHY affinity match item
Date: Wed, 1 Feb 2023 14:59:46 +0000	[thread overview]
Message-ID: <PH0PR12MB54515AE0B5DFD5391A18A4A1C6D19@PH0PR12MB5451.namprd12.prod.outlook.com> (raw)
In-Reply-To: <0377ab58-b69e-e327-e2fc-5b96febdedaa@oktetlabs.ru>

Hi,

> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Wednesday, February 1, 2023 4:50 PM
> On 1/30/23 20:00, Jiawei Wang wrote:
> > For the multiple hardware ports connect to a single DPDK port
> > (mhpsdp),
> 
> Sorry, what is mhpsdp?
> 

(m)ultiple (h)ardware (p)orts (s)ingle (D)PDK (p)ort.
It's the short name for "multiple hardware ports connected to a single DPDK port".

> > currently, there is no information to indicate the packet belongs to
> > which hardware port.
> >
> > This patch introduces a new phy affinity item in rte flow API, and
> 
> "This patch introduces ..." -> "Introduce ..."
> rte -> RTE
> 

OK.
> > the phy affinity value reflects the physical port of the received packets.
> >
> > While uses the phy affinity as a matching item in the flow, and sets
> > the same phy_affinity value on the tx queue, then the packet can be
> > sent from
> 
> tx -> Tx
> 

OK.
> > the same hardware port with received.
> >
> > This patch also adds the testpmd command line to match the new item:
> > 	flow create 0 ingress group 0 pattern phy_affinity affinity is 1 /
> > 	end actions queue index 0 / end
> >
> > The above command means that creates a flow on a single DPDK port and
> > matches the packet from the first physical port (assume the phy
> > affinity 1
> 
> Why is it numbered from 1, not 0? Anyway it should be defined in the
> documentation below.
> 

When the phy affinity is used as a matching item in the flow, and the same
phy_affinity value is set on the Tx queue, the packet can be sent from the
same hardware port it was received on.

So, if phy affinity 0 means "no affinity", then the value for the first port should be 1.


> > stands for the first port) and redirects these packets into RxQ 0.
> >
> > Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
> 
> [snip]
> 
> > diff --git a/doc/guides/rel_notes/release_23_03.rst
> > b/doc/guides/rel_notes/release_23_03.rst
> > index c15f6fbb9f..a1abd67771 100644
> > --- a/doc/guides/rel_notes/release_23_03.rst
> > +++ b/doc/guides/rel_notes/release_23_03.rst
> > @@ -69,6 +69,11 @@ New Features
> >       ``rte_event_dev_config::nb_single_link_event_port_queues`` parameter
> >       required for eth_rx, eth_tx, crypto and timer eventdev adapters.
> >
> > +* **Added rte_flow support for matching PHY Affinity fields.**
> 
> Why "Affinity", not "affinity"?
> 

Correct, will update.
> > +
> > +  For the multiple hardware ports connect to a single DPDK port
> > + (mhpsdp),  Added ``phy_affinity`` item in rte_flow to support
> > + physical affinity of  the packets.
> 
> Please, add one more empty line to have two before the next section.
> 
OK.
> >
> >   Removed Items
> >   -------------
> 
> [snip]
> 
> > diff --git a/lib/ethdev/rte_flow.h b/lib/ethdev/rte_flow.h index
> > b60987db4b..56c04ea37c 100644
> > --- a/lib/ethdev/rte_flow.h
> > +++ b/lib/ethdev/rte_flow.h
> 
> > @@ -2103,6 +2110,27 @@ static const struct rte_flow_item_meter_color
> rte_flow_item_meter_color_mask = {
> >   };
> >   #endif
> >
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this structure may change without prior notice
> > + *
> > + * RTE_FLOW_ITEM_TYPE_PHY_AFFINITY
> > + *
> > + * For the multiple hardware ports connect to a single DPDK port
> > +(mhpsdp),
> > + * use this item to match the physical affinity of the packets.
> > + */
> > +struct rte_flow_item_phy_affinity {
> > +	uint8_t affinity; /**< physical affinity value. */
> 
> Sorry, I'd like to know how application should find out which values may be
> used here? How many physical ports are behind this one DPDK ethdev?
> 

Like the Linux bonding scenario: multiple physical ports (for example PF1 and PF2)
can be added to a bond port in the slave role, and DPDK only probes and attaches
the bond master port (bond0), so there are two phy affinity values in total.

The PMD can define the phy affinity values and map them to the physical ports,
or I can document the numbering at the RTE level.

> Also, please, define which value should be used for the first port 0 or 1. I'd vote
> for 0.

If we need to define the affinity numbering,
I prefer to use 1 for the first port and reserve 0, so it can keep the same value as the Tx side (the second patch introduces tx_phy_affinity).





Thread overview: 12+ messages
     [not found] <http://patches.dpdk.org/project/dpdk/cover/20221221102934.13822-1-jiaweiw@nvidia.com/>
2023-01-30 17:00 ` [PATCH v2 0/2] add new PHY affinity in the flow item and Tx queue API Jiawei Wang
2023-01-30 17:00   ` [PATCH v2 1/2] ethdev: add PHY affinity match item Jiawei Wang
2023-01-31 14:36     ` Ori Kam
2023-02-01  8:50     ` Andrew Rybchenko
2023-02-01 14:59       ` Jiawei(Jonny) Wang [this message]
2023-01-30 17:00   ` [PATCH v2 2/2] ethdev: introduce the PHY affinity field in Tx queue API Jiawei Wang
2023-01-31 17:26     ` Thomas Monjalon
2023-02-01  9:45       ` Jiawei(Jonny) Wang
2023-02-01  9:05     ` Andrew Rybchenko
2023-02-01 15:50       ` Jiawei(Jonny) Wang
2023-02-02  9:28         ` Andrew Rybchenko
2023-02-02 14:43           ` Thomas Monjalon
