DPDK patches and discussions
From: Thomas Monjalon <thomas@monjalon.net>
To: Alexander Kozyrev <akozyrev@nvidia.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
	Slava Ovsiienko <viacheslavo@nvidia.com>,
	Ori Kam <orika@nvidia.com>,
	"ferruh.yigit@intel.com" <ferruh.yigit@intel.com>,
	"andrew.rybchenko@oktetlabs.ru" <andrew.rybchenko@oktetlabs.ru>,
	"ajit.khaparde@broadcom.com" <ajit.khaparde@broadcom.com>,
	"jerinj@marvell.com" <jerinj@marvell.com>
Subject: Re: [dpdk-dev] [RFC] ethdev: introduce copy_field rte flow action
Date: Thu, 07 Jan 2021 16:17:52 +0100
Message-ID: <4077045.mHWWFMTU2l@thomas>
In-Reply-To: <BN7PR12MB27070C3068BAFBA3160C7490AFAF0@BN7PR12MB2707.namprd12.prod.outlook.com>

07/01/2021 16:10, Alexander Kozyrev:
> > > > Thursday, January 7, 2021 10:07, Thomas Monjalon <thomas@monjalon.net>
> > > > > The RTE Flow API lacks the ability to save an arbitrary header field
> > > > > in order to use it later for advanced packet manipulations. Examples
> > > > > include using the VxLAN ID after the packet is decapsulated, storing
> > > > > this ID inside the packet payload itself, or swapping arbitrary inner
> > > > > and outer packet fields.
> > > > >
> > > > > The idea is to allow a copy of a specified number of bits from any
> > > > > packet header field into another header field:
> > > > > RTE_FLOW_ACTION_TYPE_COPY_FIELD with the structure defined below.
> > > > >
> > > > > struct rte_flow_action_copy_field {
> > > > > 	struct rte_flow_action_copy_data dest;
> > > > > 	struct rte_flow_action_copy_data src;
> > > > > 	uint16_t width;
> > > > > };
> > > > >
> > > > > An arbitrary header field (as well as mark, metadata or tag values)
> > > > > can be used as both the source and the destination field. This way we
> > > > > can save an arbitrary header field by copying its value to a
> > > > > tag/mark/metadata, or copy it into another header field directly. A
> > > > > tag/mark/metadata can also be used as the value to be stored in an
> > > > > arbitrary packet header field.
> > > > >
> > > > > struct rte_flow_action_copy_data {
> > > > > 	enum rte_flow_field_id field;
> > > > > 	uint16_t index;
> > > > > 	uint16_t offset;
> > > > > };
> > > > >
> > > > > The rte_flow_field_id specifies the particular packet field (or
> > > > > tag/mark/metadata) to be used as a copy source or destination.
> > > > > The index gives access to inner packet headers or elements in the tags
> > > > > array. The offset allows copying a packet field value into the payload.
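
For illustration, a minimal sketch of how these structures might be filled
in to save a VxLAN ID into a tag (the enum names RTE_FLOW_FIELD_VXLAN_VNI
and RTE_FLOW_FIELD_TAG are assumptions; the rte_flow_field_id values are
not shown in this RFC):

	/* Hypothetical sketch: copy the 24-bit VxLAN VNI into TAG[0].
	 * Only the struct layout comes from the RFC quoted above. */
	struct rte_flow_action_copy_field conf = {
		.dest = {
			.field = RTE_FLOW_FIELD_TAG, /* assumed enum name */
			.index = 0,  /* element 0 of the TAG array */
			.offset = 0,
		},
		.src = {
			.field = RTE_FLOW_FIELD_VXLAN_VNI, /* assumed enum name */
			.index = 0,  /* outermost VxLAN header */
			.offset = 0, /* start at bit 0 of the VNI */
		},
		.width = 24, /* the VNI is 24 bits wide */
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_COPY_FIELD, .conf = &conf },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
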
> > > >
> > > > So index is in reality the layer? How is it numbered exactly?
> > >
> > > It is a layer for packet fields: inner headers get a higher index number.
> > > But it is also an index into the TAG array, which is where the name comes from.
> > 
> > Sorry, it is not obvious.
> > Please describe the exact numbering in tunnel and VLAN cases.
> > 
> > > > What is the field id if an offset is given?
> > >
> > > The field ID stays the same; you can specify a small offset to copy just
> > > a few bits from the entire packet field, or a big offset to move to a
> > > completely different area.
> > 
> > I don't understand what an offset is, then.
> > Isn't it the byte or bit where the copy starts?
> > Do you handle sizes smaller than a byte?
> 
> It is a bit offset; you can copy 20 bits out of the 32 bits of an IPv4 address, for example.

Now I'm confused.
You mean rte_flow_action_copy_data.offset is a bit offset?
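
For concreteness, a sketch of that bit-offset reading: selecting 20 of the
32 bits of an IPv4 destination address (RTE_FLOW_FIELD_IPV4_DST is an
assumed enum name, not defined in this RFC):

	/* Hypothetical sketch: treat .offset as a bit offset within the
	 * selected field, so skipping the first 12 bits leaves the
	 * remaining 20 bits of the 32-bit address to be copied. */
	struct rte_flow_action_copy_data src = {
		.field = RTE_FLOW_FIELD_IPV4_DST, /* assumed enum name */
		.index = 0,   /* outer IPv4 header */
		.offset = 12, /* skip the first 12 bits of the address */
	};
	/* paired with .width = 20 in the enclosing copy_field action */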

> > > > Can we say that a field id can always be replaced by an offset?
> > >
> > > Not really. You can use an offset to jump around packet fields for sure,
> > > but it is going to be hard and cumbersome to calculate all the offsets
> > > for that. A field ID is much more convenient.
> > 
> > I think it depends for who.
> > For some use cases, it may be easier to pass an offset.
> > For some drivers, it may be more efficient to directly manage offsets.
> 
> It is possible with this RFC; a driver can choose what to use: the field ID and/or the offset.

Can we set the field and index to 0, and use only the offset?
Then is it a byte offset from the beginning of mbuf.data?


Thread overview: 16+ messages
2020-12-18  1:31 Alexander Kozyrev
2021-01-05 22:12 ` Alexander Kozyrev
2021-01-05 22:18   ` Thomas Monjalon
2021-01-05 22:16 ` Thomas Monjalon
2021-01-07 14:17   ` Alexander Kozyrev
2021-01-07 15:06     ` Thomas Monjalon
2021-01-07 15:10       ` Alexander Kozyrev
2021-01-07 15:17         ` Thomas Monjalon [this message]
2021-01-07 15:22           ` Alexander Kozyrev
2021-01-07 16:54             ` Thomas Monjalon
2021-01-07 16:57               ` Alexander Kozyrev
2021-01-07 17:05                 ` Thomas Monjalon
2021-01-07 20:14                   ` Alexander Kozyrev
2021-01-07 20:21                     ` Thomas Monjalon
2021-01-08 12:16                   ` Slava Ovsiienko
2021-01-10  6:50                     ` Ori Kam
