* [dpdk-dev] [PATCH] ethdev: add packet integrity checks
@ 2021-04-05 18:04 Ori Kam
2021-04-06 7:39 ` Jerin Jacob
` (2 more replies)
0 siblings, 3 replies; 68+ messages in thread
From: Ori Kam @ 2021-04-05 18:04 UTC (permalink / raw)
To: ajit.khaparde, andrew.rybchenko, ferruh.yigit, thomas
Cc: orika, dev, jerinj, olivier.matz, viacheslavo
Currently, DPDK application can offload the checksum check,
and report it in the mbuf.
However, as more and more applications are offloading some or all
logic and action to the HW, there is a need to check the packet
integrity so the right decision can be taken.
The application logic can be positive meaning if the packet is
valid jump / do actions, or negative if packet is not valid
jump to SW / do actions (like drop) a, and add default flow
(match all in low priority) that will direct the miss packet
to the miss path.
Since currenlty rte_flow works in positive way the assumtion is
that the postive way will be the common way in this case also.
When thinking what is the best API to implement such feature,
we need to considure the following (in no specific order):
1. API breakage.
2. Simplicity.
3. Performance.
4. HW capabilities.
5. rte_flow limitation.
6. Flexability.
First option: Add integrity flags to each of the items.
For example add checksum_ok to ipv4 item.
Pros:
1. No new rte_flow item.
2. Simple in the way that on each item the app can see
what checks are available.
Cons:
1. API breakage.
2. increase number of flows, since app can't add global rule and
must have dedicated flow for each of the flow combinations, for example
matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
result in 5 flows.
Second option: dedicated item
Pros:
1. No API breakage, and there will be no for some time due to having
extra space. (by using bits)
2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
IPv6.
3. Simplicity application can just look at one place to see all possible
checks.
4. Allow future support for more tests.
Cons:
1. New item, that holds number of fields from different items.
For starter the following bits are suggested:
1. packet_ok - means that all HW checks depending on packet layer have
passed. This may mean that in some HW such flow should be splited to
number of flows or fail.
2. l2_ok - all check flor layer 2 have passed.
3. l3_ok - all check flor layer 2 have passed. If packet doens't have
l3 layer this check shoudl fail.
4. l4_ok - all check flor layer 2 have passed. If packet doesn't
have l4 layer this check should fail.
5. l2_crc_ok - the layer 2 crc is O.K. it is possible that the crc will
be O.K. but the l3_ok will be 0. it is not possible that l2_crc_ok will
be 0 and the l3_ok will be 0.
6. ipv4_csum_ok - IPv4 checksum is O.K.
7. l4_csum_ok - layer 4 checksum is O.K.
8. l3_len_OK - check that the reported layer 3 len is smaller than the
packet len.
Example of usage:
1. check packets from all possible layers for integrity.
flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
2. Check only packet with layer 4 (UDP / TCP)
flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
Signed-off-by: Ori Kam <orika@nvidia.com>
---
doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++++++++
lib/librte_ethdev/rte_flow.h | 46 ++++++++++++++++++++++++++++++++++++++
2 files changed, 65 insertions(+)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index aec2ba1..58b116e 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1398,6 +1398,25 @@ Matches a eCPRI header.
- ``hdr``: eCPRI header definition (``rte_ecpri.h``).
- Default ``mask`` matches nothing, for all eCPRI messages.
+Item: ``PACKET_INTEGRITY_CHECKS``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Matches packet integrity.
+
+- ``level``: the encapsulation level that should be checked. level 0 means the
+ default PMD mode (Can be inner most / outermost). value of 1 means outermost
+ and higher value means inner header. See also RSS level.
+- ``packet_ok``: All HW packet integrity checks have passed based on the max
+ layer of the packet.
+ layer of the packet.
+- ``l2_ok``: all layer 2 HW integrity checks passed.
+- ``l3_ok``: all layer 3 HW integrity checks passed.
+- ``l4_ok``: all layer 3 HW integrity checks passed.
+- ``l2_crc_ok``: layer 2 crc check passed.
+- ``ipv4_csum_ok``: ipv4 checksum check passed.
+- ``l4_csum_ok``: layer 4 checksum check passed.
+- ``l3_len_ok``: the layer 3 len is smaller than the packet len.
+
Actions
~~~~~~~
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index 6cc5713..f6888a1 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -551,6 +551,15 @@ enum rte_flow_item_type {
* See struct rte_flow_item_geneve_opt
*/
RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
+
+ /**
+ * [META]
+ *
+ * Matches on packet integrity.
+ *
+ * See struct rte_flow_item_packet_integrity_checks.
+ */
+ RTE_FLOW_ITEM_TYPE_PACKET_INTEGRITY_CHECKS,
};
/**
@@ -1685,6 +1694,43 @@ struct rte_flow_item_geneve_opt {
};
#endif
+struct rte_flow_item_packet_integrity_checks {
+ uint32_t level;
+ /**< Packet encapsulation level the item should apply to.
+ * @see rte_flow_action_rss
+ */
+RTE_STD_C11
+ union {
+ struct {
+ uint64_t packet_ok:1;
+ /** The packet is valid after passing all HW checks. */
+ uint64_t l2_ok:1;
+ /**< L2 layer is valid after passing all HW checks. */
+ uint64_t l3_ok:1;
+ /**< L3 layer is valid after passing all HW checks. */
+ uint64_t l4_ok:1;
+ /**< L4 layer is valid after passing all HW checks. */
+ uint64_t l2_crc_ok:1;
+ /**< L2 layer checksum is valid. */
+ uint64_t ipv4_csum_ok:1;
+ /**< L3 layer checksum is valid. */
+ uint64_t l4_csum_ok:1;
+ /**< L4 layer checksum is valid. */
+ uint64_t l3_len_ok:1;
+ /**< The l3 len is smaller than the packet len. */
+ uint64_t reserved:56;
+ };
+ uint64_t value;
+ };
+};
+
+#ifndef __cplusplus
+static const struct rte_flow_item_sanity_checks
+ rte_flow_item_sanity_checks_mask = {
+ .value = 0,
+};
+#endif
+
/**
* Matching pattern item definition.
*
--
1.8.3.1
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH] ethdev: add packet integrity checks
2021-04-05 18:04 [dpdk-dev] [PATCH] ethdev: add packet integrity checks Ori Kam
@ 2021-04-06 7:39 ` Jerin Jacob
2021-04-07 10:32 ` Ori Kam
2021-04-08 8:04 ` [dpdk-dev] [PATCH] ethdev: add packet integrity checks Andrew Rybchenko
2021-04-11 17:34 ` [dpdk-dev] [PATCH v2 0/2] " Gregory Etelson
2 siblings, 1 reply; 68+ messages in thread
From: Jerin Jacob @ 2021-04-06 7:39 UTC (permalink / raw)
To: Ori Kam
Cc: Ajit Khaparde, Andrew Rybchenko, Ferruh Yigit, Thomas Monjalon,
dpdk-dev, Jerin Jacob, Olivier Matz, Viacheslav Ovsiienko
On Mon, Apr 5, 2021 at 11:35 PM Ori Kam <orika@nvidia.com> wrote:
>
> Currently, DPDK application can offload the checksum check,
> and report it in the mbuf.
>
> However, as more and more applications are offloading some or all
> logic and action to the HW, there is a need to check the packet
> integrity so the right decision can be taken.
>
> The application logic can be positive meaning if the packet is
> valid jump / do actions, or negative if packet is not valid
> jump to SW / do actions (like drop) a, and add default flow
> (match all in low priority) that will direct the miss packet
> to the miss path.
>
> Since currenlty rte_flow works in positive way the assumtion is
> that the postive way will be the common way in this case also.
>
> When thinking what is the best API to implement such feature,
> we need to considure the following (in no specific order):
> 1. API breakage.
> 2. Simplicity.
> 3. Performance.
> 4. HW capabilities.
> 5. rte_flow limitation.
> 6. Flexability.
Alteast in Marvell HW integrity checks are functions of the Ethdev Rx
queue attribute.
Not sure about other Vendor HW.
>
> First option: Add integrity flags to each of the items.
> For example add checksum_ok to ipv4 item.
>
> Pros:
> 1. No new rte_flow item.
> 2. Simple in the way that on each item the app can see
> what checks are available.
>
> Cons:
> 1. API breakage.
> 2. increase number of flows, since app can't add global rule and
> must have dedicated flow for each of the flow combinations, for example
> matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
> result in 5 flows.
>
> Second option: dedicated item
>
> Pros:
> 1. No API breakage, and there will be no for some time due to having
> extra space. (by using bits)
> 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
> IPv6.
> 3. Simplicity application can just look at one place to see all possible
> checks.
> 4. Allow future support for more tests.
>
> Cons:
> 1. New item, that holds number of fields from different items.
>
> For starter the following bits are suggested:
> 1. packet_ok - means that all HW checks depending on packet layer have
> passed. This may mean that in some HW such flow should be splited to
> number of flows or fail.
> 2. l2_ok - all check flor layer 2 have passed.
> 3. l3_ok - all check flor layer 2 have passed. If packet doens't have
> l3 layer this check shoudl fail.
> 4. l4_ok - all check flor layer 2 have passed. If packet doesn't
> have l4 layer this check should fail.
> 5. l2_crc_ok - the layer 2 crc is O.K. it is possible that the crc will
> be O.K. but the l3_ok will be 0. it is not possible that l2_crc_ok will
> be 0 and the l3_ok will be 0.
> 6. ipv4_csum_ok - IPv4 checksum is O.K.
> 7. l4_csum_ok - layer 4 checksum is O.K.
> 8. l3_len_OK - check that the reported layer 3 len is smaller than the
> packet len.
>
> Example of usage:
> 1. check packets from all possible layers for integrity.
> flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
>
> 2. Check only packet with layer 4 (UDP / TCP)
> flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
>
> Signed-off-by: Ori Kam <orika@nvidia.com>
> ---
> doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++++++++
> lib/librte_ethdev/rte_flow.h | 46 ++++++++++++++++++++++++++++++++++++++
> 2 files changed, 65 insertions(+)
>
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index aec2ba1..58b116e 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -1398,6 +1398,25 @@ Matches a eCPRI header.
> - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
> - Default ``mask`` matches nothing, for all eCPRI messages.
>
> +Item: ``PACKET_INTEGRITY_CHECKS``
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +Matches packet integrity.
> +
> +- ``level``: the encapsulation level that should be checked. level 0 means the
> + default PMD mode (Can be inner most / outermost). value of 1 means outermost
> + and higher value means inner header. See also RSS level.
> +- ``packet_ok``: All HW packet integrity checks have passed based on the max
> + layer of the packet.
> + layer of the packet.
> +- ``l2_ok``: all layer 2 HW integrity checks passed.
> +- ``l3_ok``: all layer 3 HW integrity checks passed.
> +- ``l4_ok``: all layer 3 HW integrity checks passed.
> +- ``l2_crc_ok``: layer 2 crc check passed.
> +- ``ipv4_csum_ok``: ipv4 checksum check passed.
> +- ``l4_csum_ok``: layer 4 checksum check passed.
> +- ``l3_len_ok``: the layer 3 len is smaller than the packet len.
> +
> Actions
> ~~~~~~~
>
> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> index 6cc5713..f6888a1 100644
> --- a/lib/librte_ethdev/rte_flow.h
> +++ b/lib/librte_ethdev/rte_flow.h
> @@ -551,6 +551,15 @@ enum rte_flow_item_type {
> * See struct rte_flow_item_geneve_opt
> */
> RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> +
> + /**
> + * [META]
> + *
> + * Matches on packet integrity.
> + *
> + * See struct rte_flow_item_packet_integrity_checks.
> + */
> + RTE_FLOW_ITEM_TYPE_PACKET_INTEGRITY_CHECKS,
> };
>
> /**
> @@ -1685,6 +1694,43 @@ struct rte_flow_item_geneve_opt {
> };
> #endif
>
> +struct rte_flow_item_packet_integrity_checks {
> + uint32_t level;
> + /**< Packet encapsulation level the item should apply to.
> + * @see rte_flow_action_rss
> + */
> +RTE_STD_C11
> + union {
> + struct {
> + uint64_t packet_ok:1;
> + /** The packet is valid after passing all HW checks. */
> + uint64_t l2_ok:1;
> + /**< L2 layer is valid after passing all HW checks. */
> + uint64_t l3_ok:1;
> + /**< L3 layer is valid after passing all HW checks. */
> + uint64_t l4_ok:1;
> + /**< L4 layer is valid after passing all HW checks. */
> + uint64_t l2_crc_ok:1;
> + /**< L2 layer checksum is valid. */
> + uint64_t ipv4_csum_ok:1;
> + /**< L3 layer checksum is valid. */
> + uint64_t l4_csum_ok:1;
> + /**< L4 layer checksum is valid. */
> + uint64_t l3_len_ok:1;
> + /**< The l3 len is smaller than the packet len. */
> + uint64_t reserved:56;
> + };
> + uint64_t value;
> + };
> +};
> +
> +#ifndef __cplusplus
> +static const struct rte_flow_item_sanity_checks
> + rte_flow_item_sanity_checks_mask = {
> + .value = 0,
> +};
> +#endif
> +
> /**
> * Matching pattern item definition.
> *
> --
> 1.8.3.1
>
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH] ethdev: add packet integrity checks
2021-04-06 7:39 ` Jerin Jacob
@ 2021-04-07 10:32 ` Ori Kam
2021-04-07 11:01 ` Jerin Jacob
` (6 more replies)
0 siblings, 7 replies; 68+ messages in thread
From: Ori Kam @ 2021-04-07 10:32 UTC (permalink / raw)
To: Jerin Jacob
Cc: Ajit Khaparde, Andrew Rybchenko, Ferruh Yigit,
NBU-Contact-Thomas Monjalon, dpdk-dev, Jerin Jacob, Olivier Matz,
Slava Ovsiienko
Hi Jerin,
> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Sent: Tuesday, April 6, 2021 10:40 AM
> To: Ori Kam <orika@nvidia.com>
> Subject: Re: [dpdk-dev] [PATCH] ethdev: add packet integrity checks
>
> On Mon, Apr 5, 2021 at 11:35 PM Ori Kam <orika@nvidia.com> wrote:
> >
> > Currently, DPDK application can offload the checksum check,
> > and report it in the mbuf.
> >
> > However, as more and more applications are offloading some or all
> > logic and action to the HW, there is a need to check the packet
> > integrity so the right decision can be taken.
> >
> > The application logic can be positive meaning if the packet is
> > valid jump / do actions, or negative if packet is not valid
> > jump to SW / do actions (like drop) a, and add default flow
> > (match all in low priority) that will direct the miss packet
> > to the miss path.
> >
> > Since currenlty rte_flow works in positive way the assumtion is
> > that the postive way will be the common way in this case also.
> >
> > When thinking what is the best API to implement such feature,
> > we need to considure the following (in no specific order):
> > 1. API breakage.
> > 2. Simplicity.
> > 3. Performance.
> > 4. HW capabilities.
> > 5. rte_flow limitation.
> > 6. Flexability.
>
>
> Alteast in Marvell HW integrity checks are functions of the Ethdev Rx
> queue attribute.
> Not sure about other Vendor HW.
I'm not sure what do you mean?
This is the idea of the patch, to allow application to route the packet
before getting to the Rx,
In any case all items support is dependent on HW capabilities.
>
>
> >
> > First option: Add integrity flags to each of the items.
> > For example add checksum_ok to ipv4 item.
> >
> > Pros:
> > 1. No new rte_flow item.
> > 2. Simple in the way that on each item the app can see
> > what checks are available.
> >
> > Cons:
> > 1. API breakage.
> > 2. increase number of flows, since app can't add global rule and
> > must have dedicated flow for each of the flow combinations, for example
> > matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
> > result in 5 flows.
> >
> > Second option: dedicated item
> >
> > Pros:
> > 1. No API breakage, and there will be no for some time due to having
> > extra space. (by using bits)
> > 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
> > IPv6.
> > 3. Simplicity application can just look at one place to see all possible
> > checks.
> > 4. Allow future support for more tests.
> >
> > Cons:
> > 1. New item, that holds number of fields from different items.
> >
> > For starter the following bits are suggested:
> > 1. packet_ok - means that all HW checks depending on packet layer have
> > passed. This may mean that in some HW such flow should be splited to
> > number of flows or fail.
> > 2. l2_ok - all check flor layer 2 have passed.
> > 3. l3_ok - all check flor layer 2 have passed. If packet doens't have
> > l3 layer this check shoudl fail.
> > 4. l4_ok - all check flor layer 2 have passed. If packet doesn't
> > have l4 layer this check should fail.
> > 5. l2_crc_ok - the layer 2 crc is O.K. it is possible that the crc will
> > be O.K. but the l3_ok will be 0. it is not possible that l2_crc_ok will
> > be 0 and the l3_ok will be 0.
> > 6. ipv4_csum_ok - IPv4 checksum is O.K.
> > 7. l4_csum_ok - layer 4 checksum is O.K.
> > 8. l3_len_OK - check that the reported layer 3 len is smaller than the
> > packet len.
> >
> > Example of usage:
> > 1. check packets from all possible layers for integrity.
> > flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
> >
> > 2. Check only packet with layer 4 (UDP / TCP)
> > flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
> >
> > Signed-off-by: Ori Kam <orika@nvidia.com>
> > ---
> > doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++++++++
> > lib/librte_ethdev/rte_flow.h | 46
> ++++++++++++++++++++++++++++++++++++++
> > 2 files changed, 65 insertions(+)
> >
> > diff --git a/doc/guides/prog_guide/rte_flow.rst
> b/doc/guides/prog_guide/rte_flow.rst
> > index aec2ba1..58b116e 100644
> > --- a/doc/guides/prog_guide/rte_flow.rst
> > +++ b/doc/guides/prog_guide/rte_flow.rst
> > @@ -1398,6 +1398,25 @@ Matches a eCPRI header.
> > - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
> > - Default ``mask`` matches nothing, for all eCPRI messages.
> >
> > +Item: ``PACKET_INTEGRITY_CHECKS``
> > +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > +
> > +Matches packet integrity.
> > +
> > +- ``level``: the encapsulation level that should be checked. level 0 means the
> > + default PMD mode (Can be inner most / outermost). value of 1 means
> outermost
> > + and higher value means inner header. See also RSS level.
> > +- ``packet_ok``: All HW packet integrity checks have passed based on the
> max
> > + layer of the packet.
> > + layer of the packet.
> > +- ``l2_ok``: all layer 2 HW integrity checks passed.
> > +- ``l3_ok``: all layer 3 HW integrity checks passed.
> > +- ``l4_ok``: all layer 3 HW integrity checks passed.
> > +- ``l2_crc_ok``: layer 2 crc check passed.
> > +- ``ipv4_csum_ok``: ipv4 checksum check passed.
> > +- ``l4_csum_ok``: layer 4 checksum check passed.
> > +- ``l3_len_ok``: the layer 3 len is smaller than the packet len.
> > +
> > Actions
> > ~~~~~~~
> >
> > diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> > index 6cc5713..f6888a1 100644
> > --- a/lib/librte_ethdev/rte_flow.h
> > +++ b/lib/librte_ethdev/rte_flow.h
> > @@ -551,6 +551,15 @@ enum rte_flow_item_type {
> > * See struct rte_flow_item_geneve_opt
> > */
> > RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> > +
> > + /**
> > + * [META]
> > + *
> > + * Matches on packet integrity.
> > + *
> > + * See struct rte_flow_item_packet_integrity_checks.
> > + */
> > + RTE_FLOW_ITEM_TYPE_PACKET_INTEGRITY_CHECKS,
> > };
> >
> > /**
> > @@ -1685,6 +1694,43 @@ struct rte_flow_item_geneve_opt {
> > };
> > #endif
> >
> > +struct rte_flow_item_packet_integrity_checks {
> > + uint32_t level;
> > + /**< Packet encapsulation level the item should apply to.
> > + * @see rte_flow_action_rss
> > + */
> > +RTE_STD_C11
> > + union {
> > + struct {
> > + uint64_t packet_ok:1;
> > + /** The packet is valid after passing all HW checks. */
> > + uint64_t l2_ok:1;
> > + /**< L2 layer is valid after passing all HW checks. */
> > + uint64_t l3_ok:1;
> > + /**< L3 layer is valid after passing all HW checks. */
> > + uint64_t l4_ok:1;
> > + /**< L4 layer is valid after passing all HW checks. */
> > + uint64_t l2_crc_ok:1;
> > + /**< L2 layer checksum is valid. */
> > + uint64_t ipv4_csum_ok:1;
> > + /**< L3 layer checksum is valid. */
> > + uint64_t l4_csum_ok:1;
> > + /**< L4 layer checksum is valid. */
> > + uint64_t l3_len_ok:1;
> > + /**< The l3 len is smaller than the packet len. */
> > + uint64_t reserved:56;
> > + };
> > + uint64_t value;
> > + };
> > +};
> > +
> > +#ifndef __cplusplus
> > +static const struct rte_flow_item_sanity_checks
> > + rte_flow_item_sanity_checks_mask = {
> > + .value = 0,
> > +};
> > +#endif
> > +
> > /**
> > * Matching pattern item definition.
> > *
> > --
> > 1.8.3.1
> >
Best,
Ori
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH] ethdev: add packet integrity checks
2021-04-07 10:32 ` Ori Kam
@ 2021-04-07 11:01 ` Jerin Jacob
2021-04-07 22:15 ` Ori Kam
2021-04-13 15:16 ` [dpdk-dev] [PATCH v3 0/2] " Gregory Etelson
` (5 subsequent siblings)
6 siblings, 1 reply; 68+ messages in thread
From: Jerin Jacob @ 2021-04-07 11:01 UTC (permalink / raw)
To: Ori Kam
Cc: Ajit Khaparde, Andrew Rybchenko, Ferruh Yigit,
NBU-Contact-Thomas Monjalon, dpdk-dev, Jerin Jacob, Olivier Matz,
Slava Ovsiienko
On Wed, Apr 7, 2021 at 4:02 PM Ori Kam <orika@nvidia.com> wrote:
>
> Hi Jerin,
Hi Ori,
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Sent: Tuesday, April 6, 2021 10:40 AM
> > To: Ori Kam <orika@nvidia.com>
> > Subject: Re: [dpdk-dev] [PATCH] ethdev: add packet integrity checks
> >
> > On Mon, Apr 5, 2021 at 11:35 PM Ori Kam <orika@nvidia.com> wrote:
> > >
> > > Currently, DPDK application can offload the checksum check,
> > > and report it in the mbuf.
> > >
> > > However, as more and more applications are offloading some or all
> > > logic and action to the HW, there is a need to check the packet
> > > integrity so the right decision can be taken.
> > >
> > > The application logic can be positive meaning if the packet is
> > > valid jump / do actions, or negative if packet is not valid
> > > jump to SW / do actions (like drop) a, and add default flow
> > > (match all in low priority) that will direct the miss packet
> > > to the miss path.
> > >
> > > Since currenlty rte_flow works in positive way the assumtion is
> > > that the postive way will be the common way in this case also.
> > >
> > > When thinking what is the best API to implement such feature,
> > > we need to considure the following (in no specific order):
> > > 1. API breakage.
> > > 2. Simplicity.
> > > 3. Performance.
> > > 4. HW capabilities.
> > > 5. rte_flow limitation.
> > > 6. Flexability.
> >
> >
> > Alteast in Marvell HW integrity checks are functions of the Ethdev Rx
> > queue attribute.
> > Not sure about other Vendor HW.
>
> I'm not sure what do you mean?
What I meant is, What will be the preferred way to configure the feature?
ie. Is it as ethdev Rx offload or rte_flow?
I think, in order to decide that, we need to know, how most of the
other HW express this feature?
> This is the idea of the patch, to allow application to route the packet
> before getting to the Rx,
> In any case all items support is dependent on HW capabilities.
>
>
> >
> >
> > >
> > > First option: Add integrity flags to each of the items.
> > > For example add checksum_ok to ipv4 item.
> > >
> > > Pros:
> > > 1. No new rte_flow item.
> > > 2. Simple in the way that on each item the app can see
> > > what checks are available.
> > >
> > > Cons:
> > > 1. API breakage.
> > > 2. increase number of flows, since app can't add global rule and
> > > must have dedicated flow for each of the flow combinations, for example
> > > matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
> > > result in 5 flows.
> > >
> > > Second option: dedicated item
> > >
> > > Pros:
> > > 1. No API breakage, and there will be no for some time due to having
> > > extra space. (by using bits)
> > > 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
> > > IPv6.
> > > 3. Simplicity application can just look at one place to see all possible
> > > checks.
> > > 4. Allow future support for more tests.
> > >
> > > Cons:
> > > 1. New item, that holds number of fields from different items.
> > >
> > > For starter the following bits are suggested:
> > > 1. packet_ok - means that all HW checks depending on packet layer have
> > > passed. This may mean that in some HW such flow should be splited to
> > > number of flows or fail.
> > > 2. l2_ok - all check flor layer 2 have passed.
> > > 3. l3_ok - all check flor layer 2 have passed. If packet doens't have
> > > l3 layer this check shoudl fail.
> > > 4. l4_ok - all check flor layer 2 have passed. If packet doesn't
> > > have l4 layer this check should fail.
> > > 5. l2_crc_ok - the layer 2 crc is O.K. it is possible that the crc will
> > > be O.K. but the l3_ok will be 0. it is not possible that l2_crc_ok will
> > > be 0 and the l3_ok will be 0.
> > > 6. ipv4_csum_ok - IPv4 checksum is O.K.
> > > 7. l4_csum_ok - layer 4 checksum is O.K.
> > > 8. l3_len_OK - check that the reported layer 3 len is smaller than the
> > > packet len.
> > >
> > > Example of usage:
> > > 1. check packets from all possible layers for integrity.
> > > flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
> > >
> > > 2. Check only packet with layer 4 (UDP / TCP)
> > > flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
> > >
> > > Signed-off-by: Ori Kam <orika@nvidia.com>
> > > ---
> > > doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++++++++
> > > lib/librte_ethdev/rte_flow.h | 46
> > ++++++++++++++++++++++++++++++++++++++
> > > 2 files changed, 65 insertions(+)
> > >
> > > diff --git a/doc/guides/prog_guide/rte_flow.rst
> > b/doc/guides/prog_guide/rte_flow.rst
> > > index aec2ba1..58b116e 100644
> > > --- a/doc/guides/prog_guide/rte_flow.rst
> > > +++ b/doc/guides/prog_guide/rte_flow.rst
> > > @@ -1398,6 +1398,25 @@ Matches a eCPRI header.
> > > - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
> > > - Default ``mask`` matches nothing, for all eCPRI messages.
> > >
> > > +Item: ``PACKET_INTEGRITY_CHECKS``
> > > +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > > +
> > > +Matches packet integrity.
> > > +
> > > +- ``level``: the encapsulation level that should be checked. level 0 means the
> > > + default PMD mode (Can be inner most / outermost). value of 1 means
> > outermost
> > > + and higher value means inner header. See also RSS level.
> > > +- ``packet_ok``: All HW packet integrity checks have passed based on the
> > max
> > > + layer of the packet.
> > > + layer of the packet.
> > > +- ``l2_ok``: all layer 2 HW integrity checks passed.
> > > +- ``l3_ok``: all layer 3 HW integrity checks passed.
> > > +- ``l4_ok``: all layer 3 HW integrity checks passed.
> > > +- ``l2_crc_ok``: layer 2 crc check passed.
> > > +- ``ipv4_csum_ok``: ipv4 checksum check passed.
> > > +- ``l4_csum_ok``: layer 4 checksum check passed.
> > > +- ``l3_len_ok``: the layer 3 len is smaller than the packet len.
> > > +
> > > Actions
> > > ~~~~~~~
> > >
> > > diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> > > index 6cc5713..f6888a1 100644
> > > --- a/lib/librte_ethdev/rte_flow.h
> > > +++ b/lib/librte_ethdev/rte_flow.h
> > > @@ -551,6 +551,15 @@ enum rte_flow_item_type {
> > > * See struct rte_flow_item_geneve_opt
> > > */
> > > RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> > > +
> > > + /**
> > > + * [META]
> > > + *
> > > + * Matches on packet integrity.
> > > + *
> > > + * See struct rte_flow_item_packet_integrity_checks.
> > > + */
> > > + RTE_FLOW_ITEM_TYPE_PACKET_INTEGRITY_CHECKS,
> > > };
> > >
> > > /**
> > > @@ -1685,6 +1694,43 @@ struct rte_flow_item_geneve_opt {
> > > };
> > > #endif
> > >
> > > +struct rte_flow_item_packet_integrity_checks {
> > > + uint32_t level;
> > > + /**< Packet encapsulation level the item should apply to.
> > > + * @see rte_flow_action_rss
> > > + */
> > > +RTE_STD_C11
> > > + union {
> > > + struct {
> > > + uint64_t packet_ok:1;
> > > + /** The packet is valid after passing all HW checks. */
> > > + uint64_t l2_ok:1;
> > > + /**< L2 layer is valid after passing all HW checks. */
> > > + uint64_t l3_ok:1;
> > > + /**< L3 layer is valid after passing all HW checks. */
> > > + uint64_t l4_ok:1;
> > > + /**< L4 layer is valid after passing all HW checks. */
> > > + uint64_t l2_crc_ok:1;
> > > + /**< L2 layer checksum is valid. */
> > > + uint64_t ipv4_csum_ok:1;
> > > + /**< L3 layer checksum is valid. */
> > > + uint64_t l4_csum_ok:1;
> > > + /**< L4 layer checksum is valid. */
> > > + uint64_t l3_len_ok:1;
> > > + /**< The l3 len is smaller than the packet len. */
> > > + uint64_t reserved:56;
> > > + };
> > > + uint64_t value;
> > > + };
> > > +};
> > > +
> > > +#ifndef __cplusplus
> > > +static const struct rte_flow_item_sanity_checks
> > > + rte_flow_item_sanity_checks_mask = {
> > > + .value = 0,
> > > +};
> > > +#endif
> > > +
> > > /**
> > > * Matching pattern item definition.
> > > *
> > > --
> > > 1.8.3.1
> > >
>
> Best,
> Ori
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH] ethdev: add packet integrity checks
2021-04-07 11:01 ` Jerin Jacob
@ 2021-04-07 22:15 ` Ori Kam
2021-04-08 7:44 ` Jerin Jacob
0 siblings, 1 reply; 68+ messages in thread
From: Ori Kam @ 2021-04-07 22:15 UTC (permalink / raw)
To: Jerin Jacob
Cc: Ajit Khaparde, Andrew Rybchenko, Ferruh Yigit,
NBU-Contact-Thomas Monjalon, dpdk-dev, Jerin Jacob, Olivier Matz,
Slava Ovsiienko
Hi Jerin,
> -----Original Message-----
> From: Jerin Jacob <jerinjacobk@gmail.com>
> Subject: Re: [dpdk-dev] [PATCH] ethdev: add packet integrity checks
>
> On Wed, Apr 7, 2021 at 4:02 PM Ori Kam <orika@nvidia.com> wrote:
> >
> > Hi Jerin,
>
> Hi Ori,
>
>
> >
> > > -----Original Message-----
> > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > Sent: Tuesday, April 6, 2021 10:40 AM
> > > To: Ori Kam <orika@nvidia.com>
> > > Subject: Re: [dpdk-dev] [PATCH] ethdev: add packet integrity checks
> > >
> > > On Mon, Apr 5, 2021 at 11:35 PM Ori Kam <orika@nvidia.com> wrote:
> > > >
> > > > Currently, DPDK application can offload the checksum check,
> > > > and report it in the mbuf.
> > > >
> > > > However, as more and more applications are offloading some or all
> > > > logic and action to the HW, there is a need to check the packet
> > > > integrity so the right decision can be taken.
> > > >
> > > > The application logic can be positive meaning if the packet is
> > > > valid jump / do actions, or negative if packet is not valid
> > > > jump to SW / do actions (like drop) a, and add default flow
> > > > (match all in low priority) that will direct the miss packet
> > > > to the miss path.
> > > >
> > > > Since currenlty rte_flow works in positive way the assumtion is
> > > > that the postive way will be the common way in this case also.
> > > >
> > > > When thinking what is the best API to implement such feature,
> > > > we need to considure the following (in no specific order):
> > > > 1. API breakage.
> > > > 2. Simplicity.
> > > > 3. Performance.
> > > > 4. HW capabilities.
> > > > 5. rte_flow limitation.
> > > > 6. Flexability.
> > >
> > >
> > > Alteast in Marvell HW integrity checks are functions of the Ethdev Rx
> > > queue attribute.
> > > Not sure about other Vendor HW.
> >
> > I'm not sure what do you mean?
>
> What I meant is, What will be the preferred way to configure the feature?
> ie. Is it as ethdev Rx offload or rte_flow?
>
> I think, in order to decide that, we need to know, how most of the
> other HW express this feature?
>
As I see it both ways could be used,
Maybe even by the same app,
One flow is to notify the application when it sees the packet
(RX offload) and one is to use as an item to route the packet
when using rte_flow.
Maybe I'm missing something, on your suggestion how will
application route the packets? Or it will just receive them with flags
on the RX queue?
>
> > This is the idea of the patch, to allow application to route the packet
> > before getting to the Rx,
> > In any case all items support is dependent on HW capabilities.
>
>
>
>
Best,
Ori
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH] ethdev: add packet integrity checks
2021-04-07 22:15 ` Ori Kam
@ 2021-04-08 7:44 ` Jerin Jacob
2021-04-11 4:12 ` Ajit Khaparde
0 siblings, 1 reply; 68+ messages in thread
From: Jerin Jacob @ 2021-04-08 7:44 UTC (permalink / raw)
To: Ori Kam
Cc: Ajit Khaparde, Andrew Rybchenko, Ferruh Yigit,
NBU-Contact-Thomas Monjalon, dpdk-dev, Jerin Jacob, Olivier Matz,
Slava Ovsiienko
On Thu, Apr 8, 2021 at 3:45 AM Ori Kam <orika@nvidia.com> wrote:
>
> Hi Jerin,
>
> > -----Original Message-----
> > From: Jerin Jacob <jerinjacobk@gmail.com>
> > Subject: Re: [dpdk-dev] [PATCH] ethdev: add packet integrity checks
> >
> > On Wed, Apr 7, 2021 at 4:02 PM Ori Kam <orika@nvidia.com> wrote:
> > >
> > > Hi Jerin,
> >
> > Hi Ori,
> >
> >
> > >
> > > > -----Original Message-----
> > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > Sent: Tuesday, April 6, 2021 10:40 AM
> > > > To: Ori Kam <orika@nvidia.com>
> > > > Subject: Re: [dpdk-dev] [PATCH] ethdev: add packet integrity checks
> > > >
> > > > On Mon, Apr 5, 2021 at 11:35 PM Ori Kam <orika@nvidia.com> wrote:
> > > > >
> > > > > Currently, DPDK application can offload the checksum check,
> > > > > and report it in the mbuf.
> > > > >
> > > > > However, as more and more applications are offloading some or all
> > > > > logic and action to the HW, there is a need to check the packet
> > > > > integrity so the right decision can be taken.
> > > > >
> > > > > The application logic can be positive meaning if the packet is
> > > > > valid jump / do actions, or negative if packet is not valid
> > > > > jump to SW / do actions (like drop) a, and add default flow
> > > > > (match all in low priority) that will direct the miss packet
> > > > > to the miss path.
> > > > >
> > > > > Since currenlty rte_flow works in positive way the assumtion is
> > > > > that the postive way will be the common way in this case also.
> > > > >
> > > > > When thinking what is the best API to implement such feature,
> > > > > we need to considure the following (in no specific order):
> > > > > 1. API breakage.
> > > > > 2. Simplicity.
> > > > > 3. Performance.
> > > > > 4. HW capabilities.
> > > > > 5. rte_flow limitation.
> > > > > 6. Flexability.
> > > >
> > > >
> > > > Alteast in Marvell HW integrity checks are functions of the Ethdev Rx
> > > > queue attribute.
> > > > Not sure about other Vendor HW.
> > >
> > > I'm not sure what do you mean?
> >
> > What I meant is, What will be the preferred way to configure the feature?
> > ie. Is it as ethdev Rx offload or rte_flow?
> >
> > I think, in order to decide that, we need to know, how most of the
> > other HW express this feature?
> >
>
> As I see it both ways could be used,
> Maybe even by the same app,
>
> One flow is to notify the application when it sees the packet
> (RX offload) and one is to use as an item to route the packet
> when using rte_flow.
>
> Maybe I'm missing something, on your suggestion how will
> application route the packets? Or it will just receive them with flags
> on the RX queue?
Just receive them with flags on the Rx queue, in order to avoid
duplicating features
in multiple places.
>
>
>
> >
> > > This is the idea of the patch, to allow application to route the packet
> > > before getting to the Rx,
> > > In any case all items support is dependent on HW capabilities.
> >
> >
> >
> >
>
> Best,
> Ori
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH] ethdev: add packet integrity checks
2021-04-05 18:04 [dpdk-dev] [PATCH] ethdev: add packet integrity checks Ori Kam
2021-04-06 7:39 ` Jerin Jacob
@ 2021-04-08 8:04 ` Andrew Rybchenko
2021-04-08 11:39 ` Ori Kam
2021-04-11 17:34 ` [dpdk-dev] [PATCH v2 0/2] " Gregory Etelson
2 siblings, 1 reply; 68+ messages in thread
From: Andrew Rybchenko @ 2021-04-08 8:04 UTC (permalink / raw)
To: Ori Kam, ajit.khaparde, ferruh.yigit, thomas
Cc: dev, jerinj, olivier.matz, viacheslavo
On 4/5/21 9:04 PM, Ori Kam wrote:
> Currently, DPDK application can offload the checksum check,
> and report it in the mbuf.
>
> However, as more and more applications are offloading some or all
> logic and action to the HW, there is a need to check the packet
> integrity so the right decision can be taken.
>
> The application logic can be positive meaning if the packet is
> valid jump / do actions, or negative if packet is not valid
> jump to SW / do actions (like drop) a, and add default flow
> (match all in low priority) that will direct the miss packet
> to the miss path.
>
> Since currenlty rte_flow works in positive way the assumtion is
> that the postive way will be the common way in this case also.
>
> When thinking what is the best API to implement such feature,
> we need to considure the following (in no specific order):
> 1. API breakage.
First of all I disagree that "API breakage" is put as a top
priority. Design is a top priority, since it is a long term.
aPI breakage is just a short term inconvenient. Of course,
others may disagree, but that's my point of view.
> 2. Simplicity.
> 3. Performance.
> 4. HW capabilities.
> 5. rte_flow limitation.
> 6. Flexability.
>
> First option: Add integrity flags to each of the items.
> For example add checksum_ok to ipv4 item.
>
> Pros:
> 1. No new rte_flow item.
> 2. Simple in the way that on each item the app can see
> what checks are available.
3. Natively supports various tunnels without any extra
changes in a shared item for all layers.
>
> Cons:
> 1. API breakage.
> 2. increase number of flows, since app can't add global rule and
> must have dedicated flow for each of the flow combinations, for example
> matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
> result in 5 flows.
Could you expand it? Shouldn't HW offloaded flows with good
checksums go into dedicated queues where as bad packets go
via default path (i.e. no extra rules)?
>
> Second option: dedicated item
>
> Pros:
> 1. No API breakage, and there will be no for some time due to having
> extra space. (by using bits)
> 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
> IPv6.
It depends on how bad (or good0 packets are handled.
> 3. Simplicity application can just look at one place to see all possible
> checks.
It is a drawback from my point of view, since IPv4 checksum
check is out of IPv4 match item. I.e. analyzing IPv4 you should
take a look at 2 different flow items.
> 4. Allow future support for more tests.
It is the same for both solution since per-item solution
can keep reserved bits which may be used in the future.
>
> Cons:
> 1. New item, that holds number of fields from different items.
2. Not that nice for tunnels.
>
> For starter the following bits are suggested:
> 1. packet_ok - means that all HW checks depending on packet layer have
> passed. This may mean that in some HW such flow should be splited to
> number of flows or fail.
> 2. l2_ok - all check flor layer 2 have passed.
> 3. l3_ok - all check flor layer 2 have passed. If packet doens't have
> l3 layer this check shoudl fail.
> 4. l4_ok - all check flor layer 2 have passed. If packet doesn't
> have l4 layer this check should fail.
> 5. l2_crc_ok - the layer 2 crc is O.K. it is possible that the crc will
> be O.K. but the l3_ok will be 0. it is not possible that l2_crc_ok will
> be 0 and the l3_ok will be 0.
> 6. ipv4_csum_ok - IPv4 checksum is O.K.
> 7. l4_csum_ok - layer 4 checksum is O.K.
> 8. l3_len_OK - check that the reported layer 3 len is smaller than the
> packet len.
>
> Example of usage:
> 1. check packets from all possible layers for integrity.
> flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
>
> 2. Check only packet with layer 4 (UDP / TCP)
> flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
>
> Signed-off-by: Ori Kam <orika@nvidia.com>
> ---
> doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++++++++
> lib/librte_ethdev/rte_flow.h | 46 ++++++++++++++++++++++++++++++++++++++
> 2 files changed, 65 insertions(+)
>
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index aec2ba1..58b116e 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -1398,6 +1398,25 @@ Matches a eCPRI header.
> - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
> - Default ``mask`` matches nothing, for all eCPRI messages.
>
> +Item: ``PACKET_INTEGRITY_CHECKS``
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +Matches packet integrity.
> +
> +- ``level``: the encapsulation level that should be checked. level 0 means the
> + default PMD mode (Can be inner most / outermost). value of 1 means outermost
> + and higher value means inner header. See also RSS level.
> +- ``packet_ok``: All HW packet integrity checks have passed based on the max
> + layer of the packet.
> + layer of the packet.
> +- ``l2_ok``: all layer 2 HW integrity checks passed.
> +- ``l3_ok``: all layer 3 HW integrity checks passed.
> +- ``l4_ok``: all layer 3 HW integrity checks passed.
> +- ``l2_crc_ok``: layer 2 crc check passed.
> +- ``ipv4_csum_ok``: ipv4 checksum check passed.
> +- ``l4_csum_ok``: layer 4 checksum check passed.
> +- ``l3_len_ok``: the layer 3 len is smaller than the packet len.
> +
> Actions
> ~~~~~~~
>
> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> index 6cc5713..f6888a1 100644
> --- a/lib/librte_ethdev/rte_flow.h
> +++ b/lib/librte_ethdev/rte_flow.h
> @@ -551,6 +551,15 @@ enum rte_flow_item_type {
> * See struct rte_flow_item_geneve_opt
> */
> RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> +
> + /**
> + * [META]
> + *
> + * Matches on packet integrity.
> + *
> + * See struct rte_flow_item_packet_integrity_checks.
> + */
> + RTE_FLOW_ITEM_TYPE_PACKET_INTEGRITY_CHECKS,
> };
>
> /**
> @@ -1685,6 +1694,43 @@ struct rte_flow_item_geneve_opt {
> };
> #endif
>
> +struct rte_flow_item_packet_integrity_checks {
> + uint32_t level;
> + /**< Packet encapsulation level the item should apply to.
> + * @see rte_flow_action_rss
> + */
> +RTE_STD_C11
> + union {
> + struct {
> + uint64_t packet_ok:1;
> + /** The packet is valid after passing all HW checks. */
> + uint64_t l2_ok:1;
> + /**< L2 layer is valid after passing all HW checks. */
> + uint64_t l3_ok:1;
> + /**< L3 layer is valid after passing all HW checks. */
> + uint64_t l4_ok:1;
> + /**< L4 layer is valid after passing all HW checks. */
> + uint64_t l2_crc_ok:1;
> + /**< L2 layer checksum is valid. */
> + uint64_t ipv4_csum_ok:1;
> + /**< L3 layer checksum is valid. */
> + uint64_t l4_csum_ok:1;
> + /**< L4 layer checksum is valid. */
> + uint64_t l3_len_ok:1;
> + /**< The l3 len is smaller than the packet len. */
> + uint64_t reserved:56;
> + };
> + uint64_t value;
> + };
> +};
> +
> +#ifndef __cplusplus
> +static const struct rte_flow_item_sanity_checks
> + rte_flow_item_sanity_checks_mask = {
> + .value = 0,
> +};
> +#endif
> +
> /**
> * Matching pattern item definition.
> *
>
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH] ethdev: add packet integrity checks
2021-04-08 8:04 ` [dpdk-dev] [PATCH] ethdev: add packet integrity checks Andrew Rybchenko
@ 2021-04-08 11:39 ` Ori Kam
2021-04-09 8:08 ` Andrew Rybchenko
0 siblings, 1 reply; 68+ messages in thread
From: Ori Kam @ 2021-04-08 11:39 UTC (permalink / raw)
To: Andrew Rybchenko, ajit.khaparde, ferruh.yigit,
NBU-Contact-Thomas Monjalon
Cc: dev, jerinj, olivier.matz, Slava Ovsiienko
Hi Andrew,
Thanks for your comments.
PSB,
Best,
Ori
> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> Sent: Thursday, April 8, 2021 11:05 AM
> Subject: Re: [PATCH] ethdev: add packet integrity checks
>
> On 4/5/21 9:04 PM, Ori Kam wrote:
> > Currently, DPDK application can offload the checksum check,
> > and report it in the mbuf.
> >
> > However, as more and more applications are offloading some or all
> > logic and action to the HW, there is a need to check the packet
> > integrity so the right decision can be taken.
> >
> > The application logic can be positive meaning if the packet is
> > valid jump / do actions, or negative if packet is not valid
> > jump to SW / do actions (like drop) a, and add default flow
> > (match all in low priority) that will direct the miss packet
> > to the miss path.
> >
> > Since currenlty rte_flow works in positive way the assumtion is
> > that the postive way will be the common way in this case also.
> >
> > When thinking what is the best API to implement such feature,
> > we need to considure the following (in no specific order):
> > 1. API breakage.
>
> First of all I disagree that "API breakage" is put as a top
> priority. Design is a top priority, since it is a long term.
> aPI breakage is just a short term inconvenient. Of course,
> others may disagree, but that's my point of view.
>
I agree with you, and like I said the order of the list is not
according to priorities.
I truly believe that what I'm suggesting is the best design.
> > 2. Simplicity.
> > 3. Performance.
> > 4. HW capabilities.
> > 5. rte_flow limitation.
> > 6. Flexability.
> >
> > First option: Add integrity flags to each of the items.
> > For example add checksum_ok to ipv4 item.
> >
> > Pros:
> > 1. No new rte_flow item.
> > 2. Simple in the way that on each item the app can see
> > what checks are available.
>
> 3. Natively supports various tunnels without any extra
> changes in a shared item for all layers.
>
Also in the current suggested approach, we have the level member,
So tunnels are supported by default. If someone wants to check also tunnel
he just need to add this item again with the right level. (just like with other
items)
> >
> > Cons:
> > 1. API breakage.
> > 2. increase number of flows, since app can't add global rule and
> > must have dedicated flow for each of the flow combinations, for example
> > matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
> > result in 5 flows.
>
> Could you expand it? Shouldn't HW offloaded flows with good
> checksums go into dedicated queues where as bad packets go
> via default path (i.e. no extra rules)?
>
I'm not sure what do you mean, in a lot of the cases
Application will use that to detect valid packets and then
forward only valid packets down the flow. (check valid jump
--> on next group decap ....)
In other cases the app may choose to drop the bad packets or count
and then drop, maybe sample them to check this is not part of an attack.
This is what is great about this feature we just give the app
the ability to offload the sanity checks and be that enables it
to offload the traffic itself
> >
> > Second option: dedicated item
> >
> > Pros:
> > 1. No API breakage, and there will be no for some time due to having
> > extra space. (by using bits)
> > 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
> > IPv6.
>
> It depends on how bad (or good0 packets are handled.
>
Not sure what do you mean,
> > 3. Simplicity application can just look at one place to see all possible
> > checks.
>
> It is a drawback from my point of view, since IPv4 checksum
> check is out of IPv4 match item. I.e. analyzing IPv4 you should
> take a look at 2 different flow items.
>
Are you talking from application view point, PMD or HW?
From application yes it is true he needs to add one more item
to the list, (depending on his flows, since he can have just
one flow that checks all packet like I said and move the good
ones to a different group and in that group he will match the
ipv4 item.
For example:
... pattern integrity = valid action jump group 3
Group 3 pattern .... ipv4 ... actions .....
Group 3 pattern ....ipv6 .... actions ...
In any case at worse case it is just adding one more item
to the flow.
From PMD/HW extra items doesn't mean extra action in HW
they can be combined, just like they would have it the
condition was in the item itself.
> > 4. Allow future support for more tests.
>
> It is the same for both solution since per-item solution
> can keep reserved bits which may be used in the future.
>
Yes I agree,
> >
> > Cons:
> > 1. New item, that holds number of fields from different items.
>
> 2. Not that nice for tunnels.
Please look at above (not direct ) response since we have the level member
tunnels are handled very nicely.
>
> >
> > For starter the following bits are suggested:
> > 1. packet_ok - means that all HW checks depending on packet layer have
> > passed. This may mean that in some HW such flow should be splited to
> > number of flows or fail.
> > 2. l2_ok - all check flor layer 2 have passed.
> > 3. l3_ok - all check flor layer 2 have passed. If packet doens't have
> > l3 layer this check shoudl fail.
> > 4. l4_ok - all check flor layer 2 have passed. If packet doesn't
> > have l4 layer this check should fail.
> > 5. l2_crc_ok - the layer 2 crc is O.K. it is possible that the crc will
> > be O.K. but the l3_ok will be 0. it is not possible that l2_crc_ok will
> > be 0 and the l3_ok will be 0.
> > 6. ipv4_csum_ok - IPv4 checksum is O.K.
> > 7. l4_csum_ok - layer 4 checksum is O.K.
> > 8. l3_len_OK - check that the reported layer 3 len is smaller than the
> > packet len.
> >
> > Example of usage:
> > 1. check packets from all possible layers for integrity.
> > flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
> >
> > 2. Check only packet with layer 4 (UDP / TCP)
> > flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
> >
> > Signed-off-by: Ori Kam <orika@nvidia.com>
> > ---
> > doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++++++++
> > lib/librte_ethdev/rte_flow.h | 46
> ++++++++++++++++++++++++++++++++++++++
> > 2 files changed, 65 insertions(+)
> >
> > diff --git a/doc/guides/prog_guide/rte_flow.rst
> b/doc/guides/prog_guide/rte_flow.rst
> > index aec2ba1..58b116e 100644
> > --- a/doc/guides/prog_guide/rte_flow.rst
> > +++ b/doc/guides/prog_guide/rte_flow.rst
> > @@ -1398,6 +1398,25 @@ Matches a eCPRI header.
> > - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
> > - Default ``mask`` matches nothing, for all eCPRI messages.
> >
> > +Item: ``PACKET_INTEGRITY_CHECKS``
> > +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > +
> > +Matches packet integrity.
> > +
> > +- ``level``: the encapsulation level that should be checked. level 0 means the
> > + default PMD mode (Can be inner most / outermost). value of 1 means
> outermost
> > + and higher value means inner header. See also RSS level.
> > +- ``packet_ok``: All HW packet integrity checks have passed based on the
> max
> > + layer of the packet.
> > + layer of the packet.
> > +- ``l2_ok``: all layer 2 HW integrity checks passed.
> > +- ``l3_ok``: all layer 3 HW integrity checks passed.
> > +- ``l4_ok``: all layer 3 HW integrity checks passed.
> > +- ``l2_crc_ok``: layer 2 crc check passed.
> > +- ``ipv4_csum_ok``: ipv4 checksum check passed.
> > +- ``l4_csum_ok``: layer 4 checksum check passed.
> > +- ``l3_len_ok``: the layer 3 len is smaller than the packet len.
> > +
> > Actions
> > ~~~~~~~
> >
> > diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> > index 6cc5713..f6888a1 100644
> > --- a/lib/librte_ethdev/rte_flow.h
> > +++ b/lib/librte_ethdev/rte_flow.h
> > @@ -551,6 +551,15 @@ enum rte_flow_item_type {
> > * See struct rte_flow_item_geneve_opt
> > */
> > RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> > +
> > + /**
> > + * [META]
> > + *
> > + * Matches on packet integrity.
> > + *
> > + * See struct rte_flow_item_packet_integrity_checks.
> > + */
> > + RTE_FLOW_ITEM_TYPE_PACKET_INTEGRITY_CHECKS,
> > };
> >
> > /**
> > @@ -1685,6 +1694,43 @@ struct rte_flow_item_geneve_opt {
> > };
> > #endif
> >
> > +struct rte_flow_item_packet_integrity_checks {
> > + uint32_t level;
> > + /**< Packet encapsulation level the item should apply to.
> > + * @see rte_flow_action_rss
> > + */
> > +RTE_STD_C11
> > + union {
> > + struct {
> > + uint64_t packet_ok:1;
> > + /** The packet is valid after passing all HW checks. */
> > + uint64_t l2_ok:1;
> > + /**< L2 layer is valid after passing all HW checks. */
> > + uint64_t l3_ok:1;
> > + /**< L3 layer is valid after passing all HW checks. */
> > + uint64_t l4_ok:1;
> > + /**< L4 layer is valid after passing all HW checks. */
> > + uint64_t l2_crc_ok:1;
> > + /**< L2 layer checksum is valid. */
> > + uint64_t ipv4_csum_ok:1;
> > + /**< L3 layer checksum is valid. */
> > + uint64_t l4_csum_ok:1;
> > + /**< L4 layer checksum is valid. */
> > + uint64_t l3_len_ok:1;
> > + /**< The l3 len is smaller than the packet len. */
> > + uint64_t reserved:56;
> > + };
> > + uint64_t value;
> > + };
> > +};
> > +
> > +#ifndef __cplusplus
> > +static const struct rte_flow_item_sanity_checks
> > + rte_flow_item_sanity_checks_mask = {
> > + .value = 0,
> > +};
> > +#endif
> > +
> > /**
> > * Matching pattern item definition.
> > *
> >
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH] ethdev: add packet integrity checks
2021-04-08 11:39 ` Ori Kam
@ 2021-04-09 8:08 ` Andrew Rybchenko
2021-04-11 6:42 ` Ori Kam
0 siblings, 1 reply; 68+ messages in thread
From: Andrew Rybchenko @ 2021-04-09 8:08 UTC (permalink / raw)
To: Ori Kam, ajit.khaparde, ferruh.yigit, NBU-Contact-Thomas Monjalon
Cc: dev, jerinj, olivier.matz, Slava Ovsiienko
On 4/8/21 2:39 PM, Ori Kam wrote:
> Hi Andrew,
>
> Thanks for your comments.
>
> PSB,
>
> Best,
> Ori
>
>> -----Original Message-----
>> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>> Sent: Thursday, April 8, 2021 11:05 AM
>> Subject: Re: [PATCH] ethdev: add packet integrity checks
>>
>> On 4/5/21 9:04 PM, Ori Kam wrote:
>>> Currently, DPDK application can offload the checksum check,
>>> and report it in the mbuf.
>>>
>>> However, as more and more applications are offloading some or all
>>> logic and action to the HW, there is a need to check the packet
>>> integrity so the right decision can be taken.
>>>
>>> The application logic can be positive meaning if the packet is
>>> valid jump / do actions, or negative if packet is not valid
>>> jump to SW / do actions (like drop) a, and add default flow
>>> (match all in low priority) that will direct the miss packet
>>> to the miss path.
>>>
>>> Since currenlty rte_flow works in positive way the assumtion is
>>> that the postive way will be the common way in this case also.
>>>
>>> When thinking what is the best API to implement such feature,
>>> we need to considure the following (in no specific order):
>>> 1. API breakage.
>>
>> First of all I disagree that "API breakage" is put as a top
>> priority. Design is a top priority, since it is a long term.
>> aPI breakage is just a short term inconvenient. Of course,
>> others may disagree, but that's my point of view.
>>
> I agree with you, and like I said the order of the list is not
> according to priorities.
> I truly believe that what I'm suggesting is the best design.
>
>
>>> 2. Simplicity.
>>> 3. Performance.
>>> 4. HW capabilities.
>>> 5. rte_flow limitation.
>>> 6. Flexability.
>>>
>>> First option: Add integrity flags to each of the items.
>>> For example add checksum_ok to ipv4 item.
>>>
>>> Pros:
>>> 1. No new rte_flow item.
>>> 2. Simple in the way that on each item the app can see
>>> what checks are available.
>>
>> 3. Natively supports various tunnels without any extra
>> changes in a shared item for all layers.
>>
> Also in the current suggested approach, we have the level member,
> So tunnels are supported by default. If someone wants to check also tunnel
> he just need to add this item again with the right level. (just like with other
> items)
Thanks, missed it. Is it OK to have just one item with
level 1 or 2?
What happens if two items with level 0 and level 1 are
specified, but the packet has no encapsulation?
>>>
>>> Cons:
>>> 1. API breakage.
>>> 2. increase number of flows, since app can't add global rule and
>>> must have dedicated flow for each of the flow combinations, for example
>>> matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
>>> result in 5 flows.
>>
>> Could you expand it? Shouldn't HW offloaded flows with good
>> checksums go into dedicated queues where as bad packets go
>> via default path (i.e. no extra rules)?
>>
> I'm not sure what do you mean, in a lot of the cases
> Application will use that to detect valid packets and then
> forward only valid packets down the flow. (check valid jump
> --> on next group decap ....)
> In other cases the app may choose to drop the bad packets or count
> and then drop, maybe sample them to check this is not part of an attack.
>
> This is what is great about this feature we just give the app
> the ability to offload the sanity checks and be that enables it
> to offload the traffic itself
Please, when you say "increase number of flows... in 5 flows"
just try to express in flow rules in both cases. Just for my
understanding. Since you calculated flows you should have a
real example.
>>>
>>> Second option: dedicated item
>>>
>>> Pros:
>>> 1. No API breakage, and there will be no for some time due to having
>>> extra space. (by using bits)
>>> 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
>>> IPv6.
>>
>> It depends on how bad (or good0 packets are handled.
>>
> Not sure what do you mean,
Again, I hope we understand each other when we talk in terms
of real example and flow rules.
>>> 3. Simplicity application can just look at one place to see all possible
>>> checks.
>>
>> It is a drawback from my point of view, since IPv4 checksum
>> check is out of IPv4 match item. I.e. analyzing IPv4 you should
>> take a look at 2 different flow items.
>>
> Are you talking from application view point, PMD or HW?
> From application yes it is true he needs to add one more item
> to the list, (depending on his flows, since he can have just
> one flow that checks all packet like I said and move the good
> ones to a different group and in that group he will match the
> ipv4 item.
> For example:
> ... pattern integrity = valid action jump group 3
> Group 3 pattern .... ipv4 ... actions .....
> Group 3 pattern ....ipv6 .... actions ...
>
> In any case at worse case it is just adding one more item
> to the flow.
>
> From PMD/HW extra items doesn't mean extra action in HW
> they can be combined, just like they would have it the
> condition was in the item itself.
>
>>> 4. Allow future support for more tests.
>>
>> It is the same for both solution since per-item solution
>> can keep reserved bits which may be used in the future.
>>
> Yes I agree,
>
>>>
>>> Cons:
>>> 1. New item, that holds number of fields from different items.
>>
>> 2. Not that nice for tunnels.
>
> Please look at above (not direct ) response since we have the level member
> tunnels are handled very nicely.
>
>>
>>>
>>> For starter the following bits are suggested:
>>> 1. packet_ok - means that all HW checks depending on packet layer have
>>> passed. This may mean that in some HW such flow should be splited to
>>> number of flows or fail.
>>> 2. l2_ok - all check flor layer 2 have passed.
>>> 3. l3_ok - all check flor layer 2 have passed. If packet doens't have
>>> l3 layer this check shoudl fail.
>>> 4. l4_ok - all check flor layer 2 have passed. If packet doesn't
>>> have l4 layer this check should fail.
>>> 5. l2_crc_ok - the layer 2 crc is O.K. it is possible that the crc will
>>> be O.K. but the l3_ok will be 0. it is not possible that l2_crc_ok will
>>> be 0 and the l3_ok will be 0.
>>> 6. ipv4_csum_ok - IPv4 checksum is O.K.
>>> 7. l4_csum_ok - layer 4 checksum is O.K.
>>> 8. l3_len_OK - check that the reported layer 3 len is smaller than the
>>> packet len.
>>>
>>> Example of usage:
>>> 1. check packets from all possible layers for integrity.
>>> flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
>>>
>>> 2. Check only packet with layer 4 (UDP / TCP)
>>> flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
>>>
>>> Signed-off-by: Ori Kam <orika@nvidia.com>
>>> ---
>>> doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++++++++
>>> lib/librte_ethdev/rte_flow.h | 46
>> ++++++++++++++++++++++++++++++++++++++
>>> 2 files changed, 65 insertions(+)
>>>
>>> diff --git a/doc/guides/prog_guide/rte_flow.rst
>> b/doc/guides/prog_guide/rte_flow.rst
>>> index aec2ba1..58b116e 100644
>>> --- a/doc/guides/prog_guide/rte_flow.rst
>>> +++ b/doc/guides/prog_guide/rte_flow.rst
>>> @@ -1398,6 +1398,25 @@ Matches a eCPRI header.
>>> - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
>>> - Default ``mask`` matches nothing, for all eCPRI messages.
>>>
>>> +Item: ``PACKET_INTEGRITY_CHECKS``
>>> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>> +
>>> +Matches packet integrity.
>>> +
>>> +- ``level``: the encapsulation level that should be checked. level 0 means the
>>> + default PMD mode (Can be inner most / outermost). value of 1 means
>> outermost
>>> + and higher value means inner header. See also RSS level.
>>> +- ``packet_ok``: All HW packet integrity checks have passed based on the
>> max
>>> + layer of the packet.
>>> + layer of the packet.
>>> +- ``l2_ok``: all layer 2 HW integrity checks passed.
>>> +- ``l3_ok``: all layer 3 HW integrity checks passed.
>>> +- ``l4_ok``: all layer 3 HW integrity checks passed.
>>> +- ``l2_crc_ok``: layer 2 crc check passed.
>>> +- ``ipv4_csum_ok``: ipv4 checksum check passed.
>>> +- ``l4_csum_ok``: layer 4 checksum check passed.
>>> +- ``l3_len_ok``: the layer 3 len is smaller than the packet len.
>>> +
>>> Actions
>>> ~~~~~~~
>>>
>>> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
>>> index 6cc5713..f6888a1 100644
>>> --- a/lib/librte_ethdev/rte_flow.h
>>> +++ b/lib/librte_ethdev/rte_flow.h
>>> @@ -551,6 +551,15 @@ enum rte_flow_item_type {
>>> * See struct rte_flow_item_geneve_opt
>>> */
>>> RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
>>> +
>>> + /**
>>> + * [META]
>>> + *
>>> + * Matches on packet integrity.
>>> + *
>>> + * See struct rte_flow_item_packet_integrity_checks.
>>> + */
>>> + RTE_FLOW_ITEM_TYPE_PACKET_INTEGRITY_CHECKS,
>>> };
>>>
>>> /**
>>> @@ -1685,6 +1694,43 @@ struct rte_flow_item_geneve_opt {
>>> };
>>> #endif
>>>
>>> +struct rte_flow_item_packet_integrity_checks {
>>> + uint32_t level;
>>> + /**< Packet encapsulation level the item should apply to.
>>> + * @see rte_flow_action_rss
>>> + */
>>> +RTE_STD_C11
>>> + union {
>>> + struct {
>>> + uint64_t packet_ok:1;
>>> + /** The packet is valid after passing all HW checks. */
>>> + uint64_t l2_ok:1;
>>> + /**< L2 layer is valid after passing all HW checks. */
>>> + uint64_t l3_ok:1;
>>> + /**< L3 layer is valid after passing all HW checks. */
>>> + uint64_t l4_ok:1;
>>> + /**< L4 layer is valid after passing all HW checks. */
>>> + uint64_t l2_crc_ok:1;
>>> + /**< L2 layer checksum is valid. */
>>> + uint64_t ipv4_csum_ok:1;
>>> + /**< L3 layer checksum is valid. */
>>> + uint64_t l4_csum_ok:1;
>>> + /**< L4 layer checksum is valid. */
>>> + uint64_t l3_len_ok:1;
>>> + /**< The l3 len is smaller than the packet len. */
>>> + uint64_t reserved:56;
>>> + };
>>> + uint64_t value;
>>> + };
>>> +};
>>> +
>>> +#ifndef __cplusplus
>>> +static const struct rte_flow_item_sanity_checks
>>> + rte_flow_item_sanity_checks_mask = {
>>> + .value = 0,
>>> +};
>>> +#endif
>>> +
>>> /**
>>> * Matching pattern item definition.
>>> *
>>>
>
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH] ethdev: add packet integrity checks
2021-04-08 7:44 ` Jerin Jacob
@ 2021-04-11 4:12 ` Ajit Khaparde
2021-04-11 6:03 ` Ori Kam
0 siblings, 1 reply; 68+ messages in thread
From: Ajit Khaparde @ 2021-04-11 4:12 UTC (permalink / raw)
To: Jerin Jacob
Cc: Ori Kam, Andrew Rybchenko, Ferruh Yigit,
NBU-Contact-Thomas Monjalon, dpdk-dev, Jerin Jacob, Olivier Matz,
Slava Ovsiienko
[-- Attachment #1: Type: text/plain, Size: 3357 bytes --]
On Thu, Apr 8, 2021 at 12:44 AM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>
> On Thu, Apr 8, 2021 at 3:45 AM Ori Kam <orika@nvidia.com> wrote:
> >
> > Hi Jerin,
> >
> > > -----Original Message-----
> > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > Subject: Re: [dpdk-dev] [PATCH] ethdev: add packet integrity checks
> > >
> > > On Wed, Apr 7, 2021 at 4:02 PM Ori Kam <orika@nvidia.com> wrote:
> > > >
> > > > Hi Jerin,
> > >
> > > Hi Ori,
> > >
> > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > Sent: Tuesday, April 6, 2021 10:40 AM
> > > > > To: Ori Kam <orika@nvidia.com>
> > > > > Subject: Re: [dpdk-dev] [PATCH] ethdev: add packet integrity checks
> > > > >
> > > > > On Mon, Apr 5, 2021 at 11:35 PM Ori Kam <orika@nvidia.com> wrote:
> > > > > >
> > > > > > Currently, DPDK application can offload the checksum check,
> > > > > > and report it in the mbuf.
> > > > > >
> > > > > > However, as more and more applications are offloading some or all
> > > > > > logic and action to the HW, there is a need to check the packet
> > > > > > integrity so the right decision can be taken.
> > > > > >
> > > > > > The application logic can be positive meaning if the packet is
> > > > > > valid jump / do actions, or negative if packet is not valid
> > > > > > jump to SW / do actions (like drop) a, and add default flow
> > > > > > (match all in low priority) that will direct the miss packet
> > > > > > to the miss path.
> > > > > >
> > > > > > Since currenlty rte_flow works in positive way the assumtion is
> > > > > > that the postive way will be the common way in this case also.
> > > > > >
> > > > > > When thinking what is the best API to implement such feature,
> > > > > > we need to considure the following (in no specific order):
> > > > > > 1. API breakage.
> > > > > > 2. Simplicity.
> > > > > > 3. Performance.
> > > > > > 4. HW capabilities.
> > > > > > 5. rte_flow limitation.
> > > > > > 6. Flexability.
> > > > >
> > > > >
> > > > > Alteast in Marvell HW integrity checks are functions of the Ethdev Rx
> > > > > queue attribute.
> > > > > Not sure about other Vendor HW.
> > > >
> > > > I'm not sure what do you mean?
> > >
> > > What I meant is, What will be the preferred way to configure the feature?
> > > ie. Is it as ethdev Rx offload or rte_flow?
> > >
> > > I think, in order to decide that, we need to know, how most of the
> > > other HW express this feature?
> > >
> >
> > As I see it both ways could be used,
> > Maybe even by the same app,
> >
> > One flow is to notify the application when it sees the packet
> > (RX offload) and one is to use as an item to route the packet
> > when using rte_flow.
> >
> > Maybe I'm missing something, on your suggestion how will
> > application route the packets? Or it will just receive them with flags
> > on the RX queue?
>
> Just receive them with flags on the Rx queue, in order to avoid
> duplicating features
> in multiple places.
I think this is more reasonable and simpler.
Especially when I read the discussion further in the thread between
Andrew and Ori.
>
> >
> >
> >
> > >
> > > > This is the idea of the patch, to allow application to route the packet
> > > > before getting to the Rx,
> > > > In any case all items support is dependent on HW capabilities.
> > >
> > >
> > >
> > >
> >
> > Best,
> > Ori
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH] ethdev: add packet integrity checks
2021-04-11 4:12 ` Ajit Khaparde
@ 2021-04-11 6:03 ` Ori Kam
0 siblings, 0 replies; 68+ messages in thread
From: Ori Kam @ 2021-04-11 6:03 UTC (permalink / raw)
To: Ajit Khaparde, Jerin Jacob
Cc: Andrew Rybchenko, Ferruh Yigit, NBU-Contact-Thomas Monjalon,
dpdk-dev, Jerin Jacob, Olivier Matz, Slava Ovsiienko
Hi Jerin & Ajit
> -----Original Message-----
> From: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Sent: Sunday, April 11, 2021 7:13 AM
>
> On Thu, Apr 8, 2021 at 12:44 AM Jerin Jacob <jerinjacobk@gmail.com> wrote:
> >
> > On Thu, Apr 8, 2021 at 3:45 AM Ori Kam <orika@nvidia.com> wrote:
> > >
> > > Hi Jerin,
> > >
> > > > -----Original Message-----
> > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > Subject: Re: [dpdk-dev] [PATCH] ethdev: add packet integrity checks
> > > >
> > > > On Wed, Apr 7, 2021 at 4:02 PM Ori Kam <orika@nvidia.com> wrote:
> > > > >
> > > > > Hi Jerin,
> > > >
> > > > Hi Ori,
> > > >
> > > >
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Jerin Jacob <jerinjacobk@gmail.com>
> > > > > > Sent: Tuesday, April 6, 2021 10:40 AM
> > > > > > To: Ori Kam <orika@nvidia.com>
> > > > > > Subject: Re: [dpdk-dev] [PATCH] ethdev: add packet integrity checks
> > > > > >
> > > > > > On Mon, Apr 5, 2021 at 11:35 PM Ori Kam <orika@nvidia.com>
> wrote:
> > > > > > >
> > > > > > > Currently, DPDK application can offload the checksum check,
> > > > > > > and report it in the mbuf.
> > > > > > >
> > > > > > > However, as more and more applications are offloading some or all
> > > > > > > logic and action to the HW, there is a need to check the packet
> > > > > > > integrity so the right decision can be taken.
> > > > > > >
> > > > > > > The application logic can be positive meaning if the packet is
> > > > > > > valid jump / do actions, or negative if packet is not valid
> > > > > > > jump to SW / do actions (like drop) a, and add default flow
> > > > > > > (match all in low priority) that will direct the miss packet
> > > > > > > to the miss path.
> > > > > > >
> > > > > > > Since currenlty rte_flow works in positive way the assumtion is
> > > > > > > that the postive way will be the common way in this case also.
> > > > > > >
> > > > > > > When thinking what is the best API to implement such feature,
> > > > > > > we need to considure the following (in no specific order):
> > > > > > > 1. API breakage.
> > > > > > > 2. Simplicity.
> > > > > > > 3. Performance.
> > > > > > > 4. HW capabilities.
> > > > > > > 5. rte_flow limitation.
> > > > > > > 6. Flexability.
> > > > > >
> > > > > >
> > > > > > Alteast in Marvell HW integrity checks are functions of the Ethdev Rx
> > > > > > queue attribute.
> > > > > > Not sure about other Vendor HW.
> > > > >
> > > > > I'm not sure what do you mean?
> > > >
> > > > What I meant is, What will be the preferred way to configure the feature?
> > > > ie. Is it as ethdev Rx offload or rte_flow?
> > > >
> > > > I think, in order to decide that, we need to know, how most of the
> > > > other HW express this feature?
> > > >
> > >
> > > As I see it both ways could be used,
> > > Maybe even by the same app,
> > >
> > > One flow is to notify the application when it sees the packet
> > > (RX offload) and one is to use as an item to route the packet
> > > when using rte_flow.
> > >
> > > Maybe I'm missing something, on your suggestion how will
> > > application route the packets? Or it will just receive them with flags
> > > on the RX queue?
> >
> > Just receive them with flags on the Rx queue, in order to avoid
> > duplicating features
> > in multiple places.
> I think this is more reasonable and simpler.
> Especially when I read the discussion further in the thread between
> Andrew and Ori.
>
Ajit, I'm sorry but I'm not sure I understand if you like better the suggested approach,
or the RX one?
In any case those are two different cases, one is for the application and one is for
offloaded routing.
Best,
Ori
> >
> > >
> > >
> > >
> > > >
> > > > > This is the idea of the patch, to allow application to route the packet
> > > > > before getting to the Rx,
> > > > > In any case all items support is dependent on HW capabilities.
> > > >
> > > >
> > > >
> > > >
> > >
> > > Best,
> > > Ori
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH] ethdev: add packet integrity checks
2021-04-09 8:08 ` Andrew Rybchenko
@ 2021-04-11 6:42 ` Ori Kam
2021-04-11 17:30 ` Ori Kam
0 siblings, 1 reply; 68+ messages in thread
From: Ori Kam @ 2021-04-11 6:42 UTC (permalink / raw)
To: Andrew Rybchenko, ajit.khaparde, ferruh.yigit,
NBU-Contact-Thomas Monjalon
Cc: dev, jerinj, olivier.matz, Slava Ovsiienko
Hi Andrew,
PSB,
Best,
Ori
> -----Original Message-----
> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
>
> On 4/8/21 2:39 PM, Ori Kam wrote:
> > Hi Andrew,
> >
> > Thanks for your comments.
> >
> > PSB,
> >
> > Best,
> > Ori
> >
> >> -----Original Message-----
> >> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >> Sent: Thursday, April 8, 2021 11:05 AM
> >> Subject: Re: [PATCH] ethdev: add packet integrity checks
> >>
> >> On 4/5/21 9:04 PM, Ori Kam wrote:
> >>> Currently, DPDK application can offload the checksum check,
> >>> and report it in the mbuf.
> >>>
> >>> However, as more and more applications are offloading some or all
> >>> logic and action to the HW, there is a need to check the packet
> >>> integrity so the right decision can be taken.
> >>>
> >>> The application logic can be positive meaning if the packet is
> >>> valid jump / do actions, or negative if packet is not valid
> >>> jump to SW / do actions (like drop) a, and add default flow
> >>> (match all in low priority) that will direct the miss packet
> >>> to the miss path.
> >>>
> >>> Since currenlty rte_flow works in positive way the assumtion is
> >>> that the postive way will be the common way in this case also.
> >>>
> >>> When thinking what is the best API to implement such feature,
> >>> we need to considure the following (in no specific order):
> >>> 1. API breakage.
> >>
> >> First of all I disagree that "API breakage" is put as a top
> >> priority. Design is a top priority, since it is a long term.
> >> aPI breakage is just a short term inconvenient. Of course,
> >> others may disagree, but that's my point of view.
> >>
> > I agree with you, and like I said the order of the list is not
> > according to priorities.
> > I truly believe that what I'm suggesting is the best design.
> >
> >
> >>> 2. Simplicity.
> >>> 3. Performance.
> >>> 4. HW capabilities.
> >>> 5. rte_flow limitation.
> >>> 6. Flexability.
> >>>
> >>> First option: Add integrity flags to each of the items.
> >>> For example add checksum_ok to ipv4 item.
> >>>
> >>> Pros:
> >>> 1. No new rte_flow item.
> >>> 2. Simple in the way that on each item the app can see
> >>> what checks are available.
> >>
> >> 3. Natively supports various tunnels without any extra
> >> changes in a shared item for all layers.
> >>
> > Also in the current suggested approach, we have the level member,
> > So tunnels are supported by default. If someone wants to check also tunnel
> > he just need to add this item again with the right level. (just like with other
> > items)
>
> Thanks, missed it. Is it OK to have just one item with
> level 1 or 2?
>
Yes, of course, if the application just wants to check the sanity of the inner packet he can
just use one integrity item with level of 2.
> What happens if two items with level 0 and level 1 are
> specified, but the packet has no encapsulation?
>
Level zero is the default one (the default just like in RSS case is
PMD dependent but in any case from my knowledge layer 0 if there is no tunnel
will point to the header) and level 1 is the outer most so in this case both of them
are pointing to the same checks.
But if for example we use level = 2 then the checks for level 2 should fail.
Since the packet doesn't hold such info, just like if you check state of l4 and there is
no l4 it should fails.
> >>>
> >>> Cons:
> >>> 1. API breakage.
> >>> 2. increase number of flows, since app can't add global rule and
> >>> must have dedicated flow for each of the flow combinations, for example
> >>> matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
> >>> result in 5 flows.
> >>
> >> Could you expand it? Shouldn't HW offloaded flows with good
> >> checksums go into dedicated queues where as bad packets go
> >> via default path (i.e. no extra rules)?
> >>
> > I'm not sure what do you mean, in a lot of the cases
> > Application will use that to detect valid packets and then
> > forward only valid packets down the flow. (check valid jump
> > --> on next group decap ....)
> > In other cases the app may choose to drop the bad packets or count
> > and then drop, maybe sample them to check this is not part of an attack.
> >
> > This is what is great about this feature we just give the app
> > the ability to offload the sanity checks and be that enables it
> > to offload the traffic itself
>
> Please, when you say "increase number of flows... in 5 flows"
> just try to express in flow rules in both cases. Just for my
> understanding. Since you calculated flows you should have a
> real example.
>
Sure, you are right I should have a better example.
Lets take the example that the application want all valid traffic to
jump to group 2.
The possibilities of valid traffic can be:
Eth / ipv4.
Eth / ipv6
Eth / ipv4 / udp
Eth/ ivp4 / tcp
Eth / ipv6 / udp
Eth / ipv6 / tcp
So if we use the existing items we will get the following 6 flows:
Flow create 0 ingress pattern eth / ipv4 valid = 1 / end action jump group 2
Flow create 0 ingress pattern eth / ipv6 valid = 1 / end action jump group 2
Flow create 0 ingress pattern eth / ipv4 valid = 1 / udp valid = 1/ end action jump group 2
Flow create 0 ingress pattern eth / ipv4 valid = 1 / tcp valid = 1/ end action jump group 2
Flow create 0 ingress pattern eth / ipv6 valid = 1 / udp valid = 1/ end action jump group 2
Flow create 0 ingress pattern eth / ipv6 valid = 1 / udp valid = 1/ end action jump group 2
While if we use the new item approach:
Flow create 0 ingress pattern integrity_check packet_ok =1 / end action jump group 2
If we take the case that we just want valid l4 packets then the flows with existing items will be:
Flow create 0 ingress pattern eth / ipv4 valid = 1 / udp valid = 1/ end action jump group 2
Flow create 0 ingress pattern eth / ipv4 valid = 1 / tcp valid = 1/ end action jump group 2
Flow create 0 ingress pattern eth / ipv6 valid = 1 / udp valid = 1/ end action jump group 2
Flow create 0 ingress pattern eth / ipv6 valid = 1 / udp valid = 1/ end action jump group 2
While with the new item:
Flow create 0 ingress pattern integrity_check l4_ok =1 / end action jump group 2
Is this clearer?
> >>>
> >>> Second option: dedicated item
> >>>
> >>> Pros:
> >>> 1. No API breakage, and there will be no for some time due to having
> >>> extra space. (by using bits)
> >>> 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
> >>> IPv6.
> >>
> >> It depends on how bad (or good0 packets are handled.
> >>
> > Not sure what do you mean,
>
> Again, I hope we understand each other when we talk in terms
> of real example and flow rules.
>
Please see answer above.
I hope it will make things clearer.
> >>> 3. Simplicity application can just look at one place to see all possible
> >>> checks.
> >>
> >> It is a drawback from my point of view, since IPv4 checksum
> >> check is out of IPv4 match item. I.e. analyzing IPv4 you should
> >> take a look at 2 different flow items.
> >>
> > Are you talking from application view point, PMD or HW?
> > From application yes it is true he needs to add one more item
> > to the list, (depending on his flows, since he can have just
> > one flow that checks all packet like I said and move the good
> > ones to a different group and in that group he will match the
> > ipv4 item.
> > For example:
> > ... pattern integrity = valid action jump group 3
> > Group 3 pattern .... ipv4 ... actions .....
> > Group 3 pattern ....ipv6 .... actions ...
> >
> > In any case at worse case it is just adding one more item
> > to the flow.
> >
> > From PMD/HW extra items doesn't mean extra action in HW
> > they can be combined, just like they would have it the
> > condition was in the item itself.
> >
> >>> 4. Allow future support for more tests.
> >>
> >> It is the same for both solution since per-item solution
> >> can keep reserved bits which may be used in the future.
> >>
> > Yes I agree,
> >
> >>>
> >>> Cons:
> >>> 1. New item, that holds number of fields from different items.
> >>
> >> 2. Not that nice for tunnels.
> >
> > Please look at above (not direct ) response since we have the level member
> > tunnels are handled very nicely.
> >
> >>
> >>>
> >>> For starter the following bits are suggested:
> >>> 1. packet_ok - means that all HW checks depending on packet layer have
> >>> passed. This may mean that in some HW such flow should be splited to
> >>> number of flows or fail.
> >>> 2. l2_ok - all check flor layer 2 have passed.
> >>> 3. l3_ok - all check flor layer 2 have passed. If packet doens't have
> >>> l3 layer this check shoudl fail.
> >>> 4. l4_ok - all check flor layer 2 have passed. If packet doesn't
> >>> have l4 layer this check should fail.
> >>> 5. l2_crc_ok - the layer 2 crc is O.K. it is possible that the crc will
> >>> be O.K. but the l3_ok will be 0. it is not possible that l2_crc_ok will
> >>> be 0 and the l3_ok will be 0.
> >>> 6. ipv4_csum_ok - IPv4 checksum is O.K.
> >>> 7. l4_csum_ok - layer 4 checksum is O.K.
> >>> 8. l3_len_OK - check that the reported layer 3 len is smaller than the
> >>> packet len.
> >>>
> >>> Example of usage:
> >>> 1. check packets from all possible layers for integrity.
> >>> flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
> >>>
> >>> 2. Check only packet with layer 4 (UDP / TCP)
> >>> flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
> >>>
> >>> Signed-off-by: Ori Kam <orika@nvidia.com>
> >>> ---
> >>> doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++++++++
> >>> lib/librte_ethdev/rte_flow.h | 46
> >> ++++++++++++++++++++++++++++++++++++++
> >>> 2 files changed, 65 insertions(+)
> >>>
> >>> diff --git a/doc/guides/prog_guide/rte_flow.rst
> >> b/doc/guides/prog_guide/rte_flow.rst
> >>> index aec2ba1..58b116e 100644
> >>> --- a/doc/guides/prog_guide/rte_flow.rst
> >>> +++ b/doc/guides/prog_guide/rte_flow.rst
> >>> @@ -1398,6 +1398,25 @@ Matches a eCPRI header.
> >>> - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
> >>> - Default ``mask`` matches nothing, for all eCPRI messages.
> >>>
> >>> +Item: ``PACKET_INTEGRITY_CHECKS``
> >>> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> >>> +
> >>> +Matches packet integrity.
> >>> +
> >>> +- ``level``: the encapsulation level that should be checked. level 0 means
> the
> >>> + default PMD mode (Can be inner most / outermost). value of 1 means
> >> outermost
> >>> + and higher value means inner header. See also RSS level.
> >>> +- ``packet_ok``: All HW packet integrity checks have passed based on the
> >> max
> >>> + layer of the packet.
> >>> + layer of the packet.
> >>> +- ``l2_ok``: all layer 2 HW integrity checks passed.
> >>> +- ``l3_ok``: all layer 3 HW integrity checks passed.
> >>> +- ``l4_ok``: all layer 3 HW integrity checks passed.
> >>> +- ``l2_crc_ok``: layer 2 crc check passed.
> >>> +- ``ipv4_csum_ok``: ipv4 checksum check passed.
> >>> +- ``l4_csum_ok``: layer 4 checksum check passed.
> >>> +- ``l3_len_ok``: the layer 3 len is smaller than the packet len.
> >>> +
> >>> Actions
> >>> ~~~~~~~
> >>>
> >>> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> >>> index 6cc5713..f6888a1 100644
> >>> --- a/lib/librte_ethdev/rte_flow.h
> >>> +++ b/lib/librte_ethdev/rte_flow.h
> >>> @@ -551,6 +551,15 @@ enum rte_flow_item_type {
> >>> * See struct rte_flow_item_geneve_opt
> >>> */
> >>> RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> >>> +
> >>> + /**
> >>> + * [META]
> >>> + *
> >>> + * Matches on packet integrity.
> >>> + *
> >>> + * See struct rte_flow_item_packet_integrity_checks.
> >>> + */
> >>> + RTE_FLOW_ITEM_TYPE_PACKET_INTEGRITY_CHECKS,
> >>> };
> >>>
> >>> /**
> >>> @@ -1685,6 +1694,43 @@ struct rte_flow_item_geneve_opt {
> >>> };
> >>> #endif
> >>>
> >>> +struct rte_flow_item_packet_integrity_checks {
> >>> + uint32_t level;
> >>> + /**< Packet encapsulation level the item should apply to.
> >>> + * @see rte_flow_action_rss
> >>> + */
> >>> +RTE_STD_C11
> >>> + union {
> >>> + struct {
> >>> + uint64_t packet_ok:1;
> >>> + /** The packet is valid after passing all HW checks. */
> >>> + uint64_t l2_ok:1;
> >>> + /**< L2 layer is valid after passing all HW checks. */
> >>> + uint64_t l3_ok:1;
> >>> + /**< L3 layer is valid after passing all HW checks. */
> >>> + uint64_t l4_ok:1;
> >>> + /**< L4 layer is valid after passing all HW checks. */
> >>> + uint64_t l2_crc_ok:1;
> >>> + /**< L2 layer checksum is valid. */
> >>> + uint64_t ipv4_csum_ok:1;
> >>> + /**< L3 layer checksum is valid. */
> >>> + uint64_t l4_csum_ok:1;
> >>> + /**< L4 layer checksum is valid. */
> >>> + uint64_t l3_len_ok:1;
> >>> + /**< The l3 len is smaller than the packet len. */
> >>> + uint64_t reserved:56;
> >>> + };
> >>> + uint64_t value;
> >>> + };
> >>> +};
> >>> +
> >>> +#ifndef __cplusplus
> >>> +static const struct rte_flow_item_sanity_checks
> >>> + rte_flow_item_sanity_checks_mask = {
> >>> + .value = 0,
> >>> +};
> >>> +#endif
> >>> +
> >>> /**
> >>> * Matching pattern item definition.
> >>> *
> >>>
> >
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH] ethdev: add packet integrity checks
2021-04-11 6:42 ` Ori Kam
@ 2021-04-11 17:30 ` Ori Kam
0 siblings, 0 replies; 68+ messages in thread
From: Ori Kam @ 2021-04-11 17:30 UTC (permalink / raw)
To: Ori Kam, Andrew Rybchenko, ajit.khaparde, ferruh.yigit,
NBU-Contact-Thomas Monjalon
Cc: dev, jerinj, olivier.matz, Slava Ovsiienko
Hi,
Small answer update to make the example more clear.
(adding the mask to the item, in previous mail I assumed it is clear that the mask is on only
for the selected bits, but since it may not be clear I'm adding the mask in use)
In any case since RC1 is around the corner, I'm going to send V2 with the testpmd
implementation, which I hope makes things clearer.
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Ori Kam
>
> Hi Andrew,
>
> PSB,
>
> Best,
> Ori
> > -----Original Message-----
> > From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> >
> > On 4/8/21 2:39 PM, Ori Kam wrote:
> > > Hi Andrew,
> > >
> > > Thanks for your comments.
> > >
> > > PSB,
> > >
> > > Best,
> > > Ori
> > >
> > >> -----Original Message-----
> > >> From: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
> > >> Sent: Thursday, April 8, 2021 11:05 AM
> > >> Subject: Re: [PATCH] ethdev: add packet integrity checks
> > >>
> > >> On 4/5/21 9:04 PM, Ori Kam wrote:
> > >>> Currently, DPDK application can offload the checksum check,
> > >>> and report it in the mbuf.
> > >>>
> > >>> However, as more and more applications are offloading some or all
> > >>> logic and action to the HW, there is a need to check the packet
> > >>> integrity so the right decision can be taken.
> > >>>
> > >>> The application logic can be positive meaning if the packet is
> > >>> valid jump / do actions, or negative if packet is not valid
> > >>> jump to SW / do actions (like drop) a, and add default flow
> > >>> (match all in low priority) that will direct the miss packet
> > >>> to the miss path.
> > >>>
> > >>> Since currenlty rte_flow works in positive way the assumtion is
> > >>> that the postive way will be the common way in this case also.
> > >>>
> > >>> When thinking what is the best API to implement such feature,
> > >>> we need to considure the following (in no specific order):
> > >>> 1. API breakage.
> > >>
> > >> First of all I disagree that "API breakage" is put as a top
> > >> priority. Design is a top priority, since it is a long term.
> > >> aPI breakage is just a short term inconvenient. Of course,
> > >> others may disagree, but that's my point of view.
> > >>
> > > I agree with you, and like I said the order of the list is not
> > > according to priorities.
> > > I truly believe that what I'm suggesting is the best design.
> > >
> > >
> > >>> 2. Simplicity.
> > >>> 3. Performance.
> > >>> 4. HW capabilities.
> > >>> 5. rte_flow limitation.
> > >>> 6. Flexability.
> > >>>
> > >>> First option: Add integrity flags to each of the items.
> > >>> For example add checksum_ok to ipv4 item.
> > >>>
> > >>> Pros:
> > >>> 1. No new rte_flow item.
> > >>> 2. Simple in the way that on each item the app can see
> > >>> what checks are available.
> > >>
> > >> 3. Natively supports various tunnels without any extra
> > >> changes in a shared item for all layers.
> > >>
> > > Also in the current suggested approach, we have the level member,
> > > So tunnels are supported by default. If someone wants to check also tunnel
> > > he just need to add this item again with the right level. (just like with other
> > > items)
> >
> > Thanks, missed it. Is it OK to have just one item with
> > level 1 or 2?
> >
> Yes, of course, if the application just wants to check the sanity of the inner
> packet he can
> just use one integrity item with level of 2.
>
>
> > What happens if two items with level 0 and level 1 are
> > specified, but the packet has no encapsulation?
> >
> Level zero is the default one (the default just like in RSS case is
> PMD dependent but in any case from my knowledge layer 0 if there is no
> tunnel
> will point to the header) and level 1 is the outer most so in this case both of
> them
> are pointing to the same checks.
> But if for example we use level = 2 then the checks for level 2 should fail.
> Since the packet doesn't hold such info, just like if you check state of l4 and
> there is
> no l4 it should fails.
>
>
> > >>>
> > >>> Cons:
> > >>> 1. API breakage.
> > >>> 2. increase number of flows, since app can't add global rule and
> > >>> must have dedicated flow for each of the flow combinations, for
> example
> > >>> matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
> > >>> result in 5 flows.
> > >>
> > >> Could you expand it? Shouldn't HW offloaded flows with good
> > >> checksums go into dedicated queues where as bad packets go
> > >> via default path (i.e. no extra rules)?
> > >>
> > > I'm not sure what do you mean, in a lot of the cases
> > > Application will use that to detect valid packets and then
> > > forward only valid packets down the flow. (check valid jump
> > > --> on next group decap ....)
> > > In other cases the app may choose to drop the bad packets or count
> > > and then drop, maybe sample them to check this is not part of an attack.
> > >
> > > This is what is great about this feature we just give the app
> > > the ability to offload the sanity checks and be that enables it
> > > to offload the traffic itself
> >
> > Please, when you say "increase number of flows... in 5 flows"
> > just try to express in flow rules in both cases. Just for my
> > understanding. Since you calculated flows you should have a
> > real example.
> >
> Sure, you are right I should have a better example.
> Lets take the example that the application want all valid traffic to
> jump to group 2.
> The possibilities of valid traffic can be:
> Eth / ipv4.
> Eth / ipv6
> Eth / ipv4 / udp
> Eth/ ivp4 / tcp
> Eth / ipv6 / udp
> Eth / ipv6 / tcp
>
> So if we use the existing items we will get the following 6 flows:
> Flow create 0 ingress pattern eth / ipv4 valid = 1 / end action jump group 2
> Flow create 0 ingress pattern eth / ipv6 valid = 1 / end action jump group 2
> Flow create 0 ingress pattern eth / ipv4 valid = 1 / udp valid = 1/ end action
> jump group 2
> Flow create 0 ingress pattern eth / ipv4 valid = 1 / tcp valid = 1/ end action
> jump group 2
> Flow create 0 ingress pattern eth / ipv6 valid = 1 / udp valid = 1/ end action
> jump group 2
> Flow create 0 ingress pattern eth / ipv6 valid = 1 / udp valid = 1/ end action
> jump group 2
>
> While if we use the new item approach:
> Flow create 0 ingress pattern integrity_check packet_ok =1 / end action jump
> group 2
Add the missing mask
Flow create 0 ingress pattern integrity_check spec( packet_ok = 1) mask = (packet_ok = 1) / end action jump
group 2
>
>
> If we take the case that we just want valid l4 packets then the flows with
> existing items will be:
> Flow create 0 ingress pattern eth / ipv4 valid = 1 / udp valid = 1/ end action
> jump group 2
> Flow create 0 ingress pattern eth / ipv4 valid = 1 / tcp valid = 1/ end action
> jump group 2
> Flow create 0 ingress pattern eth / ipv6 valid = 1 / udp valid = 1/ end action
> jump group 2
> Flow create 0 ingress pattern eth / ipv6 valid = 1 / udp valid = 1/ end action
> jump group 2
>
> While with the new item:
> Flow create 0 ingress pattern integrity_check l4_ok =1 / end action jump group
> 2
>
Add the missing mask to the new item:
Flow create 0 ingress pattern integrity_check spec = (l2_ok =1 | l3_ok = 1 | l4_ok = 1) mask = (l2_ok =1 | l3_ok = 1 | l4_ok = 1) / end action jump group
> Is this clearer?
>
>
> > >>>
> > >>> Second option: dedicated item
> > >>>
> > >>> Pros:
> > >>> 1. No API breakage, and there will be no for some time due to having
> > >>> extra space. (by using bits)
> > >>> 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
> > >>> IPv6.
> > >>
> > >> It depends on how bad (or good0 packets are handled.
> > >>
> > > Not sure what do you mean,
> >
> > Again, I hope we understand each other when we talk in terms
> > of real example and flow rules.
> >
> Please see answer above.
> I hope it will make things clearer.
>
> > >>> 3. Simplicity application can just look at one place to see all possible
> > >>> checks.
> > >>
> > >> It is a drawback from my point of view, since IPv4 checksum
> > >> check is out of IPv4 match item. I.e. analyzing IPv4 you should
> > >> take a look at 2 different flow items.
> > >>
> > > Are you talking from application view point, PMD or HW?
> > > From application yes it is true he needs to add one more item
> > > to the list, (depending on his flows, since he can have just
> > > one flow that checks all packet like I said and move the good
> > > ones to a different group and in that group he will match the
> > > ipv4 item.
> > > For example:
> > > ... pattern integrity = valid action jump group 3
> > > Group 3 pattern .... ipv4 ... actions .....
> > > Group 3 pattern ....ipv6 .... actions ...
> > >
> > > In any case at worse case it is just adding one more item
> > > to the flow.
> > >
> > > From PMD/HW extra items doesn't mean extra action in HW
> > > they can be combined, just like they would have it the
> > > condition was in the item itself.
> > >
> > >>> 4. Allow future support for more tests.
> > >>
> > >> It is the same for both solution since per-item solution
> > >> can keep reserved bits which may be used in the future.
> > >>
> > > Yes I agree,
> > >
> > >>>
> > >>> Cons:
> > >>> 1. New item, that holds number of fields from different items.
> > >>
> > >> 2. Not that nice for tunnels.
> > >
> > > Please look at above (not direct ) response since we have the level member
> > > tunnels are handled very nicely.
> > >
> > >>
> > >>>
> > >>> For starter the following bits are suggested:
> > >>> 1. packet_ok - means that all HW checks depending on packet layer have
> > >>> passed. This may mean that in some HW such flow should be splited to
> > >>> number of flows or fail.
> > >>> 2. l2_ok - all check flor layer 2 have passed.
> > >>> 3. l3_ok - all check flor layer 2 have passed. If packet doens't have
> > >>> l3 layer this check shoudl fail.
> > >>> 4. l4_ok - all check flor layer 2 have passed. If packet doesn't
> > >>> have l4 layer this check should fail.
> > >>> 5. l2_crc_ok - the layer 2 crc is O.K. it is possible that the crc will
> > >>> be O.K. but the l3_ok will be 0. it is not possible that l2_crc_ok will
> > >>> be 0 and the l3_ok will be 0.
> > >>> 6. ipv4_csum_ok - IPv4 checksum is O.K.
> > >>> 7. l4_csum_ok - layer 4 checksum is O.K.
> > >>> 8. l3_len_OK - check that the reported layer 3 len is smaller than the
> > >>> packet len.
> > >>>
> > >>> Example of usage:
> > >>> 1. check packets from all possible layers for integrity.
> > >>> flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
> > >>>
> > >>> 2. Check only packet with layer 4 (UDP / TCP)
> > >>> flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
> > >>>
> > >>> Signed-off-by: Ori Kam <orika@nvidia.com>
> > >>> ---
> > >>> doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++++++++
> > >>> lib/librte_ethdev/rte_flow.h | 46
> > >> ++++++++++++++++++++++++++++++++++++++
> > >>> 2 files changed, 65 insertions(+)
> > >>>
> > >>> diff --git a/doc/guides/prog_guide/rte_flow.rst
> > >> b/doc/guides/prog_guide/rte_flow.rst
> > >>> index aec2ba1..58b116e 100644
> > >>> --- a/doc/guides/prog_guide/rte_flow.rst
> > >>> +++ b/doc/guides/prog_guide/rte_flow.rst
> > >>> @@ -1398,6 +1398,25 @@ Matches a eCPRI header.
> > >>> - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
> > >>> - Default ``mask`` matches nothing, for all eCPRI messages.
> > >>>
> > >>> +Item: ``PACKET_INTEGRITY_CHECKS``
> > >>> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > >>> +
> > >>> +Matches packet integrity.
> > >>> +
> > >>> +- ``level``: the encapsulation level that should be checked. level 0 means
> > the
> > >>> + default PMD mode (Can be inner most / outermost). value of 1 means
> > >> outermost
> > >>> + and higher value means inner header. See also RSS level.
> > >>> +- ``packet_ok``: All HW packet integrity checks have passed based on the
> > >> max
> > >>> + layer of the packet.
> > >>> + layer of the packet.
> > >>> +- ``l2_ok``: all layer 2 HW integrity checks passed.
> > >>> +- ``l3_ok``: all layer 3 HW integrity checks passed.
> > >>> +- ``l4_ok``: all layer 3 HW integrity checks passed.
> > >>> +- ``l2_crc_ok``: layer 2 crc check passed.
> > >>> +- ``ipv4_csum_ok``: ipv4 checksum check passed.
> > >>> +- ``l4_csum_ok``: layer 4 checksum check passed.
> > >>> +- ``l3_len_ok``: the layer 3 len is smaller than the packet len.
> > >>> +
> > >>> Actions
> > >>> ~~~~~~~
> > >>>
> > >>> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> > >>> index 6cc5713..f6888a1 100644
> > >>> --- a/lib/librte_ethdev/rte_flow.h
> > >>> +++ b/lib/librte_ethdev/rte_flow.h
> > >>> @@ -551,6 +551,15 @@ enum rte_flow_item_type {
> > >>> * See struct rte_flow_item_geneve_opt
> > >>> */
> > >>> RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> > >>> +
> > >>> + /**
> > >>> + * [META]
> > >>> + *
> > >>> + * Matches on packet integrity.
> > >>> + *
> > >>> + * See struct rte_flow_item_packet_integrity_checks.
> > >>> + */
> > >>> + RTE_FLOW_ITEM_TYPE_PACKET_INTEGRITY_CHECKS,
> > >>> };
> > >>>
> > >>> /**
> > >>> @@ -1685,6 +1694,43 @@ struct rte_flow_item_geneve_opt {
> > >>> };
> > >>> #endif
> > >>>
> > >>> +struct rte_flow_item_packet_integrity_checks {
> > >>> + uint32_t level;
> > >>> + /**< Packet encapsulation level the item should apply to.
> > >>> + * @see rte_flow_action_rss
> > >>> + */
> > >>> +RTE_STD_C11
> > >>> + union {
> > >>> + struct {
> > >>> + uint64_t packet_ok:1;
> > >>> + /** The packet is valid after passing all HW checks. */
> > >>> + uint64_t l2_ok:1;
> > >>> + /**< L2 layer is valid after passing all HW checks. */
> > >>> + uint64_t l3_ok:1;
> > >>> + /**< L3 layer is valid after passing all HW checks. */
> > >>> + uint64_t l4_ok:1;
> > >>> + /**< L4 layer is valid after passing all HW checks. */
> > >>> + uint64_t l2_crc_ok:1;
> > >>> + /**< L2 layer checksum is valid. */
> > >>> + uint64_t ipv4_csum_ok:1;
> > >>> + /**< L3 layer checksum is valid. */
> > >>> + uint64_t l4_csum_ok:1;
> > >>> + /**< L4 layer checksum is valid. */
> > >>> + uint64_t l3_len_ok:1;
> > >>> + /**< The l3 len is smaller than the packet len. */
> > >>> + uint64_t reserved:56;
> > >>> + };
> > >>> + uint64_t value;
> > >>> + };
> > >>> +};
> > >>> +
> > >>> +#ifndef __cplusplus
> > >>> +static const struct rte_flow_item_sanity_checks
> > >>> + rte_flow_item_sanity_checks_mask = {
> > >>> + .value = 0,
> > >>> +};
> > >>> +#endif
> > >>> +
> > >>> /**
> > >>> * Matching pattern item definition.
> > >>> *
> > >>>
> > >
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v2 0/2] add packet integrity checks
2021-04-05 18:04 [dpdk-dev] [PATCH] ethdev: add packet integrity checks Ori Kam
2021-04-06 7:39 ` Jerin Jacob
2021-04-08 8:04 ` [dpdk-dev] [PATCH] ethdev: add packet integrity checks Andrew Rybchenko
@ 2021-04-11 17:34 ` Gregory Etelson
2021-04-11 17:34 ` [dpdk-dev] [PATCH v2 1/2] ethdev: " Gregory Etelson
2021-04-11 17:34 ` [dpdk-dev] [PATCH v2 2/2] app/testpmd: add support for integrity item Gregory Etelson
2 siblings, 2 replies; 68+ messages in thread
From: Gregory Etelson @ 2021-04-11 17:34 UTC (permalink / raw)
To: orika
Cc: ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
olivier.matz, thomas, viacheslavo
V2 adds tespmd patch to clarify proposed API usage.
The patches target upcoming rc1 deadline later this week.
However the API discussion is still open.
Ori Kam (2):
ethdev: add packet integrity checks
app/testpmd: add support for integrity item
app/test-pmd/cmdline_flow.c | 39 +++++++++++++++++++++++++
doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++++
lib/librte_ethdev/rte_flow.h | 47 ++++++++++++++++++++++++++++++
3 files changed, 105 insertions(+)
--
2.25.1
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v2 1/2] ethdev: add packet integrity checks
2021-04-11 17:34 ` [dpdk-dev] [PATCH v2 0/2] " Gregory Etelson
@ 2021-04-11 17:34 ` Gregory Etelson
2021-04-12 17:36 ` Ferruh Yigit
2021-04-11 17:34 ` [dpdk-dev] [PATCH v2 2/2] app/testpmd: add support for integrity item Gregory Etelson
1 sibling, 1 reply; 68+ messages in thread
From: Gregory Etelson @ 2021-04-11 17:34 UTC (permalink / raw)
To: orika
Cc: ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
olivier.matz, thomas, viacheslavo
From: Ori Kam <orika@nvidia.com>
Currently, DPDK application can offload the checksum check,
and report it in the mbuf.
However, as more and more applications are offloading some or all
logic and action to the HW, there is a need to check the packet
integrity so the right decision can be taken.
The application logic can be positive meaning if the packet is
valid jump / do actions, or negative if packet is not valid
jump to SW / do actions (like drop) a, and add default flow
(match all in low priority) that will direct the miss packet
to the miss path.
Since currenlty rte_flow works in positive way the assumtion is
that the postive way will be the common way in this case also.
When thinking what is the best API to implement such feature,
we need to considure the following (in no specific order):
1. API breakage.
2. Simplicity.
3. Performance.
4. HW capabilities.
5. rte_flow limitation.
6. Flexability.
First option: Add integrity flags to each of the items.
For example add checksum_ok to ipv4 item.
Pros:
1. No new rte_flow item.
2. Simple in the way that on each item the app can see
what checks are available.
Cons:
1. API breakage.
2. increase number of flows, since app can't add global rule and
must have dedicated flow for each of the flow combinations, for example
matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
result in 5 flows.
Second option: dedicated item
Pros:
1. No API breakage, and there will be no for some time due to having
extra space. (by using bits)
2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
IPv6.
3. Simplicity application can just look at one place to see all possible
checks.
4. Allow future support for more tests.
Cons:
1. New item, that holds number of fields from different items.
For starter the following bits are suggested:
1. packet_ok - means that all HW checks depending on packet layer have
passed. This may mean that in some HW such flow should be splited to
number of flows or fail.
2. l2_ok - all check flor layer 2 have passed.
3. l3_ok - all check flor layer 2 have passed. If packet doens't have
l3 layer this check shoudl fail.
4. l4_ok - all check flor layer 2 have passed. If packet doesn't
have l4 layer this check should fail.
5. l2_crc_ok - the layer 2 crc is O.K. it is possible that the crc will
be O.K. but the l3_ok will be 0. it is not possible that l2_crc_ok will
be 0 and the l3_ok will be 0.
6. ipv4_csum_ok - IPv4 checksum is O.K.
7. l4_csum_ok - layer 4 checksum is O.K.
8. l3_len_OK - check that the reported layer 3 len is smaller than the
packet len.
Example of usage:
1. check packets from all possible layers for integrity.
flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
2. Check only packet with layer 4 (UDP / TCP)
flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
Signed-off-by: Ori Kam <orika@nvidia.com>
---
v2: fix compilation error
---
doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++++
lib/librte_ethdev/rte_flow.h | 47 ++++++++++++++++++++++++++++++
2 files changed, 66 insertions(+)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index e1b93ecedf..87ef591405 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1398,6 +1398,25 @@ Matches a eCPRI header.
- ``hdr``: eCPRI header definition (``rte_ecpri.h``).
- Default ``mask`` matches nothing, for all eCPRI messages.
+Item: ``PACKET_INTEGRITY_CHECKS``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Matches packet integrity.
+
+- ``level``: the encapsulation level that should be checked. level 0 means the
+ default PMD mode (Can be inner most / outermost). value of 1 means outermost
+ and higher value means inner header. See also RSS level.
+- ``packet_ok``: All HW packet integrity checks have passed based on the max
+ layer of the packet.
+ layer of the packet.
+- ``l2_ok``: all layer 2 HW integrity checks passed.
+- ``l3_ok``: all layer 3 HW integrity checks passed.
+- ``l4_ok``: all layer 3 HW integrity checks passed.
+- ``l2_crc_ok``: layer 2 crc check passed.
+- ``ipv4_csum_ok``: ipv4 checksum check passed.
+- ``l4_csum_ok``: layer 4 checksum check passed.
+- ``l3_len_ok``: the layer 3 len is smaller than the packet len.
+
Actions
~~~~~~~
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index 6cc57136ac..77471af2c4 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -551,6 +551,15 @@ enum rte_flow_item_type {
* See struct rte_flow_item_geneve_opt
*/
RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
+
+ /**
+ * [META]
+ *
+ * Matches on packet integrity.
+ *
+ * See struct rte_flow_item_integrity.
+ */
+ RTE_FLOW_ITEM_TYPE_INTEGRITY,
};
/**
@@ -1685,6 +1694,44 @@ rte_flow_item_geneve_opt_mask = {
};
#endif
+__extension__
+struct rte_flow_item_integrity {
+ uint32_t level;
+ /**< Packet encapsulation level the item should apply to.
+ * @see rte_flow_action_rss
+ */
+ union {
+ struct {
+ uint64_t packet_ok:1;
+ /** The packet is valid after passing all HW checks. */
+ uint64_t l2_ok:1;
+ /**< L2 layer is valid after passing all HW checks. */
+ uint64_t l3_ok:1;
+ /**< L3 layer is valid after passing all HW checks. */
+ uint64_t l4_ok:1;
+ /**< L4 layer is valid after passing all HW checks. */
+ uint64_t l2_crc_ok:1;
+ /**< L2 layer checksum is valid. */
+ uint64_t ipv4_csum_ok:1;
+ /**< L3 layer checksum is valid. */
+ uint64_t l4_csum_ok:1;
+ /**< L4 layer checksum is valid. */
+ uint64_t l3_len_ok:1;
+ /**< The l3 len is smaller than the packet len. */
+ uint64_t reserved:56;
+ };
+ uint64_t value;
+ };
+};
+
+#ifndef __cplusplus
+static const struct rte_flow_item_integrity
+rte_flow_item_integrity_mask = {
+ .level = 0,
+ .value = 0,
+};
+#endif
+
/**
* Matching pattern item definition.
*
--
2.25.1
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v2 2/2] app/testpmd: add support for integrity item
2021-04-11 17:34 ` [dpdk-dev] [PATCH v2 0/2] " Gregory Etelson
2021-04-11 17:34 ` [dpdk-dev] [PATCH v2 1/2] ethdev: " Gregory Etelson
@ 2021-04-11 17:34 ` Gregory Etelson
2021-04-12 17:49 ` Ferruh Yigit
1 sibling, 1 reply; 68+ messages in thread
From: Gregory Etelson @ 2021-04-11 17:34 UTC (permalink / raw)
To: orika
Cc: ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
olivier.matz, thomas, viacheslavo, Xiaoyun Li
From: Ori Kam <orika@nvidia.com>
The integrity item allows the application to match
on the integrity of a packet.
use example:
match that packet integrity checks are ok. The checks depend on
packet layers. For example ICMP packet will not check L4 level.
flow create 0 ingress pattern integrity value mask 0x01 value spec 0x01
match that L4 packet is ok - check L2 & L3 & L4 layers:
flow create 0 ingress pattern integrity value mask 0xfe value spec 0xfe
Signed-off-by: Ori Kam <orika@nvidia.com>
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
---
v2 add testpmd patch
---
app/test-pmd/cmdline_flow.c | 39 +++++++++++++++++++++++++++++++++++++
1 file changed, 39 insertions(+)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index fb7a3a8bd3..b5dec34325 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -289,6 +289,9 @@ enum index {
ITEM_GENEVE_OPT_TYPE,
ITEM_GENEVE_OPT_LENGTH,
ITEM_GENEVE_OPT_DATA,
+ ITEM_INTEGRITY,
+ ITEM_INTEGRITY_LEVEL,
+ ITEM_INTEGRITY_VALUE,
/* Validate/create actions. */
ACTIONS,
@@ -956,6 +959,7 @@ static const enum index next_item[] = {
ITEM_PFCP,
ITEM_ECPRI,
ITEM_GENEVE_OPT,
+ ITEM_INTEGRITY,
END_SET,
ZERO,
};
@@ -1307,6 +1311,19 @@ static const enum index item_geneve_opt[] = {
ZERO,
};
+static const enum index item_integrity[] = {
+ ITEM_INTEGRITY_LEVEL,
+ ITEM_INTEGRITY_VALUE,
+ ZERO,
+};
+
+static const enum index item_integrity_lv[] = {
+ ITEM_INTEGRITY_LEVEL,
+ ITEM_INTEGRITY_VALUE,
+ ITEM_NEXT,
+ ZERO,
+};
+
static const enum index next_action[] = {
ACTION_END,
ACTION_VOID,
@@ -3373,6 +3390,28 @@ static const struct token token_list[] = {
(sizeof(struct rte_flow_item_geneve_opt),
ITEM_GENEVE_OPT_DATA_SIZE)),
},
+ [ITEM_INTEGRITY] = {
+ .name = "integrity",
+ .help = "match packet integrity",
+ .priv = PRIV_ITEM(INTEGRITY,
+ sizeof(struct rte_flow_item_integrity)),
+ .next = NEXT(item_integrity),
+ .call = parse_vc,
+ },
+ [ITEM_INTEGRITY_LEVEL] = {
+ .name = "level",
+ .help = "integrity level",
+ .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity, level)),
+ },
+ [ITEM_INTEGRITY_VALUE] = {
+ .name = "value",
+ .help = "integrity value",
+ .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity, value)),
+ },
/* Validate/create actions. */
[ACTIONS] = {
.name = "actions",
--
2.25.1
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/2] ethdev: add packet integrity checks
2021-04-11 17:34 ` [dpdk-dev] [PATCH v2 1/2] ethdev: " Gregory Etelson
@ 2021-04-12 17:36 ` Ferruh Yigit
2021-04-12 19:26 ` Ori Kam
0 siblings, 1 reply; 68+ messages in thread
From: Ferruh Yigit @ 2021-04-12 17:36 UTC (permalink / raw)
To: Gregory Etelson, orika
Cc: ajit.khaparde, andrew.rybchenko, dev, jerinj, olivier.matz,
thomas, viacheslavo
On 4/11/2021 6:34 PM, Gregory Etelson wrote:
> From: Ori Kam <orika@nvidia.com>
>
> Currently, DPDK application can offload the checksum check,
> and report it in the mbuf.
>
> However, as more and more applications are offloading some or all
> logic and action to the HW, there is a need to check the packet
> integrity so the right decision can be taken.
>
> The application logic can be positive meaning if the packet is
> valid jump / do actions, or negative if packet is not valid
> jump to SW / do actions (like drop) a, and add default flow
> (match all in low priority) that will direct the miss packet
> to the miss path.
>
> Since currenlty rte_flow works in positive way the assumtion is
> that the postive way will be the common way in this case also.
>
> When thinking what is the best API to implement such feature,
> we need to considure the following (in no specific order):
> 1. API breakage.
> 2. Simplicity.
> 3. Performance.
> 4. HW capabilities.
> 5. rte_flow limitation.
> 6. Flexability.
>
> First option: Add integrity flags to each of the items.
> For example add checksum_ok to ipv4 item.
>
> Pros:
> 1. No new rte_flow item.
> 2. Simple in the way that on each item the app can see
> what checks are available.
>
> Cons:
> 1. API breakage.
> 2. increase number of flows, since app can't add global rule and
> must have dedicated flow for each of the flow combinations, for example
> matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
> result in 5 flows.
>
> Second option: dedicated item
>
> Pros:
> 1. No API breakage, and there will be no for some time due to having
> extra space. (by using bits)
> 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
> IPv6.
> 3. Simplicity application can just look at one place to see all possible
> checks.
> 4. Allow future support for more tests.
>
> Cons:
> 1. New item, that holds number of fields from different items.
>
> For starter the following bits are suggested:
> 1. packet_ok - means that all HW checks depending on packet layer have
> passed. This may mean that in some HW such flow should be splited to
> number of flows or fail.
> 2. l2_ok - all check flor layer 2 have passed.
> 3. l3_ok - all check flor layer 2 have passed. If packet doens't have
> l3 layer this check shoudl fail.
> 4. l4_ok - all check flor layer 2 have passed. If packet doesn't
> have l4 layer this check should fail.
> 5. l2_crc_ok - the layer 2 crc is O.K. it is possible that the crc will
> be O.K. but the l3_ok will be 0. it is not possible that l2_crc_ok will
> be 0 and the l3_ok will be 0.
> 6. ipv4_csum_ok - IPv4 checksum is O.K.
> 7. l4_csum_ok - layer 4 checksum is O.K.
> 8. l3_len_OK - check that the reported layer 3 len is smaller than the
> packet len.
>
> Example of usage:
> 1. check packets from all possible layers for integrity.
> flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
>
> 2. Check only packet with layer 4 (UDP / TCP)
> flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
>
Hi Ori,
Is the intention of the API just filtering, like apply some action to the
packets based on their integration status. Like drop packets their l2_crc
checksum failed? Here configuration is done by existing offload APIs.
Or is the intention to configure the integration check on NIC, like to say
enable layer 2 checks, and do the action based on integration check status.
> Signed-off-by: Ori Kam <orika@nvidia.com>
> ---
> v2: fix compilation error
> ---
> doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++++
> lib/librte_ethdev/rte_flow.h | 47 ++++++++++++++++++++++++++++++
> 2 files changed, 66 insertions(+)
>
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index e1b93ecedf..87ef591405 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -1398,6 +1398,25 @@ Matches a eCPRI header.
> - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
> - Default ``mask`` matches nothing, for all eCPRI messages.
>
> +Item: ``PACKET_INTEGRITY_CHECKS``
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +Matches packet integrity.
> +
> +- ``level``: the encapsulation level that should be checked. level 0 means the
> + default PMD mode (Can be inner most / outermost). value of 1 means outermost
> + and higher value means inner header. See also RSS level.
> +- ``packet_ok``: All HW packet integrity checks have passed based on the max
> + layer of the packet.
> + layer of the packet.
> +- ``l2_ok``: all layer 2 HW integrity checks passed.
> +- ``l3_ok``: all layer 3 HW integrity checks passed.
> +- ``l4_ok``: all layer 3 HW integrity checks passed.
s/layer 3/ layer 4/
> +- ``l2_crc_ok``: layer 2 crc check passed.
> +- ``ipv4_csum_ok``: ipv4 checksum check passed.
> +- ``l4_csum_ok``: layer 4 checksum check passed.
> +- ``l3_len_ok``: the layer 3 len is smaller than the packet len.
> +
> Actions
> ~~~~~~~
>
> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> index 6cc57136ac..77471af2c4 100644
> --- a/lib/librte_ethdev/rte_flow.h
> +++ b/lib/librte_ethdev/rte_flow.h
> @@ -551,6 +551,15 @@ enum rte_flow_item_type {
> * See struct rte_flow_item_geneve_opt
> */
> RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> +
> + /**
> + * [META]
> + *
> + * Matches on packet integrity.
> + *
> + * See struct rte_flow_item_integrity.
> + */
> + RTE_FLOW_ITEM_TYPE_INTEGRITY,
> };
>
> /**
> @@ -1685,6 +1694,44 @@ rte_flow_item_geneve_opt_mask = {
> };
> #endif
>
> +__extension__
> +struct rte_flow_item_integrity {
> + uint32_t level;
> + /**< Packet encapsulation level the item should apply to.
> + * @see rte_flow_action_rss
> + */
> + union {
> + struct {
> + uint64_t packet_ok:1;
> + /** The packet is valid after passing all HW checks. */
> + uint64_t l2_ok:1;
> + /**< L2 layer is valid after passing all HW checks. */
> + uint64_t l3_ok:1;
> + /**< L3 layer is valid after passing all HW checks. */
> + uint64_t l4_ok:1;
> + /**< L4 layer is valid after passing all HW checks. */
> + uint64_t l2_crc_ok:1;
> + /**< L2 layer checksum is valid. */
> + uint64_t ipv4_csum_ok:1;
> + /**< L3 layer checksum is valid. */
> + uint64_t l4_csum_ok:1;
> + /**< L4 layer checksum is valid. */
> + uint64_t l3_len_ok:1;
> + /**< The l3 len is smaller than the packet len. */
packet len?
> + uint64_t reserved:56;
> + };
> + uint64_t value;
> + };
> +};
> +
> +#ifndef __cplusplus
> +static const struct rte_flow_item_integrity
> +rte_flow_item_integrity_mask = {
> + .level = 0,
> + .value = 0,
> +};
> +#endif
> +
> /**
> * Matching pattern item definition.
> *
>
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v2 2/2] app/testpmd: add support for integrity item
2021-04-11 17:34 ` [dpdk-dev] [PATCH v2 2/2] app/testpmd: add support for integrity item Gregory Etelson
@ 2021-04-12 17:49 ` Ferruh Yigit
2021-04-13 7:53 ` Ori Kam
0 siblings, 1 reply; 68+ messages in thread
From: Ferruh Yigit @ 2021-04-12 17:49 UTC (permalink / raw)
To: Gregory Etelson, orika
Cc: ajit.khaparde, andrew.rybchenko, dev, jerinj, olivier.matz,
thomas, viacheslavo, Xiaoyun Li
On 4/11/2021 6:34 PM, Gregory Etelson wrote:
> From: Ori Kam <orika@nvidia.com>
>
> The integrity item allows the application to match
> on the integrity of a packet.
>
> use example:
> match that packet integrity checks are ok. The checks depend on
> packet layers. For example ICMP packet will not check L4 level.
> flow create 0 ingress pattern integrity value mask 0x01 value spec 0x01
> match that L4 packet is ok - check L2 & L3 & L4 layers:
> flow create 0 ingress pattern integrity value mask 0xfe value spec 0xfe
>
> Signed-off-by: Ori Kam <orika@nvidia.com>
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> ---
> v2 add testpmd patch
> ---
> app/test-pmd/cmdline_flow.c | 39 +++++++++++++++++++++++++++++++++++++
Hi Gregory, Ori,
Can you add some samples to "testpmd_funcs.rst#flow-rules-management"?
I asked in some other thread but did not get any response, what do you think to
make 'testpmd_funcs.rst' sample update mandatory when testpmd flow added?
> 1 file changed, 39 insertions(+)
>
> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> index fb7a3a8bd3..b5dec34325 100644
> --- a/app/test-pmd/cmdline_flow.c
> +++ b/app/test-pmd/cmdline_flow.c
> @@ -289,6 +289,9 @@ enum index {
> ITEM_GENEVE_OPT_TYPE,
> ITEM_GENEVE_OPT_LENGTH,
> ITEM_GENEVE_OPT_DATA,
> + ITEM_INTEGRITY,
> + ITEM_INTEGRITY_LEVEL,
> + ITEM_INTEGRITY_VALUE,
>
> /* Validate/create actions. */
> ACTIONS,
> @@ -956,6 +959,7 @@ static const enum index next_item[] = {
> ITEM_PFCP,
> ITEM_ECPRI,
> ITEM_GENEVE_OPT,
> + ITEM_INTEGRITY,
> END_SET,
> ZERO,
> };
> @@ -1307,6 +1311,19 @@ static const enum index item_geneve_opt[] = {
> ZERO,
> };
>
> +static const enum index item_integrity[] = {
> + ITEM_INTEGRITY_LEVEL,
> + ITEM_INTEGRITY_VALUE,
> + ZERO,
> +};
> +
> +static const enum index item_integrity_lv[] = {
> + ITEM_INTEGRITY_LEVEL,
> + ITEM_INTEGRITY_VALUE,
> + ITEM_NEXT,
> + ZERO,
> +};
> +
> static const enum index next_action[] = {
> ACTION_END,
> ACTION_VOID,
> @@ -3373,6 +3390,28 @@ static const struct token token_list[] = {
> (sizeof(struct rte_flow_item_geneve_opt),
> ITEM_GENEVE_OPT_DATA_SIZE)),
> },
> + [ITEM_INTEGRITY] = {
> + .name = "integrity",
> + .help = "match packet integrity",
> + .priv = PRIV_ITEM(INTEGRITY,
> + sizeof(struct rte_flow_item_integrity)),
> + .next = NEXT(item_integrity),
> + .call = parse_vc,
> + },
> + [ITEM_INTEGRITY_LEVEL] = {
> + .name = "level",
> + .help = "integrity level",
> + .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
> + item_param),
> + .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity, level)),
> + },
> + [ITEM_INTEGRITY_VALUE] = {
> + .name = "value",
> + .help = "integrity value",
> + .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
> + item_param),
> + .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity, value)),
> + },
> /* Validate/create actions. */
> [ACTIONS] = {
> .name = "actions",
>
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/2] ethdev: add packet integrity checks
2021-04-12 17:36 ` Ferruh Yigit
@ 2021-04-12 19:26 ` Ori Kam
2021-04-12 23:31 ` Ferruh Yigit
0 siblings, 1 reply; 68+ messages in thread
From: Ori Kam @ 2021-04-12 19:26 UTC (permalink / raw)
To: Ferruh Yigit, Gregory Etelson
Cc: ajit.khaparde, andrew.rybchenko, dev, jerinj, olivier.matz,
NBU-Contact-Thomas Monjalon, Slava Ovsiienko
Hi Ferruh,
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Subject: Re: [PATCH v2 1/2] ethdev: add packet integrity checks
>
> On 4/11/2021 6:34 PM, Gregory Etelson wrote:
> > From: Ori Kam <orika@nvidia.com>
> >
> > Currently, DPDK application can offload the checksum check,
> > and report it in the mbuf.
> >
> > However, as more and more applications are offloading some or all
> > logic and action to the HW, there is a need to check the packet
> > integrity so the right decision can be taken.
> >
> > The application logic can be positive meaning if the packet is
> > valid jump / do actions, or negative if packet is not valid
> > jump to SW / do actions (like drop) a, and add default flow
> > (match all in low priority) that will direct the miss packet
> > to the miss path.
> >
> > Since currenlty rte_flow works in positive way the assumtion is
> > that the postive way will be the common way in this case also.
> >
> > When thinking what is the best API to implement such feature,
> > we need to considure the following (in no specific order):
> > 1. API breakage.
> > 2. Simplicity.
> > 3. Performance.
> > 4. HW capabilities.
> > 5. rte_flow limitation.
> > 6. Flexability.
> >
> > First option: Add integrity flags to each of the items.
> > For example add checksum_ok to ipv4 item.
> >
> > Pros:
> > 1. No new rte_flow item.
> > 2. Simple in the way that on each item the app can see
> > what checks are available.
> >
> > Cons:
> > 1. API breakage.
> > 2. increase number of flows, since app can't add global rule and
> > must have dedicated flow for each of the flow combinations, for example
> > matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
> > result in 5 flows.
> >
> > Second option: dedicated item
> >
> > Pros:
> > 1. No API breakage, and there will be no for some time due to having
> > extra space. (by using bits)
> > 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
> > IPv6.
> > 3. Simplicity application can just look at one place to see all possible
> > checks.
> > 4. Allow future support for more tests.
> >
> > Cons:
> > 1. New item, that holds number of fields from different items.
> >
> > For starter the following bits are suggested:
> > 1. packet_ok - means that all HW checks depending on packet layer have
> > passed. This may mean that in some HW such flow should be splited to
> > number of flows or fail.
> > 2. l2_ok - all check flor layer 2 have passed.
> > 3. l3_ok - all check flor layer 2 have passed. If packet doens't have
> > l3 layer this check shoudl fail.
> > 4. l4_ok - all check flor layer 2 have passed. If packet doesn't
> > have l4 layer this check should fail.
> > 5. l2_crc_ok - the layer 2 crc is O.K. it is possible that the crc will
> > be O.K. but the l3_ok will be 0. it is not possible that l2_crc_ok will
> > be 0 and the l3_ok will be 0.
> > 6. ipv4_csum_ok - IPv4 checksum is O.K.
> > 7. l4_csum_ok - layer 4 checksum is O.K.
> > 8. l3_len_OK - check that the reported layer 3 len is smaller than the
> > packet len.
> >
> > Example of usage:
> > 1. check packets from all possible layers for integrity.
> > flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
> >
> > 2. Check only packet with layer 4 (UDP / TCP)
> > flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
> >
>
> Hi Ori,
>
> Is the intention of the API just filtering, like apply some action to the
> packets based on their integration status. Like drop packets their l2_crc
> checksum failed? Here configuration is done by existing offload APIs.
>
> Or is the intention to configure the integration check on NIC, like to say
> enable layer 2 checks, and do the action based on integration check status.
>
If I understand your question the first case is the one that this patch is targeting.
meaning based on those bits route/apply actions to the packet while still in the
HW.
This is not design to enable the queue status bits.
In the use case suggestion by this patch, just like you said the app
can decide to drop the packet before arriving to the queue, application may also
use the mark + queue action to mark to the SW what is the issue with this packet.
I'm not sure I understand your comment about "here configuration is done by existing
offload API" do you mean like the drop / jump to table / any other rte_flow action?
>
> > Signed-off-by: Ori Kam <orika@nvidia.com>
> > ---
> > v2: fix compilation error
> > ---
> > doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++++
> > lib/librte_ethdev/rte_flow.h | 47 ++++++++++++++++++++++++++++++
> > 2 files changed, 66 insertions(+)
> >
> > diff --git a/doc/guides/prog_guide/rte_flow.rst
> b/doc/guides/prog_guide/rte_flow.rst
> > index e1b93ecedf..87ef591405 100644
> > --- a/doc/guides/prog_guide/rte_flow.rst
> > +++ b/doc/guides/prog_guide/rte_flow.rst
> > @@ -1398,6 +1398,25 @@ Matches a eCPRI header.
> > - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
> > - Default ``mask`` matches nothing, for all eCPRI messages.
> >
> > +Item: ``PACKET_INTEGRITY_CHECKS``
> > +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > +
> > +Matches packet integrity.
> > +
> > +- ``level``: the encapsulation level that should be checked. level 0 means the
> > + default PMD mode (Can be inner most / outermost). value of 1 means
> outermost
> > + and higher value means inner header. See also RSS level.
> > +- ``packet_ok``: All HW packet integrity checks have passed based on the
> max
> > + layer of the packet.
> > + layer of the packet.
> > +- ``l2_ok``: all layer 2 HW integrity checks passed.
> > +- ``l3_ok``: all layer 3 HW integrity checks passed.
> > +- ``l4_ok``: all layer 3 HW integrity checks passed.
>
> s/layer 3/ layer 4/
>
Will fix.
> > +- ``l2_crc_ok``: layer 2 crc check passed.
> > +- ``ipv4_csum_ok``: ipv4 checksum check passed.
> > +- ``l4_csum_ok``: layer 4 checksum check passed.
> > +- ``l3_len_ok``: the layer 3 len is smaller than the packet len.
> > +
> > Actions
> > ~~~~~~~
> >
> > diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> > index 6cc57136ac..77471af2c4 100644
> > --- a/lib/librte_ethdev/rte_flow.h
> > +++ b/lib/librte_ethdev/rte_flow.h
> > @@ -551,6 +551,15 @@ enum rte_flow_item_type {
> > * See struct rte_flow_item_geneve_opt
> > */
> > RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> > +
> > + /**
> > + * [META]
> > + *
> > + * Matches on packet integrity.
> > + *
> > + * See struct rte_flow_item_integrity.
> > + */
> > + RTE_FLOW_ITEM_TYPE_INTEGRITY,
> > };
> >
> > /**
> > @@ -1685,6 +1694,44 @@ rte_flow_item_geneve_opt_mask = {
> > };
> > #endif
> >
> > +__extension__
> > +struct rte_flow_item_integrity {
> > + uint32_t level;
> > + /**< Packet encapsulation level the item should apply to.
> > + * @see rte_flow_action_rss
> > + */
> > + union {
> > + struct {
> > + uint64_t packet_ok:1;
> > + /** The packet is valid after passing all HW checks. */
> > + uint64_t l2_ok:1;
> > + /**< L2 layer is valid after passing all HW checks. */
> > + uint64_t l3_ok:1;
> > + /**< L3 layer is valid after passing all HW checks. */
> > + uint64_t l4_ok:1;
> > + /**< L4 layer is valid after passing all HW checks. */
> > + uint64_t l2_crc_ok:1;
> > + /**< L2 layer checksum is valid. */
> > + uint64_t ipv4_csum_ok:1;
> > + /**< L3 layer checksum is valid. */
> > + uint64_t l4_csum_ok:1;
> > + /**< L4 layer checksum is valid. */
> > + uint64_t l3_len_ok:1;
> > + /**< The l3 len is smaller than the packet len. */
>
> packet len?
>
Do you mean replace the l3_len_ok with packet len?
My only issue is that the check is comparing the l3 len to the packet len.
If you still think it is better to call it packet len, I'm also O.K with it.
> > + uint64_t reserved:56;
> > + };
> > + uint64_t value;
> > + };
> > +};
> > +
> > +#ifndef __cplusplus
> > +static const struct rte_flow_item_integrity
> > +rte_flow_item_integrity_mask = {
> > + .level = 0,
> > + .value = 0,
> > +};
> > +#endif
> > +
> > /**
> > * Matching pattern item definition.
> > *
> >
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/2] ethdev: add packet integrity checks
2021-04-12 19:26 ` Ori Kam
@ 2021-04-12 23:31 ` Ferruh Yigit
2021-04-13 7:12 ` Ori Kam
0 siblings, 1 reply; 68+ messages in thread
From: Ferruh Yigit @ 2021-04-12 23:31 UTC (permalink / raw)
To: Ori Kam, Gregory Etelson
Cc: ajit.khaparde, andrew.rybchenko, dev, jerinj, olivier.matz,
NBU-Contact-Thomas Monjalon, Slava Ovsiienko
On 4/12/2021 8:26 PM, Ori Kam wrote:
> Hi Ferruh,
>
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>> Subject: Re: [PATCH v2 1/2] ethdev: add packet integrity checks
>>
>> On 4/11/2021 6:34 PM, Gregory Etelson wrote:
>>> From: Ori Kam <orika@nvidia.com>
>>>
>>> Currently, DPDK application can offload the checksum check,
>>> and report it in the mbuf.
>>>
>>> However, as more and more applications are offloading some or all
>>> logic and action to the HW, there is a need to check the packet
>>> integrity so the right decision can be taken.
>>>
>>> The application logic can be positive meaning if the packet is
>>> valid jump / do actions, or negative if packet is not valid
>>> jump to SW / do actions (like drop) a, and add default flow
>>> (match all in low priority) that will direct the miss packet
>>> to the miss path.
>>>
>>> Since currenlty rte_flow works in positive way the assumtion is
>>> that the postive way will be the common way in this case also.
>>>
>>> When thinking what is the best API to implement such feature,
>>> we need to considure the following (in no specific order):
>>> 1. API breakage.
>>> 2. Simplicity.
>>> 3. Performance.
>>> 4. HW capabilities.
>>> 5. rte_flow limitation.
>>> 6. Flexability.
>>>
>>> First option: Add integrity flags to each of the items.
>>> For example add checksum_ok to ipv4 item.
>>>
>>> Pros:
>>> 1. No new rte_flow item.
>>> 2. Simple in the way that on each item the app can see
>>> what checks are available.
>>>
>>> Cons:
>>> 1. API breakage.
>>> 2. increase number of flows, since app can't add global rule and
>>> must have dedicated flow for each of the flow combinations, for example
>>> matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
>>> result in 5 flows.
>>>
>>> Second option: dedicated item
>>>
>>> Pros:
>>> 1. No API breakage, and there will be no for some time due to having
>>> extra space. (by using bits)
>>> 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
>>> IPv6.
>>> 3. Simplicity application can just look at one place to see all possible
>>> checks.
>>> 4. Allow future support for more tests.
>>>
>>> Cons:
>>> 1. New item, that holds number of fields from different items.
>>>
>>> For starter the following bits are suggested:
>>> 1. packet_ok - means that all HW checks depending on packet layer have
>>> passed. This may mean that in some HW such flow should be splited to
>>> number of flows or fail.
>>> 2. l2_ok - all check flor layer 2 have passed.
>>> 3. l3_ok - all check flor layer 2 have passed. If packet doens't have
>>> l3 layer this check shoudl fail.
>>> 4. l4_ok - all check flor layer 2 have passed. If packet doesn't
>>> have l4 layer this check should fail.
>>> 5. l2_crc_ok - the layer 2 crc is O.K. it is possible that the crc will
>>> be O.K. but the l3_ok will be 0. it is not possible that l2_crc_ok will
>>> be 0 and the l3_ok will be 0.
>>> 6. ipv4_csum_ok - IPv4 checksum is O.K.
>>> 7. l4_csum_ok - layer 4 checksum is O.K.
>>> 8. l3_len_OK - check that the reported layer 3 len is smaller than the
>>> packet len.
>>>
>>> Example of usage:
>>> 1. check packets from all possible layers for integrity.
>>> flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
>>>
>>> 2. Check only packet with layer 4 (UDP / TCP)
>>> flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
>>>
>>
>> Hi Ori,
>>
>> Is the intention of the API just filtering, like apply some action to the
>> packets based on their integration status. Like drop packets their l2_crc
>> checksum failed? Here configuration is done by existing offload APIs.
>>
>> Or is the intention to configure the integration check on NIC, like to say
>> enable layer 2 checks, and do the action based on integration check status.
>>
> If I understand your question the first case is the one that this patch is targeting.
> meaning based on those bits route/apply actions to the packet while still in the
> HW.
>
> This is not design to enable the queue status bits.
> In the use case suggestion by this patch, just like you said the app
> can decide to drop the packet before arriving to the queue, application may also
> use the mark + queue action to mark to the SW what is the issue with this packet.
>
> I'm not sure I understand your comment about "here configuration is done by existing
> offload API" do you mean like the drop / jump to table / any other rte_flow action?
>
>
I am asking because difference between device configuration and packet filtering
seems getting more blurred in the flow API.
Currently L4 checksum offload is requested by application via setting
'DEV_RX_OFFLOAD_TCP_CKSUM' (UDP/SCTP/...) offload flags. This is the way to
configure HW.
Is the intention of this patch doing packet filtering after device configured
with above offload API?
Or is the intention HW to be configured via flow API, like if "l4_ok = 1" is set
in the rule, will it enable L4 checks first and do the filtering later?
If not what is the expected behavior when integration checks are not enabled
when the rule is created?
>>
>>> Signed-off-by: Ori Kam <orika@nvidia.com>
>>> ---
>>> v2: fix compilation error
>>> ---
>>> doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++++
>>> lib/librte_ethdev/rte_flow.h | 47 ++++++++++++++++++++++++++++++
>>> 2 files changed, 66 insertions(+)
>>>
>>> diff --git a/doc/guides/prog_guide/rte_flow.rst
>> b/doc/guides/prog_guide/rte_flow.rst
>>> index e1b93ecedf..87ef591405 100644
>>> --- a/doc/guides/prog_guide/rte_flow.rst
>>> +++ b/doc/guides/prog_guide/rte_flow.rst
>>> @@ -1398,6 +1398,25 @@ Matches a eCPRI header.
>>> - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
>>> - Default ``mask`` matches nothing, for all eCPRI messages.
>>>
>>> +Item: ``PACKET_INTEGRITY_CHECKS``
>>> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>> +
>>> +Matches packet integrity.
>>> +
>>> +- ``level``: the encapsulation level that should be checked. level 0 means the
>>> + default PMD mode (Can be inner most / outermost). value of 1 means
>> outermost
>>> + and higher value means inner header. See also RSS level.
>>> +- ``packet_ok``: All HW packet integrity checks have passed based on the
>> max
>>> + layer of the packet.
>>> + layer of the packet.
>>> +- ``l2_ok``: all layer 2 HW integrity checks passed.
>>> +- ``l3_ok``: all layer 3 HW integrity checks passed.
>>> +- ``l4_ok``: all layer 3 HW integrity checks passed.
>>
>> s/layer 3/ layer 4/
>>
> Will fix.
>
>>> +- ``l2_crc_ok``: layer 2 crc check passed.
>>> +- ``ipv4_csum_ok``: ipv4 checksum check passed.
>>> +- ``l4_csum_ok``: layer 4 checksum check passed.
>>> +- ``l3_len_ok``: the layer 3 len is smaller than the packet len.
>>> +
>>> Actions
>>> ~~~~~~~
>>>
>>> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
>>> index 6cc57136ac..77471af2c4 100644
>>> --- a/lib/librte_ethdev/rte_flow.h
>>> +++ b/lib/librte_ethdev/rte_flow.h
>>> @@ -551,6 +551,15 @@ enum rte_flow_item_type {
>>> * See struct rte_flow_item_geneve_opt
>>> */
>>> RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
>>> +
>>> + /**
>>> + * [META]
>>> + *
>>> + * Matches on packet integrity.
>>> + *
>>> + * See struct rte_flow_item_integrity.
>>> + */
>>> + RTE_FLOW_ITEM_TYPE_INTEGRITY,
>>> };
>>>
>>> /**
>>> @@ -1685,6 +1694,44 @@ rte_flow_item_geneve_opt_mask = {
>>> };
>>> #endif
>>>
>>> +__extension__
>>> +struct rte_flow_item_integrity {
>>> + uint32_t level;
>>> + /**< Packet encapsulation level the item should apply to.
>>> + * @see rte_flow_action_rss
>>> + */
>>> + union {
>>> + struct {
>>> + uint64_t packet_ok:1;
>>> + /** The packet is valid after passing all HW checks. */
>>> + uint64_t l2_ok:1;
>>> + /**< L2 layer is valid after passing all HW checks. */
>>> + uint64_t l3_ok:1;
>>> + /**< L3 layer is valid after passing all HW checks. */
>>> + uint64_t l4_ok:1;
>>> + /**< L4 layer is valid after passing all HW checks. */
>>> + uint64_t l2_crc_ok:1;
>>> + /**< L2 layer checksum is valid. */
>>> + uint64_t ipv4_csum_ok:1;
>>> + /**< L3 layer checksum is valid. */
>>> + uint64_t l4_csum_ok:1;
>>> + /**< L4 layer checksum is valid. */
>>> + uint64_t l3_len_ok:1;
>>> + /**< The l3 len is smaller than the packet len. */
>>
>> packet len?
>>
> Do you mean replace the l3_len_ok with packet len?
no, I was trying to ask what is "packet len" here? frame length, or mbuf buffer
length, or something else?
> My only issue is that the check is comparing the l3 len to the packet len.
>
> If you still think it is better to call it packet len, I'm also O.K with it.
>
>>> + uint64_t reserved:56;
>>> + };
>>> + uint64_t value;
>>> + };
>>> +};
>>> +
>>> +#ifndef __cplusplus
>>> +static const struct rte_flow_item_integrity
>>> +rte_flow_item_integrity_mask = {
>>> + .level = 0,
>>> + .value = 0,
>>> +};
>>> +#endif
>>> +
>>> /**
>>> * Matching pattern item definition.
>>> *
>>>
>
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/2] ethdev: add packet integrity checks
2021-04-12 23:31 ` Ferruh Yigit
@ 2021-04-13 7:12 ` Ori Kam
2021-04-13 8:03 ` Ferruh Yigit
0 siblings, 1 reply; 68+ messages in thread
From: Ori Kam @ 2021-04-13 7:12 UTC (permalink / raw)
To: Ferruh Yigit, Gregory Etelson
Cc: ajit.khaparde, andrew.rybchenko, dev, jerinj, olivier.matz,
NBU-Contact-Thomas Monjalon, Slava Ovsiienko
Hi Ferruh,
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Subject: Re: [PATCH v2 1/2] ethdev: add packet integrity checks
>
> On 4/12/2021 8:26 PM, Ori Kam wrote:
> > Hi Ferruh,
> >
> >> -----Original Message-----
> >> From: Ferruh Yigit <ferruh.yigit@intel.com>
> >> Subject: Re: [PATCH v2 1/2] ethdev: add packet integrity checks
> >>
> >> On 4/11/2021 6:34 PM, Gregory Etelson wrote:
> >>> From: Ori Kam <orika@nvidia.com>
> >>>
> >>> Currently, DPDK application can offload the checksum check,
> >>> and report it in the mbuf.
> >>>
> >>> However, as more and more applications are offloading some or all
> >>> logic and action to the HW, there is a need to check the packet
> >>> integrity so the right decision can be taken.
> >>>
> >>> The application logic can be positive meaning if the packet is
> >>> valid jump / do actions, or negative if packet is not valid
> >>> jump to SW / do actions (like drop) a, and add default flow
> >>> (match all in low priority) that will direct the miss packet
> >>> to the miss path.
> >>>
> >>> Since currenlty rte_flow works in positive way the assumtion is
> >>> that the postive way will be the common way in this case also.
> >>>
> >>> When thinking what is the best API to implement such feature,
> >>> we need to considure the following (in no specific order):
> >>> 1. API breakage.
> >>> 2. Simplicity.
> >>> 3. Performance.
> >>> 4. HW capabilities.
> >>> 5. rte_flow limitation.
> >>> 6. Flexability.
> >>>
> >>> First option: Add integrity flags to each of the items.
> >>> For example add checksum_ok to ipv4 item.
> >>>
> >>> Pros:
> >>> 1. No new rte_flow item.
> >>> 2. Simple in the way that on each item the app can see
> >>> what checks are available.
> >>>
> >>> Cons:
> >>> 1. API breakage.
> >>> 2. increase number of flows, since app can't add global rule and
> >>> must have dedicated flow for each of the flow combinations, for
> example
> >>> matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
> >>> result in 5 flows.
> >>>
> >>> Second option: dedicated item
> >>>
> >>> Pros:
> >>> 1. No API breakage, and there will be no for some time due to having
> >>> extra space. (by using bits)
> >>> 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
> >>> IPv6.
> >>> 3. Simplicity application can just look at one place to see all possible
> >>> checks.
> >>> 4. Allow future support for more tests.
> >>>
> >>> Cons:
> >>> 1. New item, that holds number of fields from different items.
> >>>
> >>> For starter the following bits are suggested:
> >>> 1. packet_ok - means that all HW checks depending on packet layer have
> >>> passed. This may mean that in some HW such flow should be splited to
> >>> number of flows or fail.
> >>> 2. l2_ok - all check flor layer 2 have passed.
> >>> 3. l3_ok - all check flor layer 2 have passed. If packet doens't have
> >>> l3 layer this check shoudl fail.
> >>> 4. l4_ok - all check flor layer 2 have passed. If packet doesn't
> >>> have l4 layer this check should fail.
> >>> 5. l2_crc_ok - the layer 2 crc is O.K. it is possible that the crc will
> >>> be O.K. but the l3_ok will be 0. it is not possible that l2_crc_ok will
> >>> be 0 and the l3_ok will be 0.
> >>> 6. ipv4_csum_ok - IPv4 checksum is O.K.
> >>> 7. l4_csum_ok - layer 4 checksum is O.K.
> >>> 8. l3_len_OK - check that the reported layer 3 len is smaller than the
> >>> packet len.
> >>>
> >>> Example of usage:
> >>> 1. check packets from all possible layers for integrity.
> >>> flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
> >>>
> >>> 2. Check only packet with layer 4 (UDP / TCP)
> >>> flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
> >>>
> >>
> >> Hi Ori,
> >>
> >> Is the intention of the API just filtering, like apply some action to the
> >> packets based on their integration status. Like drop packets their l2_crc
> >> checksum failed? Here configuration is done by existing offload APIs.
> >>
> >> Or is the intention to configure the integration check on NIC, like to say
> >> enable layer 2 checks, and do the action based on integration check status.
> >>
> > If I understand your question the first case is the one that this patch is
> targeting.
> > meaning based on those bits route/apply actions to the packet while still in
> the
> > HW.
> >
> > This is not design to enable the queue status bits.
> > In the use case suggestion by this patch, just like you said the app
> > can decide to drop the packet before arriving to the queue, application may
> also
> > use the mark + queue action to mark to the SW what is the issue with this
> packet.
> >
> > I'm not sure I understand your comment about "here configuration is done by
> existing
> > offload API" do you mean like the drop / jump to table / any other rte_flow
> action?
> >
> >
>
> I am asking because difference between device configuration and packet
> filtering
> seems getting more blurred in the flow API.
>
> Currently L4 checksum offload is requested by application via setting
> 'DEV_RX_OFFLOAD_TCP_CKSUM' (UDP/SCTP/...) offload flags. This is the way
> to
> configure HW.
>
> Is the intention of this patch doing packet filtering after device configured
> with above offload API?
>
> Or is the intention HW to be configured via flow API, like if "l4_ok = 1" is set
> in the rule, will it enable L4 checks first and do the filtering later?
>
> If not what is the expected behavior when integration checks are not enabled
> when the rule is created?
>
Let me try to explain it in a different way:
When application enables 'DEV_RX_OFFLOAD_TCP_CKSUM' (UDP/SCTP/...) offload flags
It only means that the HW will report the checksum in the mbuf.
Lets call this mode RX queue offload.
Now I'm introducing rte_flow offload,
This means that the application can create rte_flow that matches those
bits and based on that take different actions that are defined by the rte_flow.
Example for such a flow
Flow create 0 ingress pattern integrity spec packet_ok = 0 mask packet_ok = 1 / end actions count / drop / end
Offloading such a flow will result that all invalid packets will be counted and dropped, in the HW
so even if the RX queue offload was enabled, no invalid packets will arrive to the application.
In other case lets assume that the application wants all valid packets to jump to the next group,
and all the reset of the packets will go to SW. (we can assume that later in the pipeline also valid packets
will arrive to the application)
Flow create 0 ingress pattern integrity spec packet_ok = 1 mask packet_ok = 1 / end actions jump group 1 / end
Flow create 0 priority 1 pattern eth / end actions rss / end
In this case if the application enabled the RX offload then if the application will receive invalid packet
also the flag in the mbuf will be set.
As you can see those two offloads mode are complementary to each other and one doesn't force the other one in any
way.
I hope this is clearer.
> >>
> >>> Signed-off-by: Ori Kam <orika@nvidia.com>
> >>> ---
> >>> v2: fix compilation error
> >>> ---
> >>> doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++++
> >>> lib/librte_ethdev/rte_flow.h | 47
> ++++++++++++++++++++++++++++++
> >>> 2 files changed, 66 insertions(+)
> >>>
> >>> diff --git a/doc/guides/prog_guide/rte_flow.rst
> >> b/doc/guides/prog_guide/rte_flow.rst
> >>> index e1b93ecedf..87ef591405 100644
> >>> --- a/doc/guides/prog_guide/rte_flow.rst
> >>> +++ b/doc/guides/prog_guide/rte_flow.rst
> >>> @@ -1398,6 +1398,25 @@ Matches a eCPRI header.
> >>> - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
> >>> - Default ``mask`` matches nothing, for all eCPRI messages.
> >>>
> >>> +Item: ``PACKET_INTEGRITY_CHECKS``
> >>> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> >>> +
> >>> +Matches packet integrity.
> >>> +
> >>> +- ``level``: the encapsulation level that should be checked. level 0 means
> the
> >>> + default PMD mode (Can be inner most / outermost). value of 1 means
> >> outermost
> >>> + and higher value means inner header. See also RSS level.
> >>> +- ``packet_ok``: All HW packet integrity checks have passed based on the
> >> max
> >>> + layer of the packet.
> >>> + layer of the packet.
> >>> +- ``l2_ok``: all layer 2 HW integrity checks passed.
> >>> +- ``l3_ok``: all layer 3 HW integrity checks passed.
> >>> +- ``l4_ok``: all layer 3 HW integrity checks passed.
> >>
> >> s/layer 3/ layer 4/
> >>
> > Will fix.
> >
> >>> +- ``l2_crc_ok``: layer 2 crc check passed.
> >>> +- ``ipv4_csum_ok``: ipv4 checksum check passed.
> >>> +- ``l4_csum_ok``: layer 4 checksum check passed.
> >>> +- ``l3_len_ok``: the layer 3 len is smaller than the packet len.
> >>> +
> >>> Actions
> >>> ~~~~~~~
> >>>
> >>> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> >>> index 6cc57136ac..77471af2c4 100644
> >>> --- a/lib/librte_ethdev/rte_flow.h
> >>> +++ b/lib/librte_ethdev/rte_flow.h
> >>> @@ -551,6 +551,15 @@ enum rte_flow_item_type {
> >>> * See struct rte_flow_item_geneve_opt
> >>> */
> >>> RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> >>> +
> >>> + /**
> >>> + * [META]
> >>> + *
> >>> + * Matches on packet integrity.
> >>> + *
> >>> + * See struct rte_flow_item_integrity.
> >>> + */
> >>> + RTE_FLOW_ITEM_TYPE_INTEGRITY,
> >>> };
> >>>
> >>> /**
> >>> @@ -1685,6 +1694,44 @@ rte_flow_item_geneve_opt_mask = {
> >>> };
> >>> #endif
> >>>
> >>> +__extension__
> >>> +struct rte_flow_item_integrity {
> >>> + uint32_t level;
> >>> + /**< Packet encapsulation level the item should apply to.
> >>> + * @see rte_flow_action_rss
> >>> + */
> >>> + union {
> >>> + struct {
> >>> + uint64_t packet_ok:1;
> >>> + /** The packet is valid after passing all HW checks. */
> >>> + uint64_t l2_ok:1;
> >>> + /**< L2 layer is valid after passing all HW checks. */
> >>> + uint64_t l3_ok:1;
> >>> + /**< L3 layer is valid after passing all HW checks. */
> >>> + uint64_t l4_ok:1;
> >>> + /**< L4 layer is valid after passing all HW checks. */
> >>> + uint64_t l2_crc_ok:1;
> >>> + /**< L2 layer checksum is valid. */
> >>> + uint64_t ipv4_csum_ok:1;
> >>> + /**< L3 layer checksum is valid. */
> >>> + uint64_t l4_csum_ok:1;
> >>> + /**< L4 layer checksum is valid. */
> >>> + uint64_t l3_len_ok:1;
> >>> + /**< The l3 len is smaller than the packet len. */
> >>
> >> packet len?
> >>
> > Do you mean replace the l3_len_ok with packet len?
>
> no, I was trying to ask what is "packet len" here? frame length, or mbuf buffer
> length, or something else?
>
Frame length.
> > My only issue is that the check is comparing the l3 len to the packet len.
> >
> > If you still think it is better to call it packet len, I'm also O.K with it.
> >
> >>> + uint64_t reserved:56;
> >>> + };
> >>> + uint64_t value;
> >>> + };
> >>> +};
> >>> +
> >>> +#ifndef __cplusplus
> >>> +static const struct rte_flow_item_integrity
> >>> +rte_flow_item_integrity_mask = {
> >>> + .level = 0,
> >>> + .value = 0,
> >>> +};
> >>> +#endif
> >>> +
> >>> /**
> >>> * Matching pattern item definition.
> >>> *
> >>>
> >
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v2 2/2] app/testpmd: add support for integrity item
2021-04-12 17:49 ` Ferruh Yigit
@ 2021-04-13 7:53 ` Ori Kam
2021-04-13 8:14 ` Ferruh Yigit
0 siblings, 1 reply; 68+ messages in thread
From: Ori Kam @ 2021-04-13 7:53 UTC (permalink / raw)
To: Ferruh Yigit, Gregory Etelson
Cc: ajit.khaparde, andrew.rybchenko, dev, jerinj, olivier.matz,
NBU-Contact-Thomas Monjalon, Slava Ovsiienko, Xiaoyun Li
Hi Ferruh,
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
>
> On 4/11/2021 6:34 PM, Gregory Etelson wrote:
> > From: Ori Kam <orika@nvidia.com>
> >
> > The integrity item allows the application to match
> > on the integrity of a packet.
> >
> > use example:
> > match that packet integrity checks are ok. The checks depend on
> > packet layers. For example ICMP packet will not check L4 level.
> > flow create 0 ingress pattern integrity value mask 0x01 value spec 0x01
> > match that L4 packet is ok - check L2 & L3 & L4 layers:
> > flow create 0 ingress pattern integrity value mask 0xfe value spec 0xfe
> >
> > Signed-off-by: Ori Kam <orika@nvidia.com>
> > Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> > ---
> > v2 add testpmd patch
> > ---
> > app/test-pmd/cmdline_flow.c | 39
> +++++++++++++++++++++++++++++++++++++
>
> Hi Gregory, Ori,
>
> Can you add some samples to "testpmd_funcs.rst#flow-rules-management"?
>
> I asked in some other thread but did not get any response, what do you think to
> make 'testpmd_funcs.rst' sample update mandatory when testpmd flow added?
>
I fully agree that each new function should be mandatory,
The question is do we want that each new item / action (they use existing function)
I think it is a bit of overhead but I don't have strong opinion.
>
> > 1 file changed, 39 insertions(+)
> >
> > diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> > index fb7a3a8bd3..b5dec34325 100644
> > --- a/app/test-pmd/cmdline_flow.c
> > +++ b/app/test-pmd/cmdline_flow.c
> > @@ -289,6 +289,9 @@ enum index {
> > ITEM_GENEVE_OPT_TYPE,
> > ITEM_GENEVE_OPT_LENGTH,
> > ITEM_GENEVE_OPT_DATA,
> > + ITEM_INTEGRITY,
> > + ITEM_INTEGRITY_LEVEL,
> > + ITEM_INTEGRITY_VALUE,
> >
> > /* Validate/create actions. */
> > ACTIONS,
> > @@ -956,6 +959,7 @@ static const enum index next_item[] = {
> > ITEM_PFCP,
> > ITEM_ECPRI,
> > ITEM_GENEVE_OPT,
> > + ITEM_INTEGRITY,
> > END_SET,
> > ZERO,
> > };
> > @@ -1307,6 +1311,19 @@ static const enum index item_geneve_opt[] = {
> > ZERO,
> > };
> >
> > +static const enum index item_integrity[] = {
> > + ITEM_INTEGRITY_LEVEL,
> > + ITEM_INTEGRITY_VALUE,
> > + ZERO,
> > +};
> > +
> > +static const enum index item_integrity_lv[] = {
> > + ITEM_INTEGRITY_LEVEL,
> > + ITEM_INTEGRITY_VALUE,
> > + ITEM_NEXT,
> > + ZERO,
> > +};
> > +
> > static const enum index next_action[] = {
> > ACTION_END,
> > ACTION_VOID,
> > @@ -3373,6 +3390,28 @@ static const struct token token_list[] = {
> > (sizeof(struct rte_flow_item_geneve_opt),
> > ITEM_GENEVE_OPT_DATA_SIZE)),
> > },
> > + [ITEM_INTEGRITY] = {
> > + .name = "integrity",
> > + .help = "match packet integrity",
> > + .priv = PRIV_ITEM(INTEGRITY,
> > + sizeof(struct rte_flow_item_integrity)),
> > + .next = NEXT(item_integrity),
> > + .call = parse_vc,
> > + },
> > + [ITEM_INTEGRITY_LEVEL] = {
> > + .name = "level",
> > + .help = "integrity level",
> > + .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
> > + item_param),
> > + .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity,
> level)),
> > + },
> > + [ITEM_INTEGRITY_VALUE] = {
> > + .name = "value",
> > + .help = "integrity value",
> > + .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
> > + item_param),
> > + .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity,
> value)),
> > + },
> > /* Validate/create actions. */
> > [ACTIONS] = {
> > .name = "actions",
> >
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/2] ethdev: add packet integrity checks
2021-04-13 7:12 ` Ori Kam
@ 2021-04-13 8:03 ` Ferruh Yigit
2021-04-13 8:18 ` Ori Kam
0 siblings, 1 reply; 68+ messages in thread
From: Ferruh Yigit @ 2021-04-13 8:03 UTC (permalink / raw)
To: Ori Kam, Gregory Etelson
Cc: ajit.khaparde, andrew.rybchenko, dev, jerinj, olivier.matz,
NBU-Contact-Thomas Monjalon, Slava Ovsiienko, Qi Zhang
On 4/13/2021 8:12 AM, Ori Kam wrote:
> Hi Ferruh,
>
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>> Subject: Re: [PATCH v2 1/2] ethdev: add packet integrity checks
>>
>> On 4/12/2021 8:26 PM, Ori Kam wrote:
>>> Hi Ferruh,
>>>
>>>> -----Original Message-----
>>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>>>> Subject: Re: [PATCH v2 1/2] ethdev: add packet integrity checks
>>>>
>>>> On 4/11/2021 6:34 PM, Gregory Etelson wrote:
>>>>> From: Ori Kam <orika@nvidia.com>
>>>>>
>>>>> Currently, DPDK application can offload the checksum check,
>>>>> and report it in the mbuf.
>>>>>
>>>>> However, as more and more applications are offloading some or all
>>>>> logic and action to the HW, there is a need to check the packet
>>>>> integrity so the right decision can be taken.
>>>>>
>>>>> The application logic can be positive meaning if the packet is
>>>>> valid jump / do actions, or negative if packet is not valid
>>>>> jump to SW / do actions (like drop) a, and add default flow
>>>>> (match all in low priority) that will direct the miss packet
>>>>> to the miss path.
>>>>>
>>>>> Since currenlty rte_flow works in positive way the assumtion is
>>>>> that the postive way will be the common way in this case also.
>>>>>
>>>>> When thinking what is the best API to implement such feature,
>>>>> we need to considure the following (in no specific order):
>>>>> 1. API breakage.
>>>>> 2. Simplicity.
>>>>> 3. Performance.
>>>>> 4. HW capabilities.
>>>>> 5. rte_flow limitation.
>>>>> 6. Flexability.
>>>>>
>>>>> First option: Add integrity flags to each of the items.
>>>>> For example add checksum_ok to ipv4 item.
>>>>>
>>>>> Pros:
>>>>> 1. No new rte_flow item.
>>>>> 2. Simple in the way that on each item the app can see
>>>>> what checks are available.
>>>>>
>>>>> Cons:
>>>>> 1. API breakage.
>>>>> 2. increase number of flows, since app can't add global rule and
>>>>> must have dedicated flow for each of the flow combinations, for
>> example
>>>>> matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
>>>>> result in 5 flows.
>>>>>
>>>>> Second option: dedicated item
>>>>>
>>>>> Pros:
>>>>> 1. No API breakage, and there will be no for some time due to having
>>>>> extra space. (by using bits)
>>>>> 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
>>>>> IPv6.
>>>>> 3. Simplicity application can just look at one place to see all possible
>>>>> checks.
>>>>> 4. Allow future support for more tests.
>>>>>
>>>>> Cons:
>>>>> 1. New item, that holds number of fields from different items.
>>>>>
>>>>> For starter the following bits are suggested:
>>>>> 1. packet_ok - means that all HW checks depending on packet layer have
>>>>> passed. This may mean that in some HW such flow should be splited to
>>>>> number of flows or fail.
>>>>> 2. l2_ok - all check flor layer 2 have passed.
>>>>> 3. l3_ok - all check flor layer 2 have passed. If packet doens't have
>>>>> l3 layer this check shoudl fail.
>>>>> 4. l4_ok - all check flor layer 2 have passed. If packet doesn't
>>>>> have l4 layer this check should fail.
>>>>> 5. l2_crc_ok - the layer 2 crc is O.K. it is possible that the crc will
>>>>> be O.K. but the l3_ok will be 0. it is not possible that l2_crc_ok will
>>>>> be 0 and the l3_ok will be 0.
>>>>> 6. ipv4_csum_ok - IPv4 checksum is O.K.
>>>>> 7. l4_csum_ok - layer 4 checksum is O.K.
>>>>> 8. l3_len_OK - check that the reported layer 3 len is smaller than the
>>>>> packet len.
>>>>>
>>>>> Example of usage:
>>>>> 1. check packets from all possible layers for integrity.
>>>>> flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
>>>>>
>>>>> 2. Check only packet with layer 4 (UDP / TCP)
>>>>> flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
>>>>>
>>>>
>>>> Hi Ori,
>>>>
>>>> Is the intention of the API just filtering, like apply some action to the
>>>> packets based on their integration status. Like drop packets their l2_crc
>>>> checksum failed? Here configuration is done by existing offload APIs.
>>>>
>>>> Or is the intention to configure the integration check on NIC, like to say
>>>> enable layer 2 checks, and do the action based on integration check status.
>>>>
>>> If I understand your question the first case is the one that this patch is
>> targeting.
>>> meaning based on those bits route/apply actions to the packet while still in
>> the
>>> HW.
>>>
>>> This is not design to enable the queue status bits.
>>> In the use case suggestion by this patch, just like you said the app
>>> can decide to drop the packet before arriving to the queue, application may
>> also
>>> use the mark + queue action to mark to the SW what is the issue with this
>> packet.
>>>
>>> I'm not sure I understand your comment about "here configuration is done by
>> existing
>>> offload API" do you mean like the drop / jump to table / any other rte_flow
>> action?
>>>
>>>
>>
>> I am asking because difference between device configuration and packet
>> filtering
>> seems getting more blurred in the flow API.
>>
>> Currently L4 checksum offload is requested by application via setting
>> 'DEV_RX_OFFLOAD_TCP_CKSUM' (UDP/SCTP/...) offload flags. This is the way
>> to
>> configure HW.
>>
>> Is the intention of this patch doing packet filtering after device configured
>> with above offload API?
>>
>> Or is the intention HW to be configured via flow API, like if "l4_ok = 1" is set
>> in the rule, will it enable L4 checks first and do the filtering later?
>>
>> If not what is the expected behavior when integration checks are not enabled
>> when the rule is created?
>>
>
> Let me try to explain it in a different way:
> When application enables 'DEV_RX_OFFLOAD_TCP_CKSUM' (UDP/SCTP/...) offload flags
> It only means that the HW will report the checksum in the mbuf.
It is not only for reporting in mbuf, it also configures the HW to enable the
relevant checks. This is for device configuration.
Are these checks always enabled in mlx devices?
> Lets call this mode RX queue offload.
>
> Now I'm introducing rte_flow offload,
> This means that the application can create rte_flow that matches those
> bits and based on that take different actions that are defined by the rte_flow.
> Example for such a flow
> Flow create 0 ingress pattern integrity spec packet_ok = 0 mask packet_ok = 1 / end actions count / drop / end
>
> Offloading such a flow will result that all invalid packets will be counted and dropped, in the HW
> so even if the RX queue offload was enabled, no invalid packets will arrive to the application.
>
> In other case lets assume that the application wants all valid packets to jump to the next group,
> and all the reset of the packets will go to SW. (we can assume that later in the pipeline also valid packets
> will arrive to the application)
> Flow create 0 ingress pattern integrity spec packet_ok = 1 mask packet_ok = 1 / end actions jump group 1 / end
> Flow create 0 priority 1 pattern eth / end actions rss / end
>
> In this case if the application enabled the RX offload then if the application will receive invalid packet
> also the flag in the mbuf will be set.
>
If application not enabled the Rx offload that means HW checks are not enabled,
so HW can't know if a packet is OK or not, right.
In that case, for above rule, I expect none of the packets to match, so none
will jump to next group. Are we in the same page here?
Or do you expect above rule configure the HW to enable the relevant HW checks first?
> As you can see those two offloads mode are complementary to each other and one doesn't force the other one in any
> way.
>
> I hope this is clearer.
>
>
>>>>
>>>>> Signed-off-by: Ori Kam <orika@nvidia.com>
>>>>> ---
>>>>> v2: fix compilation error
>>>>> ---
>>>>> doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++++
>>>>> lib/librte_ethdev/rte_flow.h | 47
>> ++++++++++++++++++++++++++++++
>>>>> 2 files changed, 66 insertions(+)
>>>>>
>>>>> diff --git a/doc/guides/prog_guide/rte_flow.rst
>>>> b/doc/guides/prog_guide/rte_flow.rst
>>>>> index e1b93ecedf..87ef591405 100644
>>>>> --- a/doc/guides/prog_guide/rte_flow.rst
>>>>> +++ b/doc/guides/prog_guide/rte_flow.rst
>>>>> @@ -1398,6 +1398,25 @@ Matches a eCPRI header.
>>>>> - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
>>>>> - Default ``mask`` matches nothing, for all eCPRI messages.
>>>>>
>>>>> +Item: ``PACKET_INTEGRITY_CHECKS``
>>>>> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>>>> +
>>>>> +Matches packet integrity.
>>>>> +
>>>>> +- ``level``: the encapsulation level that should be checked. level 0 means
>> the
>>>>> + default PMD mode (Can be inner most / outermost). value of 1 means
>>>> outermost
>>>>> + and higher value means inner header. See also RSS level.
>>>>> +- ``packet_ok``: All HW packet integrity checks have passed based on the
>>>> max
>>>>> + layer of the packet.
>>>>> + layer of the packet.
>>>>> +- ``l2_ok``: all layer 2 HW integrity checks passed.
>>>>> +- ``l3_ok``: all layer 3 HW integrity checks passed.
>>>>> +- ``l4_ok``: all layer 3 HW integrity checks passed.
>>>>
>>>> s/layer 3/ layer 4/
>>>>
>>> Will fix.
>>>
>>>>> +- ``l2_crc_ok``: layer 2 crc check passed.
>>>>> +- ``ipv4_csum_ok``: ipv4 checksum check passed.
>>>>> +- ``l4_csum_ok``: layer 4 checksum check passed.
>>>>> +- ``l3_len_ok``: the layer 3 len is smaller than the packet len.
>>>>> +
>>>>> Actions
>>>>> ~~~~~~~
>>>>>
>>>>> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
>>>>> index 6cc57136ac..77471af2c4 100644
>>>>> --- a/lib/librte_ethdev/rte_flow.h
>>>>> +++ b/lib/librte_ethdev/rte_flow.h
>>>>> @@ -551,6 +551,15 @@ enum rte_flow_item_type {
>>>>> * See struct rte_flow_item_geneve_opt
>>>>> */
>>>>> RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
>>>>> +
>>>>> + /**
>>>>> + * [META]
>>>>> + *
>>>>> + * Matches on packet integrity.
>>>>> + *
>>>>> + * See struct rte_flow_item_integrity.
>>>>> + */
>>>>> + RTE_FLOW_ITEM_TYPE_INTEGRITY,
>>>>> };
>>>>>
>>>>> /**
>>>>> @@ -1685,6 +1694,44 @@ rte_flow_item_geneve_opt_mask = {
>>>>> };
>>>>> #endif
>>>>>
>>>>> +__extension__
>>>>> +struct rte_flow_item_integrity {
>>>>> + uint32_t level;
>>>>> + /**< Packet encapsulation level the item should apply to.
>>>>> + * @see rte_flow_action_rss
>>>>> + */
>>>>> + union {
>>>>> + struct {
>>>>> + uint64_t packet_ok:1;
>>>>> + /** The packet is valid after passing all HW checks. */
>>>>> + uint64_t l2_ok:1;
>>>>> + /**< L2 layer is valid after passing all HW checks. */
>>>>> + uint64_t l3_ok:1;
>>>>> + /**< L3 layer is valid after passing all HW checks. */
>>>>> + uint64_t l4_ok:1;
>>>>> + /**< L4 layer is valid after passing all HW checks. */
>>>>> + uint64_t l2_crc_ok:1;
>>>>> + /**< L2 layer checksum is valid. */
>>>>> + uint64_t ipv4_csum_ok:1;
>>>>> + /**< L3 layer checksum is valid. */
>>>>> + uint64_t l4_csum_ok:1;
>>>>> + /**< L4 layer checksum is valid. */
>>>>> + uint64_t l3_len_ok:1;
>>>>> + /**< The l3 len is smaller than the packet len. */
>>>>
>>>> packet len?
>>>>
>>> Do you mean replace the l3_len_ok with packet len?
>>
>> no, I was trying to ask what is "packet len" here? frame length, or mbuf buffer
>> length, or something else?
>>
> Frame length.
>
>>> My only issue is that the check is comparing the l3 len to the packet len.
>>>
>>> If you still think it is better to call it packet len, I'm also O.K with it.
>>>
>>>>> + uint64_t reserved:56;
>>>>> + };
>>>>> + uint64_t value;
>>>>> + };
>>>>> +};
>>>>> +
>>>>> +#ifndef __cplusplus
>>>>> +static const struct rte_flow_item_integrity
>>>>> +rte_flow_item_integrity_mask = {
>>>>> + .level = 0,
>>>>> + .value = 0,
>>>>> +};
>>>>> +#endif
>>>>> +
>>>>> /**
>>>>> * Matching pattern item definition.
>>>>> *
>>>>>
>>>
>
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v2 2/2] app/testpmd: add support for integrity item
2021-04-13 7:53 ` Ori Kam
@ 2021-04-13 8:14 ` Ferruh Yigit
2021-04-13 11:36 ` Ori Kam
0 siblings, 1 reply; 68+ messages in thread
From: Ferruh Yigit @ 2021-04-13 8:14 UTC (permalink / raw)
To: Ori Kam, Gregory Etelson
Cc: ajit.khaparde, andrew.rybchenko, dev, jerinj, olivier.matz,
NBU-Contact-Thomas Monjalon, Slava Ovsiienko, Xiaoyun Li
On 4/13/2021 8:53 AM, Ori Kam wrote:
> Hi Ferruh,
>
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>>
>> On 4/11/2021 6:34 PM, Gregory Etelson wrote:
>>> From: Ori Kam <orika@nvidia.com>
>>>
>>> The integrity item allows the application to match
>>> on the integrity of a packet.
>>>
>>> use example:
>>> match that packet integrity checks are ok. The checks depend on
>>> packet layers. For example ICMP packet will not check L4 level.
>>> flow create 0 ingress pattern integrity value mask 0x01 value spec 0x01
>>> match that L4 packet is ok - check L2 & L3 & L4 layers:
>>> flow create 0 ingress pattern integrity value mask 0xfe value spec 0xfe
>>>
>>> Signed-off-by: Ori Kam <orika@nvidia.com>
>>> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
>>> ---
>>> v2 add testpmd patch
>>> ---
>>> app/test-pmd/cmdline_flow.c | 39
>> +++++++++++++++++++++++++++++++++++++
>>
>> Hi Gregory, Ori,
>>
>> Can you add some samples to "testpmd_funcs.rst#flow-rules-management"?
>>
>> I asked in some other thread but did not get any response, what do you think to
>> make 'testpmd_funcs.rst' sample update mandatory when testpmd flow added?
>>
> I fully agree that each new function should be mandatory,
What is new function here, new flow API? That should go to flow API
documentation, 'rte_flow.rst'.
> The question is do we want that each new item / action (they use existing function)
> I think it is a bit of overhead but I don't have strong opinion.
>
Since the documentation is for the testpmd usage sample, I was thinking to add
sample for each new item & action indeed.
Same of the flow rules not widely used, and it is not always clear how to use
them, that is why I believe documenting samples can help.
>>
>>> 1 file changed, 39 insertions(+)
>>>
>>> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
>>> index fb7a3a8bd3..b5dec34325 100644
>>> --- a/app/test-pmd/cmdline_flow.c
>>> +++ b/app/test-pmd/cmdline_flow.c
>>> @@ -289,6 +289,9 @@ enum index {
>>> ITEM_GENEVE_OPT_TYPE,
>>> ITEM_GENEVE_OPT_LENGTH,
>>> ITEM_GENEVE_OPT_DATA,
>>> + ITEM_INTEGRITY,
>>> + ITEM_INTEGRITY_LEVEL,
>>> + ITEM_INTEGRITY_VALUE,
>>>
>>> /* Validate/create actions. */
>>> ACTIONS,
>>> @@ -956,6 +959,7 @@ static const enum index next_item[] = {
>>> ITEM_PFCP,
>>> ITEM_ECPRI,
>>> ITEM_GENEVE_OPT,
>>> + ITEM_INTEGRITY,
>>> END_SET,
>>> ZERO,
>>> };
>>> @@ -1307,6 +1311,19 @@ static const enum index item_geneve_opt[] = {
>>> ZERO,
>>> };
>>>
>>> +static const enum index item_integrity[] = {
>>> + ITEM_INTEGRITY_LEVEL,
>>> + ITEM_INTEGRITY_VALUE,
>>> + ZERO,
>>> +};
>>> +
>>> +static const enum index item_integrity_lv[] = {
>>> + ITEM_INTEGRITY_LEVEL,
>>> + ITEM_INTEGRITY_VALUE,
>>> + ITEM_NEXT,
>>> + ZERO,
>>> +};
>>> +
>>> static const enum index next_action[] = {
>>> ACTION_END,
>>> ACTION_VOID,
>>> @@ -3373,6 +3390,28 @@ static const struct token token_list[] = {
>>> (sizeof(struct rte_flow_item_geneve_opt),
>>> ITEM_GENEVE_OPT_DATA_SIZE)),
>>> },
>>> + [ITEM_INTEGRITY] = {
>>> + .name = "integrity",
>>> + .help = "match packet integrity",
>>> + .priv = PRIV_ITEM(INTEGRITY,
>>> + sizeof(struct rte_flow_item_integrity)),
>>> + .next = NEXT(item_integrity),
>>> + .call = parse_vc,
>>> + },
>>> + [ITEM_INTEGRITY_LEVEL] = {
>>> + .name = "level",
>>> + .help = "integrity level",
>>> + .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
>>> + item_param),
>>> + .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity,
>> level)),
>>> + },
>>> + [ITEM_INTEGRITY_VALUE] = {
>>> + .name = "value",
>>> + .help = "integrity value",
>>> + .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
>>> + item_param),
>>> + .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity,
>> value)),
>>> + },
>>> /* Validate/create actions. */
>>> [ACTIONS] = {
>>> .name = "actions",
>>>
>
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/2] ethdev: add packet integrity checks
2021-04-13 8:03 ` Ferruh Yigit
@ 2021-04-13 8:18 ` Ori Kam
2021-04-13 8:30 ` Ferruh Yigit
0 siblings, 1 reply; 68+ messages in thread
From: Ori Kam @ 2021-04-13 8:18 UTC (permalink / raw)
To: Ferruh Yigit, Gregory Etelson
Cc: ajit.khaparde, andrew.rybchenko, dev, jerinj, olivier.matz,
NBU-Contact-Thomas Monjalon, Slava Ovsiienko, Qi Zhang
Hi Ferruh,
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Subject: Re: [PATCH v2 1/2] ethdev: add packet integrity checks
>
> On 4/13/2021 8:12 AM, Ori Kam wrote:
> > Hi Ferruh,
> >
> >> -----Original Message-----
> >> From: Ferruh Yigit <ferruh.yigit@intel.com>
> >> Subject: Re: [PATCH v2 1/2] ethdev: add packet integrity checks
> >>
> >> On 4/12/2021 8:26 PM, Ori Kam wrote:
> >>> Hi Ferruh,
> >>>
> >>>> -----Original Message-----
> >>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
> >>>> Subject: Re: [PATCH v2 1/2] ethdev: add packet integrity checks
> >>>>
> >>>> On 4/11/2021 6:34 PM, Gregory Etelson wrote:
> >>>>> From: Ori Kam <orika@nvidia.com>
> >>>>>
> >>>>> Currently, DPDK application can offload the checksum check,
> >>>>> and report it in the mbuf.
> >>>>>
> >>>>> However, as more and more applications are offloading some or all
> >>>>> logic and action to the HW, there is a need to check the packet
> >>>>> integrity so the right decision can be taken.
> >>>>>
> >>>>> The application logic can be positive meaning if the packet is
> >>>>> valid jump / do actions, or negative if packet is not valid
> >>>>> jump to SW / do actions (like drop) a, and add default flow
> >>>>> (match all in low priority) that will direct the miss packet
> >>>>> to the miss path.
> >>>>>
> >>>>> Since currenlty rte_flow works in positive way the assumtion is
> >>>>> that the postive way will be the common way in this case also.
> >>>>>
> >>>>> When thinking what is the best API to implement such feature,
> >>>>> we need to considure the following (in no specific order):
> >>>>> 1. API breakage.
> >>>>> 2. Simplicity.
> >>>>> 3. Performance.
> >>>>> 4. HW capabilities.
> >>>>> 5. rte_flow limitation.
> >>>>> 6. Flexability.
> >>>>>
> >>>>> First option: Add integrity flags to each of the items.
> >>>>> For example add checksum_ok to ipv4 item.
> >>>>>
> >>>>> Pros:
> >>>>> 1. No new rte_flow item.
> >>>>> 2. Simple in the way that on each item the app can see
> >>>>> what checks are available.
> >>>>>
> >>>>> Cons:
> >>>>> 1. API breakage.
> >>>>> 2. increase number of flows, since app can't add global rule and
> >>>>> must have dedicated flow for each of the flow combinations, for
> >> example
> >>>>> matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
> >>>>> result in 5 flows.
> >>>>>
> >>>>> Second option: dedicated item
> >>>>>
> >>>>> Pros:
> >>>>> 1. No API breakage, and there will be no for some time due to having
> >>>>> extra space. (by using bits)
> >>>>> 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
> >>>>> IPv6.
> >>>>> 3. Simplicity application can just look at one place to see all possible
> >>>>> checks.
> >>>>> 4. Allow future support for more tests.
> >>>>>
> >>>>> Cons:
> >>>>> 1. New item, that holds number of fields from different items.
> >>>>>
> >>>>> For starter the following bits are suggested:
> >>>>> 1. packet_ok - means that all HW checks depending on packet layer have
> >>>>> passed. This may mean that in some HW such flow should be splited
> to
> >>>>> number of flows or fail.
> >>>>> 2. l2_ok - all check flor layer 2 have passed.
> >>>>> 3. l3_ok - all check flor layer 2 have passed. If packet doens't have
> >>>>> l3 layer this check shoudl fail.
> >>>>> 4. l4_ok - all check flor layer 2 have passed. If packet doesn't
> >>>>> have l4 layer this check should fail.
> >>>>> 5. l2_crc_ok - the layer 2 crc is O.K. it is possible that the crc will
> >>>>> be O.K. but the l3_ok will be 0. it is not possible that l2_crc_ok will
> >>>>> be 0 and the l3_ok will be 0.
> >>>>> 6. ipv4_csum_ok - IPv4 checksum is O.K.
> >>>>> 7. l4_csum_ok - layer 4 checksum is O.K.
> >>>>> 8. l3_len_OK - check that the reported layer 3 len is smaller than the
> >>>>> packet len.
> >>>>>
> >>>>> Example of usage:
> >>>>> 1. check packets from all possible layers for integrity.
> >>>>> flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
> >>>>>
> >>>>> 2. Check only packet with layer 4 (UDP / TCP)
> >>>>> flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok =
> 1
> >>>>>
> >>>>
> >>>> Hi Ori,
> >>>>
> >>>> Is the intention of the API just filtering, like apply some action to the
> >>>> packets based on their integration status. Like drop packets their l2_crc
> >>>> checksum failed? Here configuration is done by existing offload APIs.
> >>>>
> >>>> Or is the intention to configure the integration check on NIC, like to say
> >>>> enable layer 2 checks, and do the action based on integration check
> status.
> >>>>
> >>> If I understand your question the first case is the one that this patch is
> >> targeting.
> >>> meaning based on those bits route/apply actions to the packet while still in
> >> the
> >>> HW.
> >>>
> >>> This is not design to enable the queue status bits.
> >>> In the use case suggestion by this patch, just like you said the app
> >>> can decide to drop the packet before arriving to the queue, application
> may
> >> also
> >>> use the mark + queue action to mark to the SW what is the issue with this
> >> packet.
> >>>
> >>> I'm not sure I understand your comment about "here configuration is done
> by
> >> existing
> >>> offload API" do you mean like the drop / jump to table / any other rte_flow
> >> action?
> >>>
> >>>
> >>
> >> I am asking because difference between device configuration and packet
> >> filtering
> >> seems getting more blurred in the flow API.
> >>
> >> Currently L4 checksum offload is requested by application via setting
> >> 'DEV_RX_OFFLOAD_TCP_CKSUM' (UDP/SCTP/...) offload flags. This is the
> way
> >> to
> >> configure HW.
> >>
> >> Is the intention of this patch doing packet filtering after device configured
> >> with above offload API?
> >>
> >> Or is the intention HW to be configured via flow API, like if "l4_ok = 1" is set
> >> in the rule, will it enable L4 checks first and do the filtering later?
> >>
> >> If not what is the expected behavior when integration checks are not
> enabled
> >> when the rule is created?
> >>
> >
> > Let me try to explain it in a different way:
> > When application enables 'DEV_RX_OFFLOAD_TCP_CKSUM' (UDP/SCTP/...)
> offload flags
> > It only means that the HW will report the checksum in the mbuf.
>
> It is not only for reporting in mbuf, it also configures the HW to enable the
> relevant checks. This is for device configuration.
>
> Are these checks always enabled in mlx devices?
>
Yes, they are always on.
The RX offload just enables the RX burst to update the mbuf.
In any case if HW needs to be enabled, then
the PMD can enable the HW when first flow is inserted and just not
copy the value the mbuf.
> > Lets call this mode RX queue offload.
> >
> > Now I'm introducing rte_flow offload,
> > This means that the application can create rte_flow that matches those
> > bits and based on that take different actions that are defined by the rte_flow.
> > Example for such a flow
> > Flow create 0 ingress pattern integrity spec packet_ok = 0 mask packet_ok =
> 1 / end actions count / drop / end
> >
> > Offloading such a flow will result that all invalid packets will be counted and
> dropped, in the HW
> > so even if the RX queue offload was enabled, no invalid packets will arrive to
> the application.
> >
> > In other case lets assume that the application wants all valid packets to jump
> to the next group,
> > and all the reset of the packets will go to SW. (we can assume that later in
> the pipeline also valid packets
> > will arrive to the application)
> > Flow create 0 ingress pattern integrity spec packet_ok = 1 mask packet_ok =
> 1 / end actions jump group 1 / end
> > Flow create 0 priority 1 pattern eth / end actions rss / end
> >
> > In this case if the application enabled the RX offload then if the application
> will receive invalid packet
> > also the flag in the mbuf will be set.
> >
>
> If application not enabled the Rx offload that means HW checks are not
> enabled,
> so HW can't know if a packet is OK or not, right.
> In that case, for above rule, I expect none of the packets to match, so none
> will jump to next group. Are we in the same page here?
>
> Or do you expect above rule configure the HW to enable the relevant HW
> checks first?
>
If this is required by HW then yes.
please see my answer above.
> > As you can see those two offloads mode are complementary to each other
> and one doesn't force the other one in any
> > way.
> >
> > I hope this is clearer.
> >
> >
> >>>>
> >>>>> Signed-off-by: Ori Kam <orika@nvidia.com>
> >>>>> ---
> >>>>> v2: fix compilation error
> >>>>> ---
> >>>>> doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++++
> >>>>> lib/librte_ethdev/rte_flow.h | 47
> >> ++++++++++++++++++++++++++++++
> >>>>> 2 files changed, 66 insertions(+)
> >>>>>
> >>>>> diff --git a/doc/guides/prog_guide/rte_flow.rst
> >>>> b/doc/guides/prog_guide/rte_flow.rst
> >>>>> index e1b93ecedf..87ef591405 100644
> >>>>> --- a/doc/guides/prog_guide/rte_flow.rst
> >>>>> +++ b/doc/guides/prog_guide/rte_flow.rst
> >>>>> @@ -1398,6 +1398,25 @@ Matches a eCPRI header.
> >>>>> - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
> >>>>> - Default ``mask`` matches nothing, for all eCPRI messages.
> >>>>>
> >>>>> +Item: ``PACKET_INTEGRITY_CHECKS``
> >>>>> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> >>>>> +
> >>>>> +Matches packet integrity.
> >>>>> +
> >>>>> +- ``level``: the encapsulation level that should be checked. level 0
> means
> >> the
> >>>>> + default PMD mode (Can be inner most / outermost). value of 1 means
> >>>> outermost
> >>>>> + and higher value means inner header. See also RSS level.
> >>>>> +- ``packet_ok``: All HW packet integrity checks have passed based on
> the
> >>>> max
> >>>>> + layer of the packet.
> >>>>> + layer of the packet.
> >>>>> +- ``l2_ok``: all layer 2 HW integrity checks passed.
> >>>>> +- ``l3_ok``: all layer 3 HW integrity checks passed.
> >>>>> +- ``l4_ok``: all layer 3 HW integrity checks passed.
> >>>>
> >>>> s/layer 3/ layer 4/
> >>>>
> >>> Will fix.
> >>>
> >>>>> +- ``l2_crc_ok``: layer 2 crc check passed.
> >>>>> +- ``ipv4_csum_ok``: ipv4 checksum check passed.
> >>>>> +- ``l4_csum_ok``: layer 4 checksum check passed.
> >>>>> +- ``l3_len_ok``: the layer 3 len is smaller than the packet len.
> >>>>> +
> >>>>> Actions
> >>>>> ~~~~~~~
> >>>>>
> >>>>> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> >>>>> index 6cc57136ac..77471af2c4 100644
> >>>>> --- a/lib/librte_ethdev/rte_flow.h
> >>>>> +++ b/lib/librte_ethdev/rte_flow.h
> >>>>> @@ -551,6 +551,15 @@ enum rte_flow_item_type {
> >>>>> * See struct rte_flow_item_geneve_opt
> >>>>> */
> >>>>> RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> >>>>> +
> >>>>> + /**
> >>>>> + * [META]
> >>>>> + *
> >>>>> + * Matches on packet integrity.
> >>>>> + *
> >>>>> + * See struct rte_flow_item_integrity.
> >>>>> + */
> >>>>> + RTE_FLOW_ITEM_TYPE_INTEGRITY,
> >>>>> };
> >>>>>
> >>>>> /**
> >>>>> @@ -1685,6 +1694,44 @@ rte_flow_item_geneve_opt_mask = {
> >>>>> };
> >>>>> #endif
> >>>>>
> >>>>> +__extension__
> >>>>> +struct rte_flow_item_integrity {
> >>>>> + uint32_t level;
> >>>>> + /**< Packet encapsulation level the item should apply to.
> >>>>> + * @see rte_flow_action_rss
> >>>>> + */
> >>>>> + union {
> >>>>> + struct {
> >>>>> + uint64_t packet_ok:1;
> >>>>> + /** The packet is valid after passing all HW
> checks. */
> >>>>> + uint64_t l2_ok:1;
> >>>>> + /**< L2 layer is valid after passing all HW
> checks. */
> >>>>> + uint64_t l3_ok:1;
> >>>>> + /**< L3 layer is valid after passing all HW
> checks. */
> >>>>> + uint64_t l4_ok:1;
> >>>>> + /**< L4 layer is valid after passing all HW
> checks. */
> >>>>> + uint64_t l2_crc_ok:1;
> >>>>> + /**< L2 layer checksum is valid. */
> >>>>> + uint64_t ipv4_csum_ok:1;
> >>>>> + /**< L3 layer checksum is valid. */
> >>>>> + uint64_t l4_csum_ok:1;
> >>>>> + /**< L4 layer checksum is valid. */
> >>>>> + uint64_t l3_len_ok:1;
> >>>>> + /**< The l3 len is smaller than the packet len.
> */
> >>>>
> >>>> packet len?
> >>>>
> >>> Do you mean replace the l3_len_ok with packet len?
> >>
> >> no, I was trying to ask what is "packet len" here? frame length, or mbuf
> buffer
> >> length, or something else?
> >>
> > Frame length.
> >
> >>> My only issue is that the check is comparing the l3 len to the packet len.
> >>>
> >>> If you still think it is better to call it packet len, I'm also O.K with it.
> >>>
> >>>>> + uint64_t reserved:56;
> >>>>> + };
> >>>>> + uint64_t value;
> >>>>> + };
> >>>>> +};
> >>>>> +
> >>>>> +#ifndef __cplusplus
> >>>>> +static const struct rte_flow_item_integrity
> >>>>> +rte_flow_item_integrity_mask = {
> >>>>> + .level = 0,
> >>>>> + .value = 0,
> >>>>> +};
> >>>>> +#endif
> >>>>> +
> >>>>> /**
> >>>>> * Matching pattern item definition.
> >>>>> *
> >>>>>
> >>>
> >
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/2] ethdev: add packet integrity checks
2021-04-13 8:18 ` Ori Kam
@ 2021-04-13 8:30 ` Ferruh Yigit
2021-04-13 10:21 ` Ori Kam
0 siblings, 1 reply; 68+ messages in thread
From: Ferruh Yigit @ 2021-04-13 8:30 UTC (permalink / raw)
To: Ori Kam, Gregory Etelson
Cc: ajit.khaparde, andrew.rybchenko, dev, jerinj, olivier.matz,
NBU-Contact-Thomas Monjalon, Slava Ovsiienko, Qi Zhang
On 4/13/2021 9:18 AM, Ori Kam wrote:
> Hi Ferruh,
>
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>> Subject: Re: [PATCH v2 1/2] ethdev: add packet integrity checks
>>
>> On 4/13/2021 8:12 AM, Ori Kam wrote:
>>> Hi Ferruh,
>>>
>>>> -----Original Message-----
>>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>>>> Subject: Re: [PATCH v2 1/2] ethdev: add packet integrity checks
>>>>
>>>> On 4/12/2021 8:26 PM, Ori Kam wrote:
>>>>> Hi Ferruh,
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>>>>>> Subject: Re: [PATCH v2 1/2] ethdev: add packet integrity checks
>>>>>>
>>>>>> On 4/11/2021 6:34 PM, Gregory Etelson wrote:
>>>>>>> From: Ori Kam <orika@nvidia.com>
>>>>>>>
>>>>>>> Currently, DPDK application can offload the checksum check,
>>>>>>> and report it in the mbuf.
>>>>>>>
>>>>>>> However, as more and more applications are offloading some or all
>>>>>>> logic and action to the HW, there is a need to check the packet
>>>>>>> integrity so the right decision can be taken.
>>>>>>>
>>>>>>> The application logic can be positive meaning if the packet is
>>>>>>> valid jump / do actions, or negative if packet is not valid
>>>>>>> jump to SW / do actions (like drop) a, and add default flow
>>>>>>> (match all in low priority) that will direct the miss packet
>>>>>>> to the miss path.
>>>>>>>
>>>>>>> Since currenlty rte_flow works in positive way the assumtion is
>>>>>>> that the postive way will be the common way in this case also.
>>>>>>>
>>>>>>> When thinking what is the best API to implement such feature,
>>>>>>> we need to considure the following (in no specific order):
>>>>>>> 1. API breakage.
>>>>>>> 2. Simplicity.
>>>>>>> 3. Performance.
>>>>>>> 4. HW capabilities.
>>>>>>> 5. rte_flow limitation.
>>>>>>> 6. Flexability.
>>>>>>>
>>>>>>> First option: Add integrity flags to each of the items.
>>>>>>> For example add checksum_ok to ipv4 item.
>>>>>>>
>>>>>>> Pros:
>>>>>>> 1. No new rte_flow item.
>>>>>>> 2. Simple in the way that on each item the app can see
>>>>>>> what checks are available.
>>>>>>>
>>>>>>> Cons:
>>>>>>> 1. API breakage.
>>>>>>> 2. increase number of flows, since app can't add global rule and
>>>>>>> must have dedicated flow for each of the flow combinations, for
>>>> example
>>>>>>> matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
>>>>>>> result in 5 flows.
>>>>>>>
>>>>>>> Second option: dedicated item
>>>>>>>
>>>>>>> Pros:
>>>>>>> 1. No API breakage, and there will be no for some time due to having
>>>>>>> extra space. (by using bits)
>>>>>>> 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
>>>>>>> IPv6.
>>>>>>> 3. Simplicity application can just look at one place to see all possible
>>>>>>> checks.
>>>>>>> 4. Allow future support for more tests.
>>>>>>>
>>>>>>> Cons:
>>>>>>> 1. New item, that holds number of fields from different items.
>>>>>>>
>>>>>>> For starter the following bits are suggested:
>>>>>>> 1. packet_ok - means that all HW checks depending on packet layer have
>>>>>>> passed. This may mean that in some HW such flow should be splited
>> to
>>>>>>> number of flows or fail.
>>>>>>> 2. l2_ok - all check flor layer 2 have passed.
>>>>>>> 3. l3_ok - all check flor layer 2 have passed. If packet doens't have
>>>>>>> l3 layer this check shoudl fail.
>>>>>>> 4. l4_ok - all check flor layer 2 have passed. If packet doesn't
>>>>>>> have l4 layer this check should fail.
>>>>>>> 5. l2_crc_ok - the layer 2 crc is O.K. it is possible that the crc will
>>>>>>> be O.K. but the l3_ok will be 0. it is not possible that l2_crc_ok will
>>>>>>> be 0 and the l3_ok will be 0.
>>>>>>> 6. ipv4_csum_ok - IPv4 checksum is O.K.
>>>>>>> 7. l4_csum_ok - layer 4 checksum is O.K.
>>>>>>> 8. l3_len_OK - check that the reported layer 3 len is smaller than the
>>>>>>> packet len.
>>>>>>>
>>>>>>> Example of usage:
>>>>>>> 1. check packets from all possible layers for integrity.
>>>>>>> flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
>>>>>>>
>>>>>>> 2. Check only packet with layer 4 (UDP / TCP)
>>>>>>> flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok =
>> 1
>>>>>>>
>>>>>>
>>>>>> Hi Ori,
>>>>>>
>>>>>> Is the intention of the API just filtering, like apply some action to the
>>>>>> packets based on their integration status. Like drop packets their l2_crc
>>>>>> checksum failed? Here configuration is done by existing offload APIs.
>>>>>>
>>>>>> Or is the intention to configure the integration check on NIC, like to say
>>>>>> enable layer 2 checks, and do the action based on integration check
>> status.
>>>>>>
>>>>> If I understand your question the first case is the one that this patch is
>>>> targeting.
>>>>> meaning based on those bits route/apply actions to the packet while still in
>>>> the
>>>>> HW.
>>>>>
>>>>> This is not design to enable the queue status bits.
>>>>> In the use case suggestion by this patch, just like you said the app
>>>>> can decide to drop the packet before arriving to the queue, application
>> may
>>>> also
>>>>> use the mark + queue action to mark to the SW what is the issue with this
>>>> packet.
>>>>>
>>>>> I'm not sure I understand your comment about "here configuration is done
>> by
>>>> existing
>>>>> offload API" do you mean like the drop / jump to table / any other rte_flow
>>>> action?
>>>>>
>>>>>
>>>>
>>>> I am asking because difference between device configuration and packet
>>>> filtering
>>>> seems getting more blurred in the flow API.
>>>>
>>>> Currently L4 checksum offload is requested by application via setting
>>>> 'DEV_RX_OFFLOAD_TCP_CKSUM' (UDP/SCTP/...) offload flags. This is the
>> way
>>>> to
>>>> configure HW.
>>>>
>>>> Is the intention of this patch doing packet filtering after device configured
>>>> with above offload API?
>>>>
>>>> Or is the intention HW to be configured via flow API, like if "l4_ok = 1" is set
>>>> in the rule, will it enable L4 checks first and do the filtering later?
>>>>
>>>> If not what is the expected behavior when integration checks are not
>> enabled
>>>> when the rule is created?
>>>>
>>>
>>> Let me try to explain it in a different way:
>>> When application enables 'DEV_RX_OFFLOAD_TCP_CKSUM' (UDP/SCTP/...)
>> offload flags
>>> It only means that the HW will report the checksum in the mbuf.
>>
>> It is not only for reporting in mbuf, it also configures the HW to enable the
>> relevant checks. This is for device configuration.
>>
>> Are these checks always enabled in mlx devices?
>>
> Yes, they are always on.
> The RX offload just enables the RX burst to update the mbuf.
> In any case if HW needs to be enabled, then
> the PMD can enable the HW when first flow is inserted and just not
> copy the value the mbuf.
>
This was my initial question, if the new rule is just for filtering or
"configuring device + filtering".
So, two questions:
1) Do we want to duplicate the way to configure the HW checks
2) Do we want want to use flow API as a device configuration interface, (as said
before I am feeling we are going that path more and more)
Since the HW is always on in your case that is not a big problem for you but (1)
can cause trouble for other vendors, for the cases like if HW check enabled via
flow API, later it is disabled by clearing the offload flag, what will be the
status of the flow rule?
>>> Lets call this mode RX queue offload.
>>>
>>> Now I'm introducing rte_flow offload,
>>> This means that the application can create rte_flow that matches those
>>> bits and based on that take different actions that are defined by the rte_flow.
>>> Example for such a flow
>>> Flow create 0 ingress pattern integrity spec packet_ok = 0 mask packet_ok =
>> 1 / end actions count / drop / end
>>>
>>> Offloading such a flow will result that all invalid packets will be counted and
>> dropped, in the HW
>>> so even if the RX queue offload was enabled, no invalid packets will arrive to
>> the application.
>>>
>>> In other case lets assume that the application wants all valid packets to jump
>> to the next group,
>>> and all the reset of the packets will go to SW. (we can assume that later in
>> the pipeline also valid packets
>>> will arrive to the application)
>>> Flow create 0 ingress pattern integrity spec packet_ok = 1 mask packet_ok =
>> 1 / end actions jump group 1 / end
>>> Flow create 0 priority 1 pattern eth / end actions rss / end
>>>
>>> In this case if the application enabled the RX offload then if the application
>> will receive invalid packet
>>> also the flag in the mbuf will be set.
>>>
>>
>> If application not enabled the Rx offload that means HW checks are not
>> enabled,
>> so HW can't know if a packet is OK or not, right.
>> In that case, for above rule, I expect none of the packets to match, so none
>> will jump to next group. Are we in the same page here?
>>
>> Or do you expect above rule configure the HW to enable the relevant HW
>> checks first?
>>
> If this is required by HW then yes.
> please see my answer above.
>
>>> As you can see those two offloads mode are complementary to each other
>> and one doesn't force the other one in any
>>> way.
>>>
>>> I hope this is clearer.
>>>
>>>
>>>>>>
>>>>>>> Signed-off-by: Ori Kam <orika@nvidia.com>
>>>>>>> ---
>>>>>>> v2: fix compilation error
>>>>>>> ---
>>>>>>> doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++++
>>>>>>> lib/librte_ethdev/rte_flow.h | 47
>>>> ++++++++++++++++++++++++++++++
>>>>>>> 2 files changed, 66 insertions(+)
>>>>>>>
>>>>>>> diff --git a/doc/guides/prog_guide/rte_flow.rst
>>>>>> b/doc/guides/prog_guide/rte_flow.rst
>>>>>>> index e1b93ecedf..87ef591405 100644
>>>>>>> --- a/doc/guides/prog_guide/rte_flow.rst
>>>>>>> +++ b/doc/guides/prog_guide/rte_flow.rst
>>>>>>> @@ -1398,6 +1398,25 @@ Matches a eCPRI header.
>>>>>>> - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
>>>>>>> - Default ``mask`` matches nothing, for all eCPRI messages.
>>>>>>>
>>>>>>> +Item: ``PACKET_INTEGRITY_CHECKS``
>>>>>>> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>>>>>> +
>>>>>>> +Matches packet integrity.
>>>>>>> +
>>>>>>> +- ``level``: the encapsulation level that should be checked. level 0
>> means
>>>> the
>>>>>>> + default PMD mode (Can be inner most / outermost). value of 1 means
>>>>>> outermost
>>>>>>> + and higher value means inner header. See also RSS level.
>>>>>>> +- ``packet_ok``: All HW packet integrity checks have passed based on
>> the
>>>>>> max
>>>>>>> + layer of the packet.
>>>>>>> + layer of the packet.
>>>>>>> +- ``l2_ok``: all layer 2 HW integrity checks passed.
>>>>>>> +- ``l3_ok``: all layer 3 HW integrity checks passed.
>>>>>>> +- ``l4_ok``: all layer 3 HW integrity checks passed.
>>>>>>
>>>>>> s/layer 3/ layer 4/
>>>>>>
>>>>> Will fix.
>>>>>
>>>>>>> +- ``l2_crc_ok``: layer 2 crc check passed.
>>>>>>> +- ``ipv4_csum_ok``: ipv4 checksum check passed.
>>>>>>> +- ``l4_csum_ok``: layer 4 checksum check passed.
>>>>>>> +- ``l3_len_ok``: the layer 3 len is smaller than the packet len.
>>>>>>> +
>>>>>>> Actions
>>>>>>> ~~~~~~~
>>>>>>>
>>>>>>> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
>>>>>>> index 6cc57136ac..77471af2c4 100644
>>>>>>> --- a/lib/librte_ethdev/rte_flow.h
>>>>>>> +++ b/lib/librte_ethdev/rte_flow.h
>>>>>>> @@ -551,6 +551,15 @@ enum rte_flow_item_type {
>>>>>>> * See struct rte_flow_item_geneve_opt
>>>>>>> */
>>>>>>> RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
>>>>>>> +
>>>>>>> + /**
>>>>>>> + * [META]
>>>>>>> + *
>>>>>>> + * Matches on packet integrity.
>>>>>>> + *
>>>>>>> + * See struct rte_flow_item_integrity.
>>>>>>> + */
>>>>>>> + RTE_FLOW_ITEM_TYPE_INTEGRITY,
>>>>>>> };
>>>>>>>
>>>>>>> /**
>>>>>>> @@ -1685,6 +1694,44 @@ rte_flow_item_geneve_opt_mask = {
>>>>>>> };
>>>>>>> #endif
>>>>>>>
>>>>>>> +__extension__
>>>>>>> +struct rte_flow_item_integrity {
>>>>>>> + uint32_t level;
>>>>>>> + /**< Packet encapsulation level the item should apply to.
>>>>>>> + * @see rte_flow_action_rss
>>>>>>> + */
>>>>>>> + union {
>>>>>>> + struct {
>>>>>>> + uint64_t packet_ok:1;
>>>>>>> + /** The packet is valid after passing all HW
>> checks. */
>>>>>>> + uint64_t l2_ok:1;
>>>>>>> + /**< L2 layer is valid after passing all HW
>> checks. */
>>>>>>> + uint64_t l3_ok:1;
>>>>>>> + /**< L3 layer is valid after passing all HW
>> checks. */
>>>>>>> + uint64_t l4_ok:1;
>>>>>>> + /**< L4 layer is valid after passing all HW
>> checks. */
>>>>>>> + uint64_t l2_crc_ok:1;
>>>>>>> + /**< L2 layer checksum is valid. */
>>>>>>> + uint64_t ipv4_csum_ok:1;
>>>>>>> + /**< L3 layer checksum is valid. */
>>>>>>> + uint64_t l4_csum_ok:1;
>>>>>>> + /**< L4 layer checksum is valid. */
>>>>>>> + uint64_t l3_len_ok:1;
>>>>>>> + /**< The l3 len is smaller than the packet len.
>> */
>>>>>>
>>>>>> packet len?
>>>>>>
>>>>> Do you mean replace the l3_len_ok with packet len?
>>>>
>>>> no, I was trying to ask what is "packet len" here? frame length, or mbuf
>> buffer
>>>> length, or something else?
>>>>
>>> Frame length.
>>>
>>>>> My only issue is that the check is comparing the l3 len to the packet len.
>>>>>
>>>>> If you still think it is better to call it packet len, I'm also O.K with it.
>>>>>
>>>>>>> + uint64_t reserved:56;
>>>>>>> + };
>>>>>>> + uint64_t value;
>>>>>>> + };
>>>>>>> +};
>>>>>>> +
>>>>>>> +#ifndef __cplusplus
>>>>>>> +static const struct rte_flow_item_integrity
>>>>>>> +rte_flow_item_integrity_mask = {
>>>>>>> + .level = 0,
>>>>>>> + .value = 0,
>>>>>>> +};
>>>>>>> +#endif
>>>>>>> +
>>>>>>> /**
>>>>>>> * Matching pattern item definition.
>>>>>>> *
>>>>>>>
>>>>>
>>>
>
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/2] ethdev: add packet integrity checks
2021-04-13 8:30 ` Ferruh Yigit
@ 2021-04-13 10:21 ` Ori Kam
2021-04-13 17:28 ` Ferruh Yigit
0 siblings, 1 reply; 68+ messages in thread
From: Ori Kam @ 2021-04-13 10:21 UTC (permalink / raw)
To: Ferruh Yigit, Gregory Etelson
Cc: ajit.khaparde, andrew.rybchenko, dev, jerinj, olivier.matz,
NBU-Contact-Thomas Monjalon, Slava Ovsiienko, Qi Zhang
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Tuesday, April 13, 2021 11:30 AM
>
> On 4/13/2021 9:18 AM, Ori Kam wrote:
> > Hi Ferruh,
> >
> >> -----Original Message-----
> >> From: Ferruh Yigit <ferruh.yigit@intel.com>
> >> Subject: Re: [PATCH v2 1/2] ethdev: add packet integrity checks
> >>
> >> On 4/13/2021 8:12 AM, Ori Kam wrote:
> >>> Hi Ferruh,
> >>>
> >>>> -----Original Message-----
> >>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
> >>>> Subject: Re: [PATCH v2 1/2] ethdev: add packet integrity checks
> >>>>
> >>>> On 4/12/2021 8:26 PM, Ori Kam wrote:
> >>>>> Hi Ferruh,
> >>>>>
> >>>>>> -----Original Message-----
> >>>>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
> >>>>>> Subject: Re: [PATCH v2 1/2] ethdev: add packet integrity checks
> >>>>>>
> >>>>>> On 4/11/2021 6:34 PM, Gregory Etelson wrote:
> >>>>>>> From: Ori Kam <orika@nvidia.com>
> >>>>>>>
> >>>>>>> Currently, DPDK application can offload the checksum check,
> >>>>>>> and report it in the mbuf.
> >>>>>>>
> >>>>>>> However, as more and more applications are offloading some or all
> >>>>>>> logic and action to the HW, there is a need to check the packet
> >>>>>>> integrity so the right decision can be taken.
> >>>>>>>
> >>>>>>> The application logic can be positive meaning if the packet is
> >>>>>>> valid jump / do actions, or negative if packet is not valid
> >>>>>>> jump to SW / do actions (like drop) a, and add default flow
> >>>>>>> (match all in low priority) that will direct the miss packet
> >>>>>>> to the miss path.
> >>>>>>>
> >>>>>>> Since currenlty rte_flow works in positive way the assumtion is
> >>>>>>> that the postive way will be the common way in this case also.
> >>>>>>>
> >>>>>>> When thinking what is the best API to implement such feature,
> >>>>>>> we need to considure the following (in no specific order):
> >>>>>>> 1. API breakage.
> >>>>>>> 2. Simplicity.
> >>>>>>> 3. Performance.
> >>>>>>> 4. HW capabilities.
> >>>>>>> 5. rte_flow limitation.
> >>>>>>> 6. Flexability.
> >>>>>>>
> >>>>>>> First option: Add integrity flags to each of the items.
> >>>>>>> For example add checksum_ok to ipv4 item.
> >>>>>>>
> >>>>>>> Pros:
> >>>>>>> 1. No new rte_flow item.
> >>>>>>> 2. Simple in the way that on each item the app can see
> >>>>>>> what checks are available.
> >>>>>>>
> >>>>>>> Cons:
> >>>>>>> 1. API breakage.
> >>>>>>> 2. increase number of flows, since app can't add global rule and
> >>>>>>> must have dedicated flow for each of the flow combinations, for
> >>>> example
> >>>>>>> matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
> >>>>>>> result in 5 flows.
> >>>>>>>
> >>>>>>> Second option: dedicated item
> >>>>>>>
> >>>>>>> Pros:
> >>>>>>> 1. No API breakage, and there will be no for some time due to having
> >>>>>>> extra space. (by using bits)
> >>>>>>> 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
> >>>>>>> IPv6.
> >>>>>>> 3. Simplicity application can just look at one place to see all possible
> >>>>>>> checks.
> >>>>>>> 4. Allow future support for more tests.
> >>>>>>>
> >>>>>>> Cons:
> >>>>>>> 1. New item, that holds number of fields from different items.
> >>>>>>>
> >>>>>>> For starter the following bits are suggested:
> >>>>>>> 1. packet_ok - means that all HW checks depending on packet layer
> have
> >>>>>>> passed. This may mean that in some HW such flow should be
> splited
> >> to
> >>>>>>> number of flows or fail.
> >>>>>>> 2. l2_ok - all check flor layer 2 have passed.
> >>>>>>> 3. l3_ok - all check flor layer 2 have passed. If packet doens't have
> >>>>>>> l3 layer this check shoudl fail.
> >>>>>>> 4. l4_ok - all check flor layer 2 have passed. If packet doesn't
> >>>>>>> have l4 layer this check should fail.
> >>>>>>> 5. l2_crc_ok - the layer 2 crc is O.K. it is possible that the crc will
> >>>>>>> be O.K. but the l3_ok will be 0. it is not possible that l2_crc_ok will
> >>>>>>> be 0 and the l3_ok will be 0.
> >>>>>>> 6. ipv4_csum_ok - IPv4 checksum is O.K.
> >>>>>>> 7. l4_csum_ok - layer 4 checksum is O.K.
> >>>>>>> 8. l3_len_OK - check that the reported layer 3 len is smaller than the
> >>>>>>> packet len.
> >>>>>>>
> >>>>>>> Example of usage:
> >>>>>>> 1. check packets from all possible layers for integrity.
> >>>>>>> flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
> >>>>>>>
> >>>>>>> 2. Check only packet with layer 4 (UDP / TCP)
> >>>>>>> flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1
> l4_ok =
> >> 1
> >>>>>>>
> >>>>>>
> >>>>>> Hi Ori,
> >>>>>>
> >>>>>> Is the intention of the API just filtering, like apply some action to the
> >>>>>> packets based on their integration status. Like drop packets their l2_crc
> >>>>>> checksum failed? Here configuration is done by existing offload APIs.
> >>>>>>
> >>>>>> Or is the intention to configure the integration check on NIC, like to say
> >>>>>> enable layer 2 checks, and do the action based on integration check
> >> status.
> >>>>>>
> >>>>> If I understand your question the first case is the one that this patch is
> >>>> targeting.
> >>>>> meaning based on those bits route/apply actions to the packet while still
> in
> >>>> the
> >>>>> HW.
> >>>>>
> >>>>> This is not design to enable the queue status bits.
> >>>>> In the use case suggestion by this patch, just like you said the app
> >>>>> can decide to drop the packet before arriving to the queue, application
> >> may
> >>>> also
> >>>>> use the mark + queue action to mark to the SW what is the issue with
> this
> >>>> packet.
> >>>>>
> >>>>> I'm not sure I understand your comment about "here configuration is
> done
> >> by
> >>>> existing
> >>>>> offload API" do you mean like the drop / jump to table / any other
> rte_flow
> >>>> action?
> >>>>>
> >>>>>
> >>>>
> >>>> I am asking because difference between device configuration and packet
> >>>> filtering
> >>>> seems getting more blurred in the flow API.
> >>>>
> >>>> Currently L4 checksum offload is requested by application via setting
> >>>> 'DEV_RX_OFFLOAD_TCP_CKSUM' (UDP/SCTP/...) offload flags. This is the
> >> way
> >>>> to
> >>>> configure HW.
> >>>>
> >>>> Is the intention of this patch doing packet filtering after device configured
> >>>> with above offload API?
> >>>>
> >>>> Or is the intention HW to be configured via flow API, like if "l4_ok = 1" is
> set
> >>>> in the rule, will it enable L4 checks first and do the filtering later?
> >>>>
> >>>> If not what is the expected behavior when integration checks are not
> >> enabled
> >>>> when the rule is created?
> >>>>
> >>>
> >>> Let me try to explain it in a different way:
> >>> When application enables 'DEV_RX_OFFLOAD_TCP_CKSUM' (UDP/SCTP/...)
> >> offload flags
> >>> It only means that the HW will report the checksum in the mbuf.
> >>
> >> It is not only for reporting in mbuf, it also configures the HW to enable the
> >> relevant checks. This is for device configuration.
> >>
> >> Are these checks always enabled in mlx devices?
> >>
> > Yes, they are always on.
> > The RX offload just enables the RX burst to update the mbuf.
> > In any case if HW needs to be enabled, then
> > the PMD can enable the HW when first flow is inserted and just not
> > copy the value the mbuf.
> >
>
> This was my initial question, if the new rule is just for filtering or
> "configuring device + filtering".
>
> So, two questions:
> 1) Do we want to duplicate the way to configure the HW checks
> 2) Do we want want to use flow API as a device configuration interface, (as said
> before I am feeling we are going that path more and more)
>
> Since the HW is always on in your case that is not a big problem for you but (1)
> can cause trouble for other vendors, for the cases like if HW check enabled via
> flow API, later it is disabled by clearing the offload flag, what will be the
> status of the flow rule?
>
1. from my point of view it is not duplication since if the RX offload is enabled then there is also
setting of the metadata in the RX.
While if the RX offload is not enabled then even if the HW is working there is no need to copy
the result to the mbuf fields.
2. see my response above.
I think that since rte_flow is working with the HW, at any case creating flow may need to enable some HW.
I think at any case when offloading using queues the device must be stopped (at least in some
HW) and if the device is stopped in any case all flows are removed so I don't see
the dependency between the rx queue offload and the rte_flow one.
like I said even if the HW is enabled it doesn't mean the mbuf should be updated
(the mbuf should be updated only if the queue offload was enabled. )
> >>> Lets call this mode RX queue offload.
> >>>
> >>> Now I'm introducing rte_flow offload,
> >>> This means that the application can create rte_flow that matches those
> >>> bits and based on that take different actions that are defined by the
> rte_flow.
> >>> Example for such a flow
> >>> Flow create 0 ingress pattern integrity spec packet_ok = 0 mask packet_ok
> =
> >> 1 / end actions count / drop / end
> >>>
> >>> Offloading such a flow will result that all invalid packets will be counted
> and
> >> dropped, in the HW
> >>> so even if the RX queue offload was enabled, no invalid packets will arrive
> to
> >> the application.
> >>>
> >>> In other case lets assume that the application wants all valid packets to
> jump
> >> to the next group,
> >>> and all the reset of the packets will go to SW. (we can assume that later in
> >> the pipeline also valid packets
> >>> will arrive to the application)
> >>> Flow create 0 ingress pattern integrity spec packet_ok = 1 mask packet_ok
> =
> >> 1 / end actions jump group 1 / end
> >>> Flow create 0 priority 1 pattern eth / end actions rss / end
> >>>
> >>> In this case if the application enabled the RX offload then if the application
> >> will receive invalid packet
> >>> also the flag in the mbuf will be set.
> >>>
> >>
> >> If application not enabled the Rx offload that means HW checks are not
> >> enabled,
> >> so HW can't know if a packet is OK or not, right.
> >> In that case, for above rule, I expect none of the packets to match, so none
> >> will jump to next group. Are we in the same page here?
> >>
> >> Or do you expect above rule configure the HW to enable the relevant HW
> >> checks first?
> >>
> > If this is required by HW then yes.
> > please see my answer above.
> >
> >>> As you can see those two offloads mode are complementary to each other
> >> and one doesn't force the other one in any
> >>> way.
> >>>
> >>> I hope this is clearer.
> >>>
> >>>
> >>>>>>
> >>>>>>> Signed-off-by: Ori Kam <orika@nvidia.com>
> >>>>>>> ---
> >>>>>>> v2: fix compilation error
> >>>>>>> ---
> >>>>>>> doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++++
> >>>>>>> lib/librte_ethdev/rte_flow.h | 47
> >>>> ++++++++++++++++++++++++++++++
> >>>>>>> 2 files changed, 66 insertions(+)
> >>>>>>>
> >>>>>>> diff --git a/doc/guides/prog_guide/rte_flow.rst
> >>>>>> b/doc/guides/prog_guide/rte_flow.rst
> >>>>>>> index e1b93ecedf..87ef591405 100644
> >>>>>>> --- a/doc/guides/prog_guide/rte_flow.rst
> >>>>>>> +++ b/doc/guides/prog_guide/rte_flow.rst
> >>>>>>> @@ -1398,6 +1398,25 @@ Matches a eCPRI header.
> >>>>>>> - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
> >>>>>>> - Default ``mask`` matches nothing, for all eCPRI messages.
> >>>>>>>
> >>>>>>> +Item: ``PACKET_INTEGRITY_CHECKS``
> >>>>>>> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> >>>>>>> +
> >>>>>>> +Matches packet integrity.
> >>>>>>> +
> >>>>>>> +- ``level``: the encapsulation level that should be checked. level 0
> >> means
> >>>> the
> >>>>>>> + default PMD mode (Can be inner most / outermost). value of 1
> means
> >>>>>> outermost
> >>>>>>> + and higher value means inner header. See also RSS level.
> >>>>>>> +- ``packet_ok``: All HW packet integrity checks have passed based on
> >> the
> >>>>>> max
> >>>>>>> + layer of the packet.
> >>>>>>> + layer of the packet.
> >>>>>>> +- ``l2_ok``: all layer 2 HW integrity checks passed.
> >>>>>>> +- ``l3_ok``: all layer 3 HW integrity checks passed.
> >>>>>>> +- ``l4_ok``: all layer 3 HW integrity checks passed.
> >>>>>>
> >>>>>> s/layer 3/ layer 4/
> >>>>>>
> >>>>> Will fix.
> >>>>>
> >>>>>>> +- ``l2_crc_ok``: layer 2 crc check passed.
> >>>>>>> +- ``ipv4_csum_ok``: ipv4 checksum check passed.
> >>>>>>> +- ``l4_csum_ok``: layer 4 checksum check passed.
> >>>>>>> +- ``l3_len_ok``: the layer 3 len is smaller than the packet len.
> >>>>>>> +
> >>>>>>> Actions
> >>>>>>> ~~~~~~~
> >>>>>>>
> >>>>>>> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> >>>>>>> index 6cc57136ac..77471af2c4 100644
> >>>>>>> --- a/lib/librte_ethdev/rte_flow.h
> >>>>>>> +++ b/lib/librte_ethdev/rte_flow.h
> >>>>>>> @@ -551,6 +551,15 @@ enum rte_flow_item_type {
> >>>>>>> * See struct rte_flow_item_geneve_opt
> >>>>>>> */
> >>>>>>> RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> >>>>>>> +
> >>>>>>> + /**
> >>>>>>> + * [META]
> >>>>>>> + *
> >>>>>>> + * Matches on packet integrity.
> >>>>>>> + *
> >>>>>>> + * See struct rte_flow_item_integrity.
> >>>>>>> + */
> >>>>>>> + RTE_FLOW_ITEM_TYPE_INTEGRITY,
> >>>>>>> };
> >>>>>>>
> >>>>>>> /**
> >>>>>>> @@ -1685,6 +1694,44 @@ rte_flow_item_geneve_opt_mask = {
> >>>>>>> };
> >>>>>>> #endif
> >>>>>>>
> >>>>>>> +__extension__
> >>>>>>> +struct rte_flow_item_integrity {
> >>>>>>> + uint32_t level;
> >>>>>>> + /**< Packet encapsulation level the item should apply to.
> >>>>>>> + * @see rte_flow_action_rss
> >>>>>>> + */
> >>>>>>> + union {
> >>>>>>> + struct {
> >>>>>>> + uint64_t packet_ok:1;
> >>>>>>> + /** The packet is valid after passing all HW
> >> checks. */
> >>>>>>> + uint64_t l2_ok:1;
> >>>>>>> + /**< L2 layer is valid after passing all HW
> >> checks. */
> >>>>>>> + uint64_t l3_ok:1;
> >>>>>>> + /**< L3 layer is valid after passing all HW
> >> checks. */
> >>>>>>> + uint64_t l4_ok:1;
> >>>>>>> + /**< L4 layer is valid after passing all HW
> >> checks. */
> >>>>>>> + uint64_t l2_crc_ok:1;
> >>>>>>> + /**< L2 layer checksum is valid. */
> >>>>>>> + uint64_t ipv4_csum_ok:1;
> >>>>>>> + /**< L3 layer checksum is valid. */
> >>>>>>> + uint64_t l4_csum_ok:1;
> >>>>>>> + /**< L4 layer checksum is valid. */
> >>>>>>> + uint64_t l3_len_ok:1;
> >>>>>>> + /**< The l3 len is smaller than the packet len.
> >> */
> >>>>>>
> >>>>>> packet len?
> >>>>>>
> >>>>> Do you mean replace the l3_len_ok with packet len?
> >>>>
> >>>> no, I was trying to ask what is "packet len" here? frame length, or mbuf
> >> buffer
> >>>> length, or something else?
> >>>>
> >>> Frame length.
> >>>
> >>>>> My only issue is that the check is comparing the l3 len to the packet len.
> >>>>>
> >>>>> If you still think it is better to call it packet len, I'm also O.K with it.
> >>>>>
> >>>>>>> + uint64_t reserved:56;
> >>>>>>> + };
> >>>>>>> + uint64_t value;
> >>>>>>> + };
> >>>>>>> +};
> >>>>>>> +
> >>>>>>> +#ifndef __cplusplus
> >>>>>>> +static const struct rte_flow_item_integrity
> >>>>>>> +rte_flow_item_integrity_mask = {
> >>>>>>> + .level = 0,
> >>>>>>> + .value = 0,
> >>>>>>> +};
> >>>>>>> +#endif
> >>>>>>> +
> >>>>>>> /**
> >>>>>>> * Matching pattern item definition.
> >>>>>>> *
> >>>>>>>
> >>>>>
> >>>
> >
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v2 2/2] app/testpmd: add support for integrity item
2021-04-13 8:14 ` Ferruh Yigit
@ 2021-04-13 11:36 ` Ori Kam
0 siblings, 0 replies; 68+ messages in thread
From: Ori Kam @ 2021-04-13 11:36 UTC (permalink / raw)
To: Ferruh Yigit, Gregory Etelson
Cc: ajit.khaparde, andrew.rybchenko, dev, jerinj, olivier.matz,
NBU-Contact-Thomas Monjalon, Slava Ovsiienko, Xiaoyun Li
Hi
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> item
>
> On 4/13/2021 8:53 AM, Ori Kam wrote:
> > Hi Ferruh,
> >
> >> -----Original Message-----
> >> From: Ferruh Yigit <ferruh.yigit@intel.com>
> >>
> >> On 4/11/2021 6:34 PM, Gregory Etelson wrote:
> >>> From: Ori Kam <orika@nvidia.com>
> >>>
> >>> The integrity item allows the application to match
> >>> on the integrity of a packet.
> >>>
> >>> use example:
> >>> match that packet integrity checks are ok. The checks depend on
> >>> packet layers. For example ICMP packet will not check L4 level.
> >>> flow create 0 ingress pattern integrity value mask 0x01 value spec 0x01
> >>> match that L4 packet is ok - check L2 & L3 & L4 layers:
> >>> flow create 0 ingress pattern integrity value mask 0xfe value spec 0xfe
> >>>
> >>> Signed-off-by: Ori Kam <orika@nvidia.com>
> >>> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> >>> ---
> >>> v2 add testpmd patch
> >>> ---
> >>> app/test-pmd/cmdline_flow.c | 39
> >> +++++++++++++++++++++++++++++++++++++
> >>
> >> Hi Gregory, Ori,
> >>
> >> Can you add some samples to "testpmd_funcs.rst#flow-rules-management"?
> >>
> >> I asked in some other thread but did not get any response, what do you think
> to
> >> make 'testpmd_funcs.rst' sample update mandatory when testpmd flow
> added?
> >>
> > I fully agree that each new function should be mandatory,
>
> What is new function here, new flow API? That should go to flow API
> documentation, 'rte_flow.rst'.
>
I mean something like for example new set,
create_shared_action..
I mean new total new commands.
Does it make sense?
> > The question is do we want that each new item / action (they use existing
> function)
> > I think it is a bit of overhead but I don't have strong opinion.
> >
>
> Since the documentation is for the testpmd usage sample, I was thinking to add
> sample for each new item & action indeed.
> Same of the flow rules not widely used, and it is not always clear how to use
> them, that is why I believe documenting samples can help.
>
I fully agree with you, the question is how to do it,
since in some cases it is jut one line of code,
and in other cases it can be much more complex for example raw_encap,
the new Conntrack action.
I think we should think how we improve the examples in the rte_flow context,
> >>
> >>> 1 file changed, 39 insertions(+)
> >>>
> >>> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> >>> index fb7a3a8bd3..b5dec34325 100644
> >>> --- a/app/test-pmd/cmdline_flow.c
> >>> +++ b/app/test-pmd/cmdline_flow.c
> >>> @@ -289,6 +289,9 @@ enum index {
> >>> ITEM_GENEVE_OPT_TYPE,
> >>> ITEM_GENEVE_OPT_LENGTH,
> >>> ITEM_GENEVE_OPT_DATA,
> >>> + ITEM_INTEGRITY,
> >>> + ITEM_INTEGRITY_LEVEL,
> >>> + ITEM_INTEGRITY_VALUE,
> >>>
> >>> /* Validate/create actions. */
> >>> ACTIONS,
> >>> @@ -956,6 +959,7 @@ static const enum index next_item[] = {
> >>> ITEM_PFCP,
> >>> ITEM_ECPRI,
> >>> ITEM_GENEVE_OPT,
> >>> + ITEM_INTEGRITY,
> >>> END_SET,
> >>> ZERO,
> >>> };
> >>> @@ -1307,6 +1311,19 @@ static const enum index item_geneve_opt[] = {
> >>> ZERO,
> >>> };
> >>>
> >>> +static const enum index item_integrity[] = {
> >>> + ITEM_INTEGRITY_LEVEL,
> >>> + ITEM_INTEGRITY_VALUE,
> >>> + ZERO,
> >>> +};
> >>> +
> >>> +static const enum index item_integrity_lv[] = {
> >>> + ITEM_INTEGRITY_LEVEL,
> >>> + ITEM_INTEGRITY_VALUE,
> >>> + ITEM_NEXT,
> >>> + ZERO,
> >>> +};
> >>> +
> >>> static const enum index next_action[] = {
> >>> ACTION_END,
> >>> ACTION_VOID,
> >>> @@ -3373,6 +3390,28 @@ static const struct token token_list[] = {
> >>> (sizeof(struct rte_flow_item_geneve_opt),
> >>> ITEM_GENEVE_OPT_DATA_SIZE)),
> >>> },
> >>> + [ITEM_INTEGRITY] = {
> >>> + .name = "integrity",
> >>> + .help = "match packet integrity",
> >>> + .priv = PRIV_ITEM(INTEGRITY,
> >>> + sizeof(struct rte_flow_item_integrity)),
> >>> + .next = NEXT(item_integrity),
> >>> + .call = parse_vc,
> >>> + },
> >>> + [ITEM_INTEGRITY_LEVEL] = {
> >>> + .name = "level",
> >>> + .help = "integrity level",
> >>> + .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
> >>> + item_param),
> >>> + .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity,
> >> level)),
> >>> + },
> >>> + [ITEM_INTEGRITY_VALUE] = {
> >>> + .name = "value",
> >>> + .help = "integrity value",
> >>> + .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
> >>> + item_param),
> >>> + .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity,
> >> value)),
> >>> + },
> >>> /* Validate/create actions. */
> >>> [ACTIONS] = {
> >>> .name = "actions",
> >>>
> >
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 0/2] add packet integrity checks
2021-04-07 10:32 ` Ori Kam
2021-04-07 11:01 ` Jerin Jacob
@ 2021-04-13 15:16 ` Gregory Etelson
2021-04-13 15:16 ` [dpdk-dev] [PATCH v3 1/2] ethdev: " Gregory Etelson
2021-04-13 15:16 ` [dpdk-dev] [PATCH v3 2/2] app/testpmd: add support for integrity item Gregory Etelson
2021-04-14 12:56 ` [dpdk-dev] [PATCH v4 0/2] add packet integrity checks Gregory Etelson
` (4 subsequent siblings)
6 siblings, 2 replies; 68+ messages in thread
From: Gregory Etelson @ 2021-04-13 15:16 UTC (permalink / raw)
To: orika
Cc: ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
jerinjacobk, olivier.matz, thomas, viacheslavo, getelson, matan,
rasland
V3: update testpmd user guide.
Ori Kam (2):
ethdev: add packet integrity checks
app/testpmd: add support for integrity item
app/test-pmd/cmdline_flow.c | 39 +++++++++++++++++
doc/guides/prog_guide/rte_flow.rst | 19 +++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 +++
lib/librte_ethdev/rte_flow.h | 47 +++++++++++++++++++++
4 files changed, 112 insertions(+)
--
2.25.1
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 1/2] ethdev: add packet integrity checks
2021-04-13 15:16 ` [dpdk-dev] [PATCH v3 0/2] " Gregory Etelson
@ 2021-04-13 15:16 ` Gregory Etelson
2021-04-13 15:16 ` [dpdk-dev] [PATCH v3 2/2] app/testpmd: add support for integrity item Gregory Etelson
1 sibling, 0 replies; 68+ messages in thread
From: Gregory Etelson @ 2021-04-13 15:16 UTC (permalink / raw)
To: orika
Cc: ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
jerinjacobk, olivier.matz, thomas, viacheslavo, getelson, matan,
rasland
From: Ori Kam <orika@nvidia.com>
Currently, DPDK application can offload the checksum check,
and report it in the mbuf.
However, as more and more applications are offloading some or all
logic and action to the HW, there is a need to check the packet
integrity so the right decision can be taken.
The application logic can be positive meaning if the packet is
valid jump / do actions, or negative if packet is not valid
jump to SW / do actions (like drop) a, and add default flow
(match all in low priority) that will direct the miss packet
to the miss path.
Since currenlty rte_flow works in positive way the assumtion is
that the postive way will be the common way in this case also.
When thinking what is the best API to implement such feature,
we need to considure the following (in no specific order):
1. API breakage.
2. Simplicity.
3. Performance.
4. HW capabilities.
5. rte_flow limitation.
6. Flexability.
First option: Add integrity flags to each of the items.
For example add checksum_ok to ipv4 item.
Pros:
1. No new rte_flow item.
2. Simple in the way that on each item the app can see
what checks are available.
Cons:
1. API breakage.
2. increase number of flows, since app can't add global rule and
must have dedicated flow for each of the flow combinations, for example
matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
result in 5 flows.
Second option: dedicated item
Pros:
1. No API breakage, and there will be no for some time due to having
extra space. (by using bits)
2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
IPv6.
3. Simplicity application can just look at one place to see all possible
checks.
4. Allow future support for more tests.
Cons:
1. New item, that holds number of fields from different items.
For starter the following bits are suggested:
1. packet_ok - means that all HW checks depending on packet layer have
passed. This may mean that in some HW such flow should be splited to
number of flows or fail.
2. l2_ok - all check for layer 2 have passed.
3. l3_ok - all check for layer 3 have passed. If packet doens't have
l3 layer this check should fail.
4. l4_ok - all check for layer 4 have passed. If packet doesn't
have l4 layer this check should fail.
5. l2_crc_ok - the layer 2 crc is O.K.
6. ipv4_csum_ok - IPv4 checksum is O.K. it is possible that the
IPv4 checksum will be O.K. but the l3_ok will be 0. it is not
possible that checksum will be 0 and the l3_ok will be 1.
7. l4_csum_ok - layer 4 checksum is O.K.
8. l3_len_OK - check that the reported layer 3 len is smaller than the
frame len.
Example of usage:
1. check packets from all possible layers for integrity.
flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
2. Check only packet with layer 4 (UDP / TCP)
flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
Signed-off-by: Ori Kam <orika@nvidia.com>
---
doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++++
lib/librte_ethdev/rte_flow.h | 47 ++++++++++++++++++++++++++++++
2 files changed, 66 insertions(+)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index e1b93ecedf..cccc0cfd05 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1398,6 +1398,25 @@ Matches a eCPRI header.
- ``hdr``: eCPRI header definition (``rte_ecpri.h``).
- Default ``mask`` matches nothing, for all eCPRI messages.
+Item: ``PACKET_INTEGRITY_CHECKS``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Matches packet integrity.
+
+- ``level``: the encapsulation level that should be checked. level 0 means the
+ default PMD mode (Can be inner most / outermost). value of 1 means outermost
+ and higher value means inner header. See also RSS level.
+- ``packet_ok``: All HW packet integrity checks have passed based on the max
+ layer of the packet.
+ layer of the packet.
+- ``l2_ok``: all layer 2 HW integrity checks passed.
+- ``l3_ok``: all layer 3 HW integrity checks passed.
+- ``l4_ok``: all layer 4 HW integrity checks passed.
+- ``l2_crc_ok``: layer 2 crc check passed.
+- ``ipv4_csum_ok``: ipv4 checksum check passed.
+- ``l4_csum_ok``: layer 4 checksum check passed.
+- ``l3_len_ok``: the layer 3 len is smaller than the frame len.
+
Actions
~~~~~~~
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index c476a0f59d..0dbc5e0c44 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -551,6 +551,15 @@ enum rte_flow_item_type {
* See struct rte_flow_item_geneve_opt
*/
RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
+
+ /**
+ * [META]
+ *
+ * Matches on packet integrity.
+ *
+ * See struct rte_flow_item_integrity.
+ */
+ RTE_FLOW_ITEM_TYPE_INTEGRITY,
};
/**
@@ -1685,6 +1694,44 @@ rte_flow_item_geneve_opt_mask = {
};
#endif
+__extension__
+struct rte_flow_item_integrity {
+ uint32_t level;
+ /**< Packet encapsulation level the item should apply to.
+ * @see rte_flow_action_rss
+ */
+ union {
+ struct {
+ uint64_t packet_ok:1;
+ /** The packet is valid after passing all HW checks. */
+ uint64_t l2_ok:1;
+ /**< L2 layer is valid after passing all HW checks. */
+ uint64_t l3_ok:1;
+ /**< L3 layer is valid after passing all HW checks. */
+ uint64_t l4_ok:1;
+ /**< L4 layer is valid after passing all HW checks. */
+ uint64_t l2_crc_ok:1;
+ /**< L2 layer crc is valid. */
+ uint64_t ipv4_csum_ok:1;
+ /**< IPv4 layer checksum is valid. */
+ uint64_t l4_csum_ok:1;
+ /**< L4 layer checksum is valid. */
+ uint64_t l3_len_ok:1;
+ /**< The l3 len is smaller than the frame len. */
+ uint64_t reserved:56;
+ };
+ uint64_t value;
+ };
+};
+
+#ifndef __cplusplus
+static const struct rte_flow_item_integrity
+rte_flow_item_integrity_mask = {
+ .level = 0,
+ .value = 0,
+};
+#endif
+
/**
* Matching pattern item definition.
*
--
2.25.1
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v3 2/2] app/testpmd: add support for integrity item
2021-04-13 15:16 ` [dpdk-dev] [PATCH v3 0/2] " Gregory Etelson
2021-04-13 15:16 ` [dpdk-dev] [PATCH v3 1/2] ethdev: " Gregory Etelson
@ 2021-04-13 15:16 ` Gregory Etelson
2021-04-13 17:15 ` Ferruh Yigit
1 sibling, 1 reply; 68+ messages in thread
From: Gregory Etelson @ 2021-04-13 15:16 UTC (permalink / raw)
To: orika
Cc: ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
jerinjacobk, olivier.matz, thomas, viacheslavo, getelson, matan,
rasland, Xiaoyun Li
From: Ori Kam <orika@nvidia.com>
The integrity item allows the application to match
on the integrity of a packet.
use example:
match that packet integrity checks are ok. The checks depend on
packet layers. For example ICMP packet will not check L4 level.
flow create 0 ingress pattern integrity value mask 0x01 value spec 0x01
match that L4 packet is ok - check L2 & L3 & L4 layers:
flow create 0 ingress pattern integrity value mask 0xfe value spec 0xfe
Signed-off-by: Ori Kam <orika@nvidia.com>
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
---
app/test-pmd/cmdline_flow.c | 39 +++++++++++++++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 ++++
2 files changed, 46 insertions(+)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index fb7a3a8bd3..b5dec34325 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -289,6 +289,9 @@ enum index {
ITEM_GENEVE_OPT_TYPE,
ITEM_GENEVE_OPT_LENGTH,
ITEM_GENEVE_OPT_DATA,
+ ITEM_INTEGRITY,
+ ITEM_INTEGRITY_LEVEL,
+ ITEM_INTEGRITY_VALUE,
/* Validate/create actions. */
ACTIONS,
@@ -956,6 +959,7 @@ static const enum index next_item[] = {
ITEM_PFCP,
ITEM_ECPRI,
ITEM_GENEVE_OPT,
+ ITEM_INTEGRITY,
END_SET,
ZERO,
};
@@ -1307,6 +1311,19 @@ static const enum index item_geneve_opt[] = {
ZERO,
};
+static const enum index item_integrity[] = {
+ ITEM_INTEGRITY_LEVEL,
+ ITEM_INTEGRITY_VALUE,
+ ZERO,
+};
+
+static const enum index item_integrity_lv[] = {
+ ITEM_INTEGRITY_LEVEL,
+ ITEM_INTEGRITY_VALUE,
+ ITEM_NEXT,
+ ZERO,
+};
+
static const enum index next_action[] = {
ACTION_END,
ACTION_VOID,
@@ -3373,6 +3390,28 @@ static const struct token token_list[] = {
(sizeof(struct rte_flow_item_geneve_opt),
ITEM_GENEVE_OPT_DATA_SIZE)),
},
+ [ITEM_INTEGRITY] = {
+ .name = "integrity",
+ .help = "match packet integrity",
+ .priv = PRIV_ITEM(INTEGRITY,
+ sizeof(struct rte_flow_item_integrity)),
+ .next = NEXT(item_integrity),
+ .call = parse_vc,
+ },
+ [ITEM_INTEGRITY_LEVEL] = {
+ .name = "level",
+ .help = "integrity level",
+ .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity, level)),
+ },
+ [ITEM_INTEGRITY_VALUE] = {
+ .name = "value",
+ .help = "integrity value",
+ .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity, value)),
+ },
/* Validate/create actions. */
[ACTIONS] = {
.name = "actions",
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 36f0a328a5..f1ad674336 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3783,6 +3783,13 @@ This section lists supported pattern items and their attributes, if any.
- ``s_field {unsigned}``: S field.
- ``seid {unsigned}``: session endpoint identifier.
+- ``integrity``: match packet integrity.
+
+ - ``level {unsigned}``: Packet encapsulation level the item should
+ apply to. See rte_flow_action_rss for details.
+ - ``value {unsigned}``: A bitmask that specify what packet elements
+ must be matched for integrity.
+
Actions list
^^^^^^^^^^^^
--
2.25.1
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v3 2/2] app/testpmd: add support for integrity item
2021-04-13 15:16 ` [dpdk-dev] [PATCH v3 2/2] app/testpmd: add support for integrity item Gregory Etelson
@ 2021-04-13 17:15 ` Ferruh Yigit
0 siblings, 0 replies; 68+ messages in thread
From: Ferruh Yigit @ 2021-04-13 17:15 UTC (permalink / raw)
To: Gregory Etelson, orika
Cc: ajit.khaparde, andrew.rybchenko, dev, jerinj, jerinjacobk,
olivier.matz, thomas, viacheslavo, matan, rasland, Xiaoyun Li
On 4/13/2021 4:16 PM, Gregory Etelson wrote:
> From: Ori Kam <orika@nvidia.com>
>
> The integrity item allows the application to match
> on the integrity of a packet.
>
> use example:
> match that packet integrity checks are ok. The checks depend on
> packet layers. For example ICMP packet will not check L4 level.
> flow create 0 ingress pattern integrity value mask 0x01 value spec 0x01
> match that L4 packet is ok - check L2 & L3 & L4 layers:
> flow create 0 ingress pattern integrity value mask 0xfe value spec 0xfe
>
> Signed-off-by: Ori Kam <orika@nvidia.com>
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> ---
> app/test-pmd/cmdline_flow.c | 39 +++++++++++++++++++++
> doc/guides/testpmd_app_ug/testpmd_funcs.rst | 7 ++++
> 2 files changed, 46 insertions(+)
>
> diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
> index fb7a3a8bd3..b5dec34325 100644
> --- a/app/test-pmd/cmdline_flow.c
> +++ b/app/test-pmd/cmdline_flow.c
> @@ -289,6 +289,9 @@ enum index {
> ITEM_GENEVE_OPT_TYPE,
> ITEM_GENEVE_OPT_LENGTH,
> ITEM_GENEVE_OPT_DATA,
> + ITEM_INTEGRITY,
> + ITEM_INTEGRITY_LEVEL,
> + ITEM_INTEGRITY_VALUE,
>
> /* Validate/create actions. */
> ACTIONS,
> @@ -956,6 +959,7 @@ static const enum index next_item[] = {
> ITEM_PFCP,
> ITEM_ECPRI,
> ITEM_GENEVE_OPT,
> + ITEM_INTEGRITY,
> END_SET,
> ZERO,
> };
> @@ -1307,6 +1311,19 @@ static const enum index item_geneve_opt[] = {
> ZERO,
> };
>
> +static const enum index item_integrity[] = {
> + ITEM_INTEGRITY_LEVEL,
> + ITEM_INTEGRITY_VALUE,
> + ZERO,
> +};
> +
> +static const enum index item_integrity_lv[] = {
> + ITEM_INTEGRITY_LEVEL,
> + ITEM_INTEGRITY_VALUE,
> + ITEM_NEXT,
> + ZERO,
> +};
> +
> static const enum index next_action[] = {
> ACTION_END,
> ACTION_VOID,
> @@ -3373,6 +3390,28 @@ static const struct token token_list[] = {
> (sizeof(struct rte_flow_item_geneve_opt),
> ITEM_GENEVE_OPT_DATA_SIZE)),
> },
> + [ITEM_INTEGRITY] = {
> + .name = "integrity",
> + .help = "match packet integrity",
> + .priv = PRIV_ITEM(INTEGRITY,
> + sizeof(struct rte_flow_item_integrity)),
> + .next = NEXT(item_integrity),
> + .call = parse_vc,
> + },
> + [ITEM_INTEGRITY_LEVEL] = {
> + .name = "level",
> + .help = "integrity level",
> + .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
> + item_param),
> + .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity, level)),
> + },
> + [ITEM_INTEGRITY_VALUE] = {
> + .name = "value",
> + .help = "integrity value",
> + .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
> + item_param),
> + .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity, value)),
> + },
> /* Validate/create actions. */
> [ACTIONS] = {
> .name = "actions",
> diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> index 36f0a328a5..f1ad674336 100644
> --- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> +++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
> @@ -3783,6 +3783,13 @@ This section lists supported pattern items and their attributes, if any.
> - ``s_field {unsigned}``: S field.
> - ``seid {unsigned}``: session endpoint identifier.
>
> +- ``integrity``: match packet integrity.
> +
> + - ``level {unsigned}``: Packet encapsulation level the item should
> + apply to. See rte_flow_action_rss for details.
> + - ``value {unsigned}``: A bitmask that specify what packet elements
> + must be matched for integrity.
> +
> Actions list
> ^^^^^^^^^^^^
>
I was thinking about adding same sample rules, if you check through the end of
this same documentation, there are various sections to document samples of
various flow commands. Can it be possible to add some samples for the integrity
flow rules?
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/2] ethdev: add packet integrity checks
2021-04-13 10:21 ` Ori Kam
@ 2021-04-13 17:28 ` Ferruh Yigit
0 siblings, 0 replies; 68+ messages in thread
From: Ferruh Yigit @ 2021-04-13 17:28 UTC (permalink / raw)
To: Ori Kam, Gregory Etelson
Cc: ajit.khaparde, andrew.rybchenko, dev, jerinj, olivier.matz,
NBU-Contact-Thomas Monjalon, Slava Ovsiienko, Qi Zhang
On 4/13/2021 11:21 AM, Ori Kam wrote:
>
>
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>> Sent: Tuesday, April 13, 2021 11:30 AM
>>
>> On 4/13/2021 9:18 AM, Ori Kam wrote:
>>> Hi Ferruh,
>>>
>>>> -----Original Message-----
>>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>>>> Subject: Re: [PATCH v2 1/2] ethdev: add packet integrity checks
>>>>
>>>> On 4/13/2021 8:12 AM, Ori Kam wrote:
>>>>> Hi Ferruh,
>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>>>>>> Subject: Re: [PATCH v2 1/2] ethdev: add packet integrity checks
>>>>>>
>>>>>> On 4/12/2021 8:26 PM, Ori Kam wrote:
>>>>>>> Hi Ferruh,
>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>>>>>>>> Subject: Re: [PATCH v2 1/2] ethdev: add packet integrity checks
>>>>>>>>
>>>>>>>> On 4/11/2021 6:34 PM, Gregory Etelson wrote:
>>>>>>>>> From: Ori Kam <orika@nvidia.com>
>>>>>>>>>
>>>>>>>>> Currently, DPDK application can offload the checksum check,
>>>>>>>>> and report it in the mbuf.
>>>>>>>>>
>>>>>>>>> However, as more and more applications are offloading some or all
>>>>>>>>> logic and action to the HW, there is a need to check the packet
>>>>>>>>> integrity so the right decision can be taken.
>>>>>>>>>
>>>>>>>>> The application logic can be positive meaning if the packet is
>>>>>>>>> valid jump / do actions, or negative if packet is not valid
>>>>>>>>> jump to SW / do actions (like drop) a, and add default flow
>>>>>>>>> (match all in low priority) that will direct the miss packet
>>>>>>>>> to the miss path.
>>>>>>>>>
>>>>>>>>> Since currenlty rte_flow works in positive way the assumtion is
>>>>>>>>> that the postive way will be the common way in this case also.
>>>>>>>>>
>>>>>>>>> When thinking what is the best API to implement such feature,
>>>>>>>>> we need to considure the following (in no specific order):
>>>>>>>>> 1. API breakage.
>>>>>>>>> 2. Simplicity.
>>>>>>>>> 3. Performance.
>>>>>>>>> 4. HW capabilities.
>>>>>>>>> 5. rte_flow limitation.
>>>>>>>>> 6. Flexability.
>>>>>>>>>
>>>>>>>>> First option: Add integrity flags to each of the items.
>>>>>>>>> For example add checksum_ok to ipv4 item.
>>>>>>>>>
>>>>>>>>> Pros:
>>>>>>>>> 1. No new rte_flow item.
>>>>>>>>> 2. Simple in the way that on each item the app can see
>>>>>>>>> what checks are available.
>>>>>>>>>
>>>>>>>>> Cons:
>>>>>>>>> 1. API breakage.
>>>>>>>>> 2. increase number of flows, since app can't add global rule and
>>>>>>>>> must have dedicated flow for each of the flow combinations, for
>>>>>> example
>>>>>>>>> matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
>>>>>>>>> result in 5 flows.
>>>>>>>>>
>>>>>>>>> Second option: dedicated item
>>>>>>>>>
>>>>>>>>> Pros:
>>>>>>>>> 1. No API breakage, and there will be no for some time due to having
>>>>>>>>> extra space. (by using bits)
>>>>>>>>> 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
>>>>>>>>> IPv6.
>>>>>>>>> 3. Simplicity application can just look at one place to see all possible
>>>>>>>>> checks.
>>>>>>>>> 4. Allow future support for more tests.
>>>>>>>>>
>>>>>>>>> Cons:
>>>>>>>>> 1. New item, that holds number of fields from different items.
>>>>>>>>>
>>>>>>>>> For starter the following bits are suggested:
>>>>>>>>> 1. packet_ok - means that all HW checks depending on packet layer
>> have
>>>>>>>>> passed. This may mean that in some HW such flow should be
>> splited
>>>> to
>>>>>>>>> number of flows or fail.
>>>>>>>>> 2. l2_ok - all check flor layer 2 have passed.
>>>>>>>>> 3. l3_ok - all check flor layer 2 have passed. If packet doens't have
>>>>>>>>> l3 layer this check shoudl fail.
>>>>>>>>> 4. l4_ok - all check flor layer 2 have passed. If packet doesn't
>>>>>>>>> have l4 layer this check should fail.
>>>>>>>>> 5. l2_crc_ok - the layer 2 crc is O.K. it is possible that the crc will
>>>>>>>>> be O.K. but the l3_ok will be 0. it is not possible that l2_crc_ok will
>>>>>>>>> be 0 and the l3_ok will be 0.
>>>>>>>>> 6. ipv4_csum_ok - IPv4 checksum is O.K.
>>>>>>>>> 7. l4_csum_ok - layer 4 checksum is O.K.
>>>>>>>>> 8. l3_len_OK - check that the reported layer 3 len is smaller than the
>>>>>>>>> packet len.
>>>>>>>>>
>>>>>>>>> Example of usage:
>>>>>>>>> 1. check packets from all possible layers for integrity.
>>>>>>>>> flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
>>>>>>>>>
>>>>>>>>> 2. Check only packet with layer 4 (UDP / TCP)
>>>>>>>>> flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1
>> l4_ok =
>>>> 1
>>>>>>>>>
>>>>>>>>
>>>>>>>> Hi Ori,
>>>>>>>>
>>>>>>>> Is the intention of the API just filtering, like apply some action to the
>>>>>>>> packets based on their integration status. Like drop packets their l2_crc
>>>>>>>> checksum failed? Here configuration is done by existing offload APIs.
>>>>>>>>
>>>>>>>> Or is the intention to configure the integration check on NIC, like to say
>>>>>>>> enable layer 2 checks, and do the action based on integration check
>>>> status.
>>>>>>>>
>>>>>>> If I understand your question the first case is the one that this patch is
>>>>>> targeting.
>>>>>>> meaning based on those bits route/apply actions to the packet while still
>> in
>>>>>> the
>>>>>>> HW.
>>>>>>>
>>>>>>> This is not design to enable the queue status bits.
>>>>>>> In the use case suggestion by this patch, just like you said the app
>>>>>>> can decide to drop the packet before arriving to the queue, application
>>>> may
>>>>>> also
>>>>>>> use the mark + queue action to mark to the SW what is the issue with
>> this
>>>>>> packet.
>>>>>>>
>>>>>>> I'm not sure I understand your comment about "here configuration is
>> done
>>>> by
>>>>>> existing
>>>>>>> offload API" do you mean like the drop / jump to table / any other
>> rte_flow
>>>>>> action?
>>>>>>>
>>>>>>>
>>>>>>
>>>>>> I am asking because difference between device configuration and packet
>>>>>> filtering
>>>>>> seems getting more blurred in the flow API.
>>>>>>
>>>>>> Currently L4 checksum offload is requested by application via setting
>>>>>> 'DEV_RX_OFFLOAD_TCP_CKSUM' (UDP/SCTP/...) offload flags. This is the
>>>> way
>>>>>> to
>>>>>> configure HW.
>>>>>>
>>>>>> Is the intention of this patch doing packet filtering after device configured
>>>>>> with above offload API?
>>>>>>
>>>>>> Or is the intention HW to be configured via flow API, like if "l4_ok = 1" is
>> set
>>>>>> in the rule, will it enable L4 checks first and do the filtering later?
>>>>>>
>>>>>> If not what is the expected behavior when integration checks are not
>>>> enabled
>>>>>> when the rule is created?
>>>>>>
>>>>>
>>>>> Let me try to explain it in a different way:
>>>>> When application enables 'DEV_RX_OFFLOAD_TCP_CKSUM' (UDP/SCTP/...)
>>>> offload flags
>>>>> It only means that the HW will report the checksum in the mbuf.
>>>>
>>>> It is not only for reporting in mbuf, it also configures the HW to enable the
>>>> relevant checks. This is for device configuration.
>>>>
>>>> Are these checks always enabled in mlx devices?
>>>>
>>> Yes, they are always on.
>>> The RX offload just enables the RX burst to update the mbuf.
>>> In any case if HW needs to be enabled, then
>>> the PMD can enable the HW when first flow is inserted and just not
>>> copy the value the mbuf.
>>>
>>
>> This was my initial question, if the new rule is just for filtering or
>> "configuring device + filtering".
>>
>> So, two questions:
>> 1) Do we want to duplicate the way to configure the HW checks
>> 2) Do we want want to use flow API as a device configuration interface, (as said
>> before I am feeling we are going that path more and more)
>>
>> Since the HW is always on in your case that is not a big problem for you but (1)
>> can cause trouble for other vendors, for the cases like if HW check enabled via
>> flow API, later it is disabled by clearing the offload flag, what will be the
>> status of the flow rule?
>>
>
> 1. from my point of view it is not duplication since if the RX offload is enabled then there is also
> setting of the metadata in the RX.
> While if the RX offload is not enabled then even if the HW is working there is no need to copy
> the result to the mbuf fields.
You are still looking offload flags as just to enable metadata setting in mbuf,
but I am more concerned about the HW configuration part which is duplicated.
If we can explicitly clarify what will be the expected behavior if the requested
integrity is not enabled by driver or not supported at all by device, I have no
objection to the set.
> 2. see my response above.
> I think that since rte_flow is working with the HW, at any case creating flow may need to enable some HW.
>
> I think at any case when offloading using queues the device must be stopped (at least in some
> HW) and if the device is stopped in any case all flows are removed so I don't see
> the dependency between the rx queue offload and the rte_flow one.
> like I said even if the HW is enabled it doesn't mean the mbuf should be updated
> (the mbuf should be updated only if the queue offload was enabled. )
>
>>>>> Lets call this mode RX queue offload.
>>>>>
>>>>> Now I'm introducing rte_flow offload,
>>>>> This means that the application can create rte_flow that matches those
>>>>> bits and based on that take different actions that are defined by the
>> rte_flow.
>>>>> Example for such a flow
>>>>> Flow create 0 ingress pattern integrity spec packet_ok = 0 mask packet_ok
>> =
>>>> 1 / end actions count / drop / end
>>>>>
>>>>> Offloading such a flow will result that all invalid packets will be counted
>> and
>>>> dropped, in the HW
>>>>> so even if the RX queue offload was enabled, no invalid packets will arrive
>> to
>>>> the application.
>>>>>
>>>>> In other case lets assume that the application wants all valid packets to
>> jump
>>>> to the next group,
>>>>> and all the reset of the packets will go to SW. (we can assume that later in
>>>> the pipeline also valid packets
>>>>> will arrive to the application)
>>>>> Flow create 0 ingress pattern integrity spec packet_ok = 1 mask packet_ok
>> =
>>>> 1 / end actions jump group 1 / end
>>>>> Flow create 0 priority 1 pattern eth / end actions rss / end
>>>>>
>>>>> In this case if the application enabled the RX offload then if the application
>>>> will receive invalid packet
>>>>> also the flag in the mbuf will be set.
>>>>>
>>>>
>>>> If application not enabled the Rx offload that means HW checks are not
>>>> enabled,
>>>> so HW can't know if a packet is OK or not, right.
>>>> In that case, for above rule, I expect none of the packets to match, so none
>>>> will jump to next group. Are we in the same page here?
>>>>
>>>> Or do you expect above rule configure the HW to enable the relevant HW
>>>> checks first?
>>>>
>>> If this is required by HW then yes.
>>> please see my answer above.
>>>
>>>>> As you can see those two offloads mode are complementary to each other
>>>> and one doesn't force the other one in any
>>>>> way.
>>>>>
>>>>> I hope this is clearer.
>>>>>
>>>>>
>>>>>>>>
>>>>>>>>> Signed-off-by: Ori Kam <orika@nvidia.com>
>>>>>>>>> ---
>>>>>>>>> v2: fix compilation error
>>>>>>>>> ---
>>>>>>>>> doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++++
>>>>>>>>> lib/librte_ethdev/rte_flow.h | 47
>>>>>> ++++++++++++++++++++++++++++++
>>>>>>>>> 2 files changed, 66 insertions(+)
>>>>>>>>>
>>>>>>>>> diff --git a/doc/guides/prog_guide/rte_flow.rst
>>>>>>>> b/doc/guides/prog_guide/rte_flow.rst
>>>>>>>>> index e1b93ecedf..87ef591405 100644
>>>>>>>>> --- a/doc/guides/prog_guide/rte_flow.rst
>>>>>>>>> +++ b/doc/guides/prog_guide/rte_flow.rst
>>>>>>>>> @@ -1398,6 +1398,25 @@ Matches a eCPRI header.
>>>>>>>>> - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
>>>>>>>>> - Default ``mask`` matches nothing, for all eCPRI messages.
>>>>>>>>>
>>>>>>>>> +Item: ``PACKET_INTEGRITY_CHECKS``
>>>>>>>>> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>>>>>>>> +
>>>>>>>>> +Matches packet integrity.
>>>>>>>>> +
>>>>>>>>> +- ``level``: the encapsulation level that should be checked. level 0
>>>> means
>>>>>> the
>>>>>>>>> + default PMD mode (Can be inner most / outermost). value of 1
>> means
>>>>>>>> outermost
>>>>>>>>> + and higher value means inner header. See also RSS level.
>>>>>>>>> +- ``packet_ok``: All HW packet integrity checks have passed based on
>>>> the
>>>>>>>> max
>>>>>>>>> + layer of the packet.
>>>>>>>>> + layer of the packet.
>>>>>>>>> +- ``l2_ok``: all layer 2 HW integrity checks passed.
>>>>>>>>> +- ``l3_ok``: all layer 3 HW integrity checks passed.
>>>>>>>>> +- ``l4_ok``: all layer 3 HW integrity checks passed.
>>>>>>>>
>>>>>>>> s/layer 3/ layer 4/
>>>>>>>>
>>>>>>> Will fix.
>>>>>>>
>>>>>>>>> +- ``l2_crc_ok``: layer 2 crc check passed.
>>>>>>>>> +- ``ipv4_csum_ok``: ipv4 checksum check passed.
>>>>>>>>> +- ``l4_csum_ok``: layer 4 checksum check passed.
>>>>>>>>> +- ``l3_len_ok``: the layer 3 len is smaller than the packet len.
>>>>>>>>> +
>>>>>>>>> Actions
>>>>>>>>> ~~~~~~~
>>>>>>>>>
>>>>>>>>> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
>>>>>>>>> index 6cc57136ac..77471af2c4 100644
>>>>>>>>> --- a/lib/librte_ethdev/rte_flow.h
>>>>>>>>> +++ b/lib/librte_ethdev/rte_flow.h
>>>>>>>>> @@ -551,6 +551,15 @@ enum rte_flow_item_type {
>>>>>>>>> * See struct rte_flow_item_geneve_opt
>>>>>>>>> */
>>>>>>>>> RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
>>>>>>>>> +
>>>>>>>>> + /**
>>>>>>>>> + * [META]
>>>>>>>>> + *
>>>>>>>>> + * Matches on packet integrity.
>>>>>>>>> + *
>>>>>>>>> + * See struct rte_flow_item_integrity.
>>>>>>>>> + */
>>>>>>>>> + RTE_FLOW_ITEM_TYPE_INTEGRITY,
>>>>>>>>> };
>>>>>>>>>
>>>>>>>>> /**
>>>>>>>>> @@ -1685,6 +1694,44 @@ rte_flow_item_geneve_opt_mask = {
>>>>>>>>> };
>>>>>>>>> #endif
>>>>>>>>>
>>>>>>>>> +__extension__
>>>>>>>>> +struct rte_flow_item_integrity {
>>>>>>>>> + uint32_t level;
>>>>>>>>> + /**< Packet encapsulation level the item should apply to.
>>>>>>>>> + * @see rte_flow_action_rss
>>>>>>>>> + */
>>>>>>>>> + union {
>>>>>>>>> + struct {
>>>>>>>>> + uint64_t packet_ok:1;
>>>>>>>>> + /** The packet is valid after passing all HW
>>>> checks. */
>>>>>>>>> + uint64_t l2_ok:1;
>>>>>>>>> + /**< L2 layer is valid after passing all HW
>>>> checks. */
>>>>>>>>> + uint64_t l3_ok:1;
>>>>>>>>> + /**< L3 layer is valid after passing all HW
>>>> checks. */
>>>>>>>>> + uint64_t l4_ok:1;
>>>>>>>>> + /**< L4 layer is valid after passing all HW
>>>> checks. */
>>>>>>>>> + uint64_t l2_crc_ok:1;
>>>>>>>>> + /**< L2 layer checksum is valid. */
>>>>>>>>> + uint64_t ipv4_csum_ok:1;
>>>>>>>>> + /**< L3 layer checksum is valid. */
>>>>>>>>> + uint64_t l4_csum_ok:1;
>>>>>>>>> + /**< L4 layer checksum is valid. */
>>>>>>>>> + uint64_t l3_len_ok:1;
>>>>>>>>> + /**< The l3 len is smaller than the packet len.
>>>> */
>>>>>>>>
>>>>>>>> packet len?
>>>>>>>>
>>>>>>> Do you mean replace the l3_len_ok with packet len?
>>>>>>
>>>>>> no, I was trying to ask what is "packet len" here? frame length, or mbuf
>>>> buffer
>>>>>> length, or something else?
>>>>>>
>>>>> Frame length.
>>>>>
>>>>>>> My only issue is that the check is comparing the l3 len to the packet len.
>>>>>>>
>>>>>>> If you still think it is better to call it packet len, I'm also O.K with it.
>>>>>>>
>>>>>>>>> + uint64_t reserved:56;
>>>>>>>>> + };
>>>>>>>>> + uint64_t value;
>>>>>>>>> + };
>>>>>>>>> +};
>>>>>>>>> +
>>>>>>>>> +#ifndef __cplusplus
>>>>>>>>> +static const struct rte_flow_item_integrity
>>>>>>>>> +rte_flow_item_integrity_mask = {
>>>>>>>>> + .level = 0,
>>>>>>>>> + .value = 0,
>>>>>>>>> +};
>>>>>>>>> +#endif
>>>>>>>>> +
>>>>>>>>> /**
>>>>>>>>> * Matching pattern item definition.
>>>>>>>>> *
>>>>>>>>>
>>>>>>>
>>>>>
>>>
>
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v4 0/2] add packet integrity checks
2021-04-07 10:32 ` Ori Kam
2021-04-07 11:01 ` Jerin Jacob
2021-04-13 15:16 ` [dpdk-dev] [PATCH v3 0/2] " Gregory Etelson
@ 2021-04-14 12:56 ` Gregory Etelson
2021-04-14 12:56 ` [dpdk-dev] [PATCH v4 1/2] ethdev: " Gregory Etelson
2021-04-14 12:57 ` [dpdk-dev] [PATCH v4 2/2] app/testpmd: add support for integrity item Gregory Etelson
2021-04-14 16:09 ` [dpdk-dev] [PATCH v5 0/2] add packet integrity checks Gregory Etelson
` (3 subsequent siblings)
6 siblings, 2 replies; 68+ messages in thread
From: Gregory Etelson @ 2021-04-14 12:56 UTC (permalink / raw)
To: orika
Cc: ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
jerinjacobk, olivier.matz, thomas, viacheslavo, getelson, matan,
rasland
v4:
- update API documentation
- update testpmd documentation
Ori Kam (2):
ethdev: add packet integrity checks
app/testpmd: add support for integrity item
app/test-pmd/cmdline_flow.c | 39 +++++++++++++++++
doc/guides/prog_guide/rte_flow.rst | 19 ++++++++
doc/guides/rel_notes/release_21_05.rst | 5 +++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 28 ++++++++++++
lib/librte_ethdev/rte_flow.h | 48 +++++++++++++++++++++
5 files changed, 139 insertions(+)
--
2.25.1
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v4 1/2] ethdev: add packet integrity checks
2021-04-14 12:56 ` [dpdk-dev] [PATCH v4 0/2] add packet integrity checks Gregory Etelson
@ 2021-04-14 12:56 ` Gregory Etelson
2021-04-14 13:27 ` Ferruh Yigit
2021-04-14 12:57 ` [dpdk-dev] [PATCH v4 2/2] app/testpmd: add support for integrity item Gregory Etelson
1 sibling, 1 reply; 68+ messages in thread
From: Gregory Etelson @ 2021-04-14 12:56 UTC (permalink / raw)
To: orika
Cc: ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
jerinjacobk, olivier.matz, thomas, viacheslavo, getelson, matan,
rasland
From: Ori Kam <orika@nvidia.com>
Currently, DPDK application can offload the checksum check,
and report it in the mbuf.
However, as more and more applications are offloading some or all
logic and action to the HW, there is a need to check the packet
integrity so the right decision can be taken.
The application logic can be positive meaning if the packet is
valid jump / do actions, or negative if packet is not valid
jump to SW / do actions (like drop) a, and add default flow
(match all in low priority) that will direct the miss packet
to the miss path.
Since currently rte_flow works in positive way the assumption is
that the positive way will be the common way in this case also.
When thinking what is the best API to implement such feature,
we need to considure the following (in no specific order):
1. API breakage.
2. Simplicity.
3. Performance.
4. HW capabilities.
5. rte_flow limitation.
6. Flexibility.
First option: Add integrity flags to each of the items.
For example add checksum_ok to ipv4 item.
Pros:
1. No new rte_flow item.
2. Simple in the way that on each item the app can see
what checks are available.
Cons:
1. API breakage.
2. increase number of flows, since app can't add global rule and
must have dedicated flow for each of the flow combinations, for example
matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
result in 5 flows.
Second option: dedicated item
Pros:
1. No API breakage, and there will be no for some time due to having
extra space. (by using bits)
2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
IPv6.
3. Simplicity application can just look at one place to see all possible
checks.
4. Allow future support for more tests.
Cons:
1. New item, that holds number of fields from different items.
For starter the following bits are suggested:
1. packet_ok - means that all HW checks depending on packet layer have
passed. This may mean that in some HW such flow should be splited to
number of flows or fail.
2. l2_ok - all check for layer 2 have passed.
3. l3_ok - all check for layer 3 have passed. If packet doesn't have
l3 layer this check should fail.
4. l4_ok - all check for layer 4 have passed. If packet doesn't
have l4 layer this check should fail.
5. l2_crc_ok - the layer 2 crc is O.K.
6. ipv4_csum_ok - IPv4 checksum is O.K. it is possible that the
IPv4 checksum will be O.K. but the l3_ok will be 0. it is not
possible that checksum will be 0 and the l3_ok will be 1.
7. l4_csum_ok - layer 4 checksum is O.K.
8. l3_len_OK - check that the reported layer 3 len is smaller than the
frame len.
Example of usage:
1. check packets from all possible layers for integrity.
flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
2. Check only packet with layer 4 (UDP / TCP)
flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
Signed-off-by: Ori Kam <orika@nvidia.com>
---
doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++
doc/guides/rel_notes/release_21_05.rst | 5 +++
lib/librte_ethdev/rte_flow.h | 48 ++++++++++++++++++++++++++
3 files changed, 72 insertions(+)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index e1b93ecedf..4b8723b84c 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1398,6 +1398,25 @@ Matches a eCPRI header.
- ``hdr``: eCPRI header definition (``rte_ecpri.h``).
- Default ``mask`` matches nothing, for all eCPRI messages.
+Item: ``PACKET_INTEGRITY_CHECKS``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Matches packet integrity.
+Some devices require pre-enabling for this item before using it.
+
+- ``level``: the encapsulation level that should be checked. level 0 means the
+ default PMD mode (Can be inner most / outermost). value of 1 means outermost
+ and higher value means inner header. See also RSS level.
+- ``packet_ok``: All HW packet integrity checks have passed based on the max
+ layer of the packet.
+- ``l2_ok``: all layer 2 HW integrity checks passed.
+- ``l3_ok``: all layer 3 HW integrity checks passed.
+- ``l4_ok``: all layer 4 HW integrity checks passed.
+- ``l2_crc_ok``: layer 2 crc check passed.
+- ``ipv4_csum_ok``: ipv4 checksum check passed.
+- ``l4_csum_ok``: layer 4 checksum check passed.
+- ``l3_len_ok``: the layer 3 len is smaller than the frame len.
+
Actions
~~~~~~~
diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
index a0b907994a..986f749384 100644
--- a/doc/guides/rel_notes/release_21_05.rst
+++ b/doc/guides/rel_notes/release_21_05.rst
@@ -168,6 +168,11 @@ New Features
the events across multiple stages.
* This also reduced the scheduling overhead on a event device.
+* **Added packet integrity match to RTE flow rules.**
+
+ * Added ``PACKET_INTEGRITY_CHECKS`` flow item.
+ * Added ``rte_flow_item_integrity`` data structure.
+
* **Updated testpmd.**
* Added a command line option to configure forced speed for Ethernet port.
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index c476a0f59d..aa50a8d2bf 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -551,6 +551,16 @@ enum rte_flow_item_type {
* See struct rte_flow_item_geneve_opt
*/
RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
+
+ /**
+ * [META]
+ *
+ * Matches on packet integrity.
+ * Some devices require pre-enabling for this item before using it.
+ *
+ * See struct rte_flow_item_integrity.
+ */
+ RTE_FLOW_ITEM_TYPE_INTEGRITY,
};
/**
@@ -1685,6 +1695,44 @@ rte_flow_item_geneve_opt_mask = {
};
#endif
+__extension__
+struct rte_flow_item_integrity {
+ uint32_t level;
+ /**< Packet encapsulation level the item should apply to.
+ * @see rte_flow_action_rss
+ */
+ union {
+ struct {
+ uint64_t packet_ok:1;
+ /** The packet is valid after passing all HW checks. */
+ uint64_t l2_ok:1;
+ /**< L2 layer is valid after passing all HW checks. */
+ uint64_t l3_ok:1;
+ /**< L3 layer is valid after passing all HW checks. */
+ uint64_t l4_ok:1;
+ /**< L4 layer is valid after passing all HW checks. */
+ uint64_t l2_crc_ok:1;
+ /**< L2 layer crc is valid. */
+ uint64_t ipv4_csum_ok:1;
+ /**< IPv4 layer checksum is valid. */
+ uint64_t l4_csum_ok:1;
+ /**< L4 layer checksum is valid. */
+ uint64_t l3_len_ok:1;
+ /**< The l3 len is smaller than the frame len. */
+ uint64_t reserved:56;
+ };
+ uint64_t value;
+ };
+};
+
+#ifndef __cplusplus
+static const struct rte_flow_item_integrity
+rte_flow_item_integrity_mask = {
+ .level = 0,
+ .value = 0,
+};
+#endif
+
/**
* Matching pattern item definition.
*
--
2.25.1
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v4 2/2] app/testpmd: add support for integrity item
2021-04-14 12:56 ` [dpdk-dev] [PATCH v4 0/2] add packet integrity checks Gregory Etelson
2021-04-14 12:56 ` [dpdk-dev] [PATCH v4 1/2] ethdev: " Gregory Etelson
@ 2021-04-14 12:57 ` Gregory Etelson
1 sibling, 0 replies; 68+ messages in thread
From: Gregory Etelson @ 2021-04-14 12:57 UTC (permalink / raw)
To: orika
Cc: ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
jerinjacobk, olivier.matz, thomas, viacheslavo, getelson, matan,
rasland, Xiaoyun Li
From: Ori Kam <orika@nvidia.com>
The integrity item allows the application to match
on the integrity of a packet.
use example:
match that packet integrity checks are ok. The checks depend on
packet layers. For example ICMP packet will not check L4 level.
flow create 0 ingress pattern integrity value mask 0x01 value spec 0x01
match that L4 packet is ok - check L2 & L3 & L4 layers:
flow create 0 ingress pattern integrity value mask 0xfe value spec 0xfe
Signed-off-by: Ori Kam <orika@nvidia.com>
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
---
app/test-pmd/cmdline_flow.c | 39 +++++++++++++++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 28 +++++++++++++++
2 files changed, 67 insertions(+)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index fb7a3a8bd3..b5dec34325 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -289,6 +289,9 @@ enum index {
ITEM_GENEVE_OPT_TYPE,
ITEM_GENEVE_OPT_LENGTH,
ITEM_GENEVE_OPT_DATA,
+ ITEM_INTEGRITY,
+ ITEM_INTEGRITY_LEVEL,
+ ITEM_INTEGRITY_VALUE,
/* Validate/create actions. */
ACTIONS,
@@ -956,6 +959,7 @@ static const enum index next_item[] = {
ITEM_PFCP,
ITEM_ECPRI,
ITEM_GENEVE_OPT,
+ ITEM_INTEGRITY,
END_SET,
ZERO,
};
@@ -1307,6 +1311,19 @@ static const enum index item_geneve_opt[] = {
ZERO,
};
+static const enum index item_integrity[] = {
+ ITEM_INTEGRITY_LEVEL,
+ ITEM_INTEGRITY_VALUE,
+ ZERO,
+};
+
+static const enum index item_integrity_lv[] = {
+ ITEM_INTEGRITY_LEVEL,
+ ITEM_INTEGRITY_VALUE,
+ ITEM_NEXT,
+ ZERO,
+};
+
static const enum index next_action[] = {
ACTION_END,
ACTION_VOID,
@@ -3373,6 +3390,28 @@ static const struct token token_list[] = {
(sizeof(struct rte_flow_item_geneve_opt),
ITEM_GENEVE_OPT_DATA_SIZE)),
},
+ [ITEM_INTEGRITY] = {
+ .name = "integrity",
+ .help = "match packet integrity",
+ .priv = PRIV_ITEM(INTEGRITY,
+ sizeof(struct rte_flow_item_integrity)),
+ .next = NEXT(item_integrity),
+ .call = parse_vc,
+ },
+ [ITEM_INTEGRITY_LEVEL] = {
+ .name = "level",
+ .help = "integrity level",
+ .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity, level)),
+ },
+ [ITEM_INTEGRITY_VALUE] = {
+ .name = "value",
+ .help = "integrity value",
+ .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity, value)),
+ },
/* Validate/create actions. */
[ACTIONS] = {
.name = "actions",
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 36f0a328a5..58d4d712ab 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3783,6 +3783,13 @@ This section lists supported pattern items and their attributes, if any.
- ``s_field {unsigned}``: S field.
- ``seid {unsigned}``: session endpoint identifier.
+- ``integrity``: match packet integrity.
+
+ - ``level {unsigned}``: Packet encapsulation level the item should
+ apply to. See rte_flow_action_rss for details.
+ - ``value {unsigned}``: A bitmask that specify what packet elements
+ must be matched for integrity.
+
Actions list
^^^^^^^^^^^^
@@ -4917,6 +4924,27 @@ NVGRE encapsulation header and sent to port id 0.
testpmd> flow create 0 ingress transfer pattern eth / end actions
sample ratio 1 index 0 / port_id id 2 / end
+Sample integrity rules
+~~~~~~~~~~~~~~~~~~~~~~
+
+Integrity rules can be created by the following commands:
+
+Integrity rule that forwards valid TCP packets to group 1.
+TCP packet integrity is matched with the ``l4_ok`` bit 3.
+
+::
+
+ testpmd> flow create 0 ingress
+ pattern eth / ipv4 / tcp / integrity value mask 8 value spec 8 / end
+ actions jump group 1 / end
+
+Integrity rule that forwards invalid packets to application.
+General packet integrity is matched with the ``packet_ok`` bit 0.
+
+::
+
+ testpmd> flow create 0 ingress pattern integrity value mask 1 value spec 0 / end actions queue index 0 / end
+
BPF Functions
--------------
--
2.25.1
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/2] ethdev: add packet integrity checks
2021-04-14 12:56 ` [dpdk-dev] [PATCH v4 1/2] ethdev: " Gregory Etelson
@ 2021-04-14 13:27 ` Ferruh Yigit
2021-04-14 13:31 ` Ferruh Yigit
0 siblings, 1 reply; 68+ messages in thread
From: Ferruh Yigit @ 2021-04-14 13:27 UTC (permalink / raw)
To: Gregory Etelson, orika
Cc: ajit.khaparde, andrew.rybchenko, dev, jerinj, jerinjacobk,
olivier.matz, thomas, viacheslavo, matan, rasland
On 4/14/2021 1:56 PM, Gregory Etelson wrote:
> From: Ori Kam <orika@nvidia.com>
>
> Currently, DPDK application can offload the checksum check,
> and report it in the mbuf.
>
> However, as more and more applications are offloading some or all
> logic and action to the HW, there is a need to check the packet
> integrity so the right decision can be taken.
>
> The application logic can be positive meaning if the packet is
> valid jump / do actions, or negative if packet is not valid
> jump to SW / do actions (like drop) a, and add default flow
> (match all in low priority) that will direct the miss packet
> to the miss path.
>
> Since currently rte_flow works in positive way the assumption is
> that the positive way will be the common way in this case also.
>
> When thinking what is the best API to implement such feature,
> we need to considure the following (in no specific order):
> 1. API breakage.
> 2. Simplicity.
> 3. Performance.
> 4. HW capabilities.
> 5. rte_flow limitation.
> 6. Flexibility.
>
> First option: Add integrity flags to each of the items.
> For example add checksum_ok to ipv4 item.
>
> Pros:
> 1. No new rte_flow item.
> 2. Simple in the way that on each item the app can see
> what checks are available.
>
> Cons:
> 1. API breakage.
> 2. increase number of flows, since app can't add global rule and
> must have dedicated flow for each of the flow combinations, for example
> matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
> result in 5 flows.
>
> Second option: dedicated item
>
> Pros:
> 1. No API breakage, and there will be no for some time due to having
> extra space. (by using bits)
> 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
> IPv6.
> 3. Simplicity application can just look at one place to see all possible
> checks.
> 4. Allow future support for more tests.
>
> Cons:
> 1. New item, that holds number of fields from different items.
>
> For starter the following bits are suggested:
> 1. packet_ok - means that all HW checks depending on packet layer have
> passed. This may mean that in some HW such flow should be splited to
> number of flows or fail.
> 2. l2_ok - all check for layer 2 have passed.
> 3. l3_ok - all check for layer 3 have passed. If packet doesn't have
> l3 layer this check should fail.
> 4. l4_ok - all check for layer 4 have passed. If packet doesn't
> have l4 layer this check should fail.
> 5. l2_crc_ok - the layer 2 crc is O.K.
> 6. ipv4_csum_ok - IPv4 checksum is O.K. it is possible that the
> IPv4 checksum will be O.K. but the l3_ok will be 0. it is not
> possible that checksum will be 0 and the l3_ok will be 1.
> 7. l4_csum_ok - layer 4 checksum is O.K.
> 8. l3_len_OK - check that the reported layer 3 len is smaller than the
> frame len.
>
> Example of usage:
> 1. check packets from all possible layers for integrity.
> flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
>
> 2. Check only packet with layer 4 (UDP / TCP)
> flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
>
> Signed-off-by: Ori Kam <orika@nvidia.com>
> ---
> doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++
> doc/guides/rel_notes/release_21_05.rst | 5 +++
> lib/librte_ethdev/rte_flow.h | 48 ++++++++++++++++++++++++++
> 3 files changed, 72 insertions(+)
>
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index e1b93ecedf..4b8723b84c 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -1398,6 +1398,25 @@ Matches a eCPRI header.
> - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
> - Default ``mask`` matches nothing, for all eCPRI messages.
>
> +Item: ``PACKET_INTEGRITY_CHECKS``
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +Matches packet integrity.
> +Some devices require pre-enabling for this item before using it.
> +
"pre-enabling" may not be clear enough, what about updating it slightly:
"Some devices require enabling integration checks in HW before using this flow
item."
For the record, the intention here is to highlight that if the requested
integration check is not enabled in HW, creating flow rule will fail.
Application may need to enable the integration check in HW first.
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v4 1/2] ethdev: add packet integrity checks
2021-04-14 13:27 ` Ferruh Yigit
@ 2021-04-14 13:31 ` Ferruh Yigit
0 siblings, 0 replies; 68+ messages in thread
From: Ferruh Yigit @ 2021-04-14 13:31 UTC (permalink / raw)
To: Gregory Etelson, orika
Cc: ajit.khaparde, andrew.rybchenko, dev, jerinj, jerinjacobk,
olivier.matz, thomas, viacheslavo, matan, rasland
On 4/14/2021 2:27 PM, Ferruh Yigit wrote:
> On 4/14/2021 1:56 PM, Gregory Etelson wrote:
>> From: Ori Kam <orika@nvidia.com>
>>
>> Currently, DPDK application can offload the checksum check,
>> and report it in the mbuf.
>>
>> However, as more and more applications are offloading some or all
>> logic and action to the HW, there is a need to check the packet
>> integrity so the right decision can be taken.
>>
>> The application logic can be positive meaning if the packet is
>> valid jump / do actions, or negative if packet is not valid
>> jump to SW / do actions (like drop) a, and add default flow
>> (match all in low priority) that will direct the miss packet
>> to the miss path.
>>
>> Since currently rte_flow works in positive way the assumption is
>> that the positive way will be the common way in this case also.
>>
>> When thinking what is the best API to implement such feature,
>> we need to considure the following (in no specific order):
>> 1. API breakage.
>> 2. Simplicity.
>> 3. Performance.
>> 4. HW capabilities.
>> 5. rte_flow limitation.
>> 6. Flexibility.
>>
>> First option: Add integrity flags to each of the items.
>> For example add checksum_ok to ipv4 item.
>>
>> Pros:
>> 1. No new rte_flow item.
>> 2. Simple in the way that on each item the app can see
>> what checks are available.
>>
>> Cons:
>> 1. API breakage.
>> 2. increase number of flows, since app can't add global rule and
>> must have dedicated flow for each of the flow combinations, for example
>> matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
>> result in 5 flows.
>>
>> Second option: dedicated item
>>
>> Pros:
>> 1. No API breakage, and there will be no for some time due to having
>> extra space. (by using bits)
>> 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
>> IPv6.
>> 3. Simplicity application can just look at one place to see all possible
>> checks.
>> 4. Allow future support for more tests.
>>
>> Cons:
>> 1. New item, that holds number of fields from different items.
>>
>> For starter the following bits are suggested:
>> 1. packet_ok - means that all HW checks depending on packet layer have
>> passed. This may mean that in some HW such flow should be splited to
>> number of flows or fail.
>> 2. l2_ok - all check for layer 2 have passed.
>> 3. l3_ok - all check for layer 3 have passed. If packet doesn't have
>> l3 layer this check should fail.
>> 4. l4_ok - all check for layer 4 have passed. If packet doesn't
>> have l4 layer this check should fail.
>> 5. l2_crc_ok - the layer 2 crc is O.K.
>> 6. ipv4_csum_ok - IPv4 checksum is O.K. it is possible that the
>> IPv4 checksum will be O.K. but the l3_ok will be 0. it is not
>> possible that checksum will be 0 and the l3_ok will be 1.
>> 7. l4_csum_ok - layer 4 checksum is O.K.
>> 8. l3_len_OK - check that the reported layer 3 len is smaller than the
>> frame len.
>>
>> Example of usage:
>> 1. check packets from all possible layers for integrity.
>> flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
>>
>> 2. Check only packet with layer 4 (UDP / TCP)
>> flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
>>
>> Signed-off-by: Ori Kam <orika@nvidia.com>
>> ---
>> doc/guides/prog_guide/rte_flow.rst | 19 ++++++++++
>> doc/guides/rel_notes/release_21_05.rst | 5 +++
>> lib/librte_ethdev/rte_flow.h | 48 ++++++++++++++++++++++++++
>> 3 files changed, 72 insertions(+)
>>
>> diff --git a/doc/guides/prog_guide/rte_flow.rst
>> b/doc/guides/prog_guide/rte_flow.rst
>> index e1b93ecedf..4b8723b84c 100644
>> --- a/doc/guides/prog_guide/rte_flow.rst
>> +++ b/doc/guides/prog_guide/rte_flow.rst
>> @@ -1398,6 +1398,25 @@ Matches a eCPRI header.
>> - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
>> - Default ``mask`` matches nothing, for all eCPRI messages.
>> +Item: ``PACKET_INTEGRITY_CHECKS``
>> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>> +
>> +Matches packet integrity.
>> +Some devices require pre-enabling for this item before using it.
>> +
>
> "pre-enabling" may not be clear enough, what about updating it slightly:
>
> "Some devices require enabling integration checks in HW before using this flow
> item."
>
Indeed even with above it is not clear who should do the enabling, PMD or
application, let me try again:
"For some devices application needs to enable integration checks in HW before
using this flow item."
> For the record, the intention here is to highlight that if the requested
> integration check is not enabled in HW, creating flow rule will fail.
> Application may need to enable the integration check in HW first.
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v5 0/2] add packet integrity checks
2021-04-07 10:32 ` Ori Kam
` (2 preceding siblings ...)
2021-04-14 12:56 ` [dpdk-dev] [PATCH v4 0/2] add packet integrity checks Gregory Etelson
@ 2021-04-14 16:09 ` Gregory Etelson
2021-04-14 16:09 ` [dpdk-dev] [PATCH v5 1/2] ethdev: " Gregory Etelson
` (2 more replies)
2021-04-18 15:51 ` [dpdk-dev] [PATCH v6 " Gregory Etelson
` (2 subsequent siblings)
6 siblings, 3 replies; 68+ messages in thread
From: Gregory Etelson @ 2021-04-14 16:09 UTC (permalink / raw)
To: orika
Cc: ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
jerinjacobk, olivier.matz, thomas, viacheslavo, getelson, matan,
rasland
v5: update API documentation
Ori Kam (2):
ethdev: add packet integrity checks
app/testpmd: add support for integrity item
app/test-pmd/cmdline_flow.c | 39 ++++++++++++++++
doc/guides/prog_guide/rte_flow.rst | 20 +++++++++
doc/guides/rel_notes/release_21_05.rst | 5 +++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 28 ++++++++++++
lib/librte_ethdev/rte_flow.h | 49 +++++++++++++++++++++
5 files changed, 141 insertions(+)
--
2.25.1
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v5 1/2] ethdev: add packet integrity checks
2021-04-14 16:09 ` [dpdk-dev] [PATCH v5 0/2] add packet integrity checks Gregory Etelson
@ 2021-04-14 16:09 ` Gregory Etelson
2021-04-14 17:24 ` Ajit Khaparde
2021-04-15 16:46 ` Thomas Monjalon
2021-04-14 16:09 ` [dpdk-dev] [PATCH v5 2/2] app/testpmd: add support for integrity item Gregory Etelson
2021-04-14 16:26 ` [dpdk-dev] [PATCH v5 0/2] add packet integrity checks Ferruh Yigit
2 siblings, 2 replies; 68+ messages in thread
From: Gregory Etelson @ 2021-04-14 16:09 UTC (permalink / raw)
To: orika
Cc: ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
jerinjacobk, olivier.matz, thomas, viacheslavo, getelson, matan,
rasland
From: Ori Kam <orika@nvidia.com>
Currently, DPDK application can offload the checksum check,
and report it in the mbuf.
However, as more and more applications are offloading some or all
logic and action to the HW, there is a need to check the packet
integrity so the right decision can be taken.
The application logic can be positive meaning if the packet is
valid jump / do actions, or negative if packet is not valid
jump to SW / do actions (like drop) a, and add default flow
(match all in low priority) that will direct the miss packet
to the miss path.
Since currently rte_flow works in positive way the assumption is
that the positive way will be the common way in this case also.
When thinking what is the best API to implement such feature,
we need to considure the following (in no specific order):
1. API breakage.
2. Simplicity.
3. Performance.
4. HW capabilities.
5. rte_flow limitation.
6. Flexibility.
First option: Add integrity flags to each of the items.
For example add checksum_ok to ipv4 item.
Pros:
1. No new rte_flow item.
2. Simple in the way that on each item the app can see
what checks are available.
Cons:
1. API breakage.
2. increase number of flows, since app can't add global rule and
must have dedicated flow for each of the flow combinations, for example
matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
result in 5 flows.
Second option: dedicated item
Pros:
1. No API breakage, and there will be no for some time due to having
extra space. (by using bits)
2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
IPv6.
3. Simplicity application can just look at one place to see all possible
checks.
4. Allow future support for more tests.
Cons:
1. New item, that holds number of fields from different items.
For starter the following bits are suggested:
1. packet_ok - means that all HW checks depending on packet layer have
passed. This may mean that in some HW such flow should be splited to
number of flows or fail.
2. l2_ok - all check for layer 2 have passed.
3. l3_ok - all check for layer 3 have passed. If packet doesn't have
l3 layer this check should fail.
4. l4_ok - all check for layer 4 have passed. If packet doesn't
have l4 layer this check should fail.
5. l2_crc_ok - the layer 2 crc is O.K.
6. ipv4_csum_ok - IPv4 checksum is O.K. it is possible that the
IPv4 checksum will be O.K. but the l3_ok will be 0. it is not
possible that checksum will be 0 and the l3_ok will be 1.
7. l4_csum_ok - layer 4 checksum is O.K.
8. l3_len_OK - check that the reported layer 3 len is smaller than the
frame len.
Example of usage:
1. check packets from all possible layers for integrity.
flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
2. Check only packet with layer 4 (UDP / TCP)
flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
Signed-off-by: Ori Kam <orika@nvidia.com>
---
doc/guides/prog_guide/rte_flow.rst | 20 +++++++++++
doc/guides/rel_notes/release_21_05.rst | 5 +++
lib/librte_ethdev/rte_flow.h | 49 ++++++++++++++++++++++++++
3 files changed, 74 insertions(+)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index e1b93ecedf..1dd2301a07 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1398,6 +1398,26 @@ Matches a eCPRI header.
- ``hdr``: eCPRI header definition (``rte_ecpri.h``).
- Default ``mask`` matches nothing, for all eCPRI messages.
+Item: ``PACKET_INTEGRITY_CHECKS``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Matches packet integrity.
+For some devices application needs to enable integration checks in HW
+before using this item.
+
+- ``level``: the encapsulation level that should be checked. level 0 means the
+ default PMD mode (Can be inner most / outermost). value of 1 means outermost
+ and higher value means inner header. See also RSS level.
+- ``packet_ok``: All HW packet integrity checks have passed based on the max
+ layer of the packet.
+- ``l2_ok``: all layer 2 HW integrity checks passed.
+- ``l3_ok``: all layer 3 HW integrity checks passed.
+- ``l4_ok``: all layer 4 HW integrity checks passed.
+- ``l2_crc_ok``: layer 2 crc check passed.
+- ``ipv4_csum_ok``: ipv4 checksum check passed.
+- ``l4_csum_ok``: layer 4 checksum check passed.
+- ``l3_len_ok``: the layer 3 len is smaller than the frame len.
+
Actions
~~~~~~~
diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
index a0b907994a..986f749384 100644
--- a/doc/guides/rel_notes/release_21_05.rst
+++ b/doc/guides/rel_notes/release_21_05.rst
@@ -168,6 +168,11 @@ New Features
the events across multiple stages.
* This also reduced the scheduling overhead on a event device.
+* **Added packet integrity match to RTE flow rules.**
+
+ * Added ``PACKET_INTEGRITY_CHECKS`` flow item.
+ * Added ``rte_flow_item_integrity`` data structure.
+
* **Updated testpmd.**
* Added a command line option to configure forced speed for Ethernet port.
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index c476a0f59d..446ff48140 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -551,6 +551,17 @@ enum rte_flow_item_type {
* See struct rte_flow_item_geneve_opt
*/
RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
+
+ /**
+ * [META]
+ *
+ * Matches on packet integrity.
+ * For some devices application needs to enable integration checks in HW
+ * before using this item.
+ *
+ * See struct rte_flow_item_integrity.
+ */
+ RTE_FLOW_ITEM_TYPE_INTEGRITY,
};
/**
@@ -1685,6 +1696,44 @@ rte_flow_item_geneve_opt_mask = {
};
#endif
+__extension__
+struct rte_flow_item_integrity {
+ uint32_t level;
+ /**< Packet encapsulation level the item should apply to.
+ * @see rte_flow_action_rss
+ */
+ union {
+ struct {
+ uint64_t packet_ok:1;
+ /** The packet is valid after passing all HW checks. */
+ uint64_t l2_ok:1;
+ /**< L2 layer is valid after passing all HW checks. */
+ uint64_t l3_ok:1;
+ /**< L3 layer is valid after passing all HW checks. */
+ uint64_t l4_ok:1;
+ /**< L4 layer is valid after passing all HW checks. */
+ uint64_t l2_crc_ok:1;
+ /**< L2 layer crc is valid. */
+ uint64_t ipv4_csum_ok:1;
+ /**< IPv4 layer checksum is valid. */
+ uint64_t l4_csum_ok:1;
+ /**< L4 layer checksum is valid. */
+ uint64_t l3_len_ok:1;
+ /**< The l3 len is smaller than the frame len. */
+ uint64_t reserved:56;
+ };
+ uint64_t value;
+ };
+};
+
+#ifndef __cplusplus
+static const struct rte_flow_item_integrity
+rte_flow_item_integrity_mask = {
+ .level = 0,
+ .value = 0,
+};
+#endif
+
/**
* Matching pattern item definition.
*
--
2.25.1
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v5 2/2] app/testpmd: add support for integrity item
2021-04-14 16:09 ` [dpdk-dev] [PATCH v5 0/2] add packet integrity checks Gregory Etelson
2021-04-14 16:09 ` [dpdk-dev] [PATCH v5 1/2] ethdev: " Gregory Etelson
@ 2021-04-14 16:09 ` Gregory Etelson
2021-04-14 16:26 ` [dpdk-dev] [PATCH v5 0/2] add packet integrity checks Ferruh Yigit
2 siblings, 0 replies; 68+ messages in thread
From: Gregory Etelson @ 2021-04-14 16:09 UTC (permalink / raw)
To: orika
Cc: ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
jerinjacobk, olivier.matz, thomas, viacheslavo, getelson, matan,
rasland, Xiaoyun Li
From: Ori Kam <orika@nvidia.com>
The integrity item allows the application to match
on the integrity of a packet.
use example:
match that packet integrity checks are ok. The checks depend on
packet layers. For example ICMP packet will not check L4 level.
flow create 0 ingress pattern integrity value mask 0x01 value spec 0x01
match that L4 packet is ok - check L2 & L3 & L4 layers:
flow create 0 ingress pattern integrity value mask 0xfe value spec 0xfe
Signed-off-by: Ori Kam <orika@nvidia.com>
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
---
app/test-pmd/cmdline_flow.c | 39 +++++++++++++++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 28 +++++++++++++++
2 files changed, 67 insertions(+)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index fb7a3a8bd3..b5dec34325 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -289,6 +289,9 @@ enum index {
ITEM_GENEVE_OPT_TYPE,
ITEM_GENEVE_OPT_LENGTH,
ITEM_GENEVE_OPT_DATA,
+ ITEM_INTEGRITY,
+ ITEM_INTEGRITY_LEVEL,
+ ITEM_INTEGRITY_VALUE,
/* Validate/create actions. */
ACTIONS,
@@ -956,6 +959,7 @@ static const enum index next_item[] = {
ITEM_PFCP,
ITEM_ECPRI,
ITEM_GENEVE_OPT,
+ ITEM_INTEGRITY,
END_SET,
ZERO,
};
@@ -1307,6 +1311,19 @@ static const enum index item_geneve_opt[] = {
ZERO,
};
+static const enum index item_integrity[] = {
+ ITEM_INTEGRITY_LEVEL,
+ ITEM_INTEGRITY_VALUE,
+ ZERO,
+};
+
+static const enum index item_integrity_lv[] = {
+ ITEM_INTEGRITY_LEVEL,
+ ITEM_INTEGRITY_VALUE,
+ ITEM_NEXT,
+ ZERO,
+};
+
static const enum index next_action[] = {
ACTION_END,
ACTION_VOID,
@@ -3373,6 +3390,28 @@ static const struct token token_list[] = {
(sizeof(struct rte_flow_item_geneve_opt),
ITEM_GENEVE_OPT_DATA_SIZE)),
},
+ [ITEM_INTEGRITY] = {
+ .name = "integrity",
+ .help = "match packet integrity",
+ .priv = PRIV_ITEM(INTEGRITY,
+ sizeof(struct rte_flow_item_integrity)),
+ .next = NEXT(item_integrity),
+ .call = parse_vc,
+ },
+ [ITEM_INTEGRITY_LEVEL] = {
+ .name = "level",
+ .help = "integrity level",
+ .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity, level)),
+ },
+ [ITEM_INTEGRITY_VALUE] = {
+ .name = "value",
+ .help = "integrity value",
+ .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity, value)),
+ },
/* Validate/create actions. */
[ACTIONS] = {
.name = "actions",
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 36f0a328a5..58d4d712ab 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3783,6 +3783,13 @@ This section lists supported pattern items and their attributes, if any.
- ``s_field {unsigned}``: S field.
- ``seid {unsigned}``: session endpoint identifier.
+- ``integrity``: match packet integrity.
+
+ - ``level {unsigned}``: Packet encapsulation level the item should
+ apply to. See rte_flow_action_rss for details.
+ - ``value {unsigned}``: A bitmask that specify what packet elements
+ must be matched for integrity.
+
Actions list
^^^^^^^^^^^^
@@ -4917,6 +4924,27 @@ NVGRE encapsulation header and sent to port id 0.
testpmd> flow create 0 ingress transfer pattern eth / end actions
sample ratio 1 index 0 / port_id id 2 / end
+Sample integrity rules
+~~~~~~~~~~~~~~~~~~~~~~
+
+Integrity rules can be created by the following commands:
+
+Integrity rule that forwards valid TCP packets to group 1.
+TCP packet integrity is matched with the ``l4_ok`` bit 3.
+
+::
+
+ testpmd> flow create 0 ingress
+ pattern eth / ipv4 / tcp / integrity value mask 8 value spec 8 / end
+ actions jump group 1 / end
+
+Integrity rule that forwards invalid packets to application.
+General packet integrity is matched with the ``packet_ok`` bit 0.
+
+::
+
+ testpmd> flow create 0 ingress pattern integrity value mask 1 value spec 0 / end actions queue index 0 / end
+
BPF Functions
--------------
--
2.25.1
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v5 0/2] add packet integrity checks
2021-04-14 16:09 ` [dpdk-dev] [PATCH v5 0/2] add packet integrity checks Gregory Etelson
2021-04-14 16:09 ` [dpdk-dev] [PATCH v5 1/2] ethdev: " Gregory Etelson
2021-04-14 16:09 ` [dpdk-dev] [PATCH v5 2/2] app/testpmd: add support for integrity item Gregory Etelson
@ 2021-04-14 16:26 ` Ferruh Yigit
2 siblings, 0 replies; 68+ messages in thread
From: Ferruh Yigit @ 2021-04-14 16:26 UTC (permalink / raw)
To: Gregory Etelson, orika
Cc: ajit.khaparde, andrew.rybchenko, dev, jerinj, jerinjacobk,
olivier.matz, thomas, viacheslavo, matan, rasland
On 4/14/2021 5:09 PM, Gregory Etelson wrote:
> v5: update API documentation
>
> Ori Kam (2):
> ethdev: add packet integrity checks
> app/testpmd: add support for integrity item
>
For series,
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/2] ethdev: add packet integrity checks
2021-04-14 16:09 ` [dpdk-dev] [PATCH v5 1/2] ethdev: " Gregory Etelson
@ 2021-04-14 17:24 ` Ajit Khaparde
2021-04-15 15:10 ` Ori Kam
2021-04-15 16:46 ` Thomas Monjalon
1 sibling, 1 reply; 68+ messages in thread
From: Ajit Khaparde @ 2021-04-14 17:24 UTC (permalink / raw)
To: Gregory Etelson
Cc: Ori Kam, Andrew Rybchenko, dpdk-dev, Ferruh Yigit,
Jerin Jacob Kollanukkaran, Jerin Jacob, Olivier Matz,
Thomas Monjalon, Slava Ovsiienko, Matan Azrad, Raslan Darawsheh
[-- Attachment #1: Type: text/plain, Size: 7820 bytes --]
On Wed, Apr 14, 2021 at 9:10 AM Gregory Etelson <getelson@nvidia.com> wrote:
>
> From: Ori Kam <orika@nvidia.com>
>
> Currently, DPDK application can offload the checksum check,
> and report it in the mbuf.
>
> However, as more and more applications are offloading some or all
> logic and action to the HW, there is a need to check the packet
> integrity so the right decision can be taken.
>
> The application logic can be positive meaning if the packet is
> valid jump / do actions, or negative if packet is not valid
> jump to SW / do actions (like drop) a, and add default flow
> (match all in low priority) that will direct the miss packet
> to the miss path.
Unless I missed it,
How do you specify the negative case?
Can you provide an example as well?
>
>
> Since currently rte_flow works in positive way the assumption is
> that the positive way will be the common way in this case also.
>
> When thinking what is the best API to implement such feature,
> we need to considure the following (in no specific order):
> 1. API breakage.
> 2. Simplicity.
> 3. Performance.
> 4. HW capabilities.
> 5. rte_flow limitation.
> 6. Flexibility.
>
> First option: Add integrity flags to each of the items.
> For example add checksum_ok to ipv4 item.
>
> Pros:
> 1. No new rte_flow item.
> 2. Simple in the way that on each item the app can see
> what checks are available.
>
> Cons:
> 1. API breakage.
> 2. increase number of flows, since app can't add global rule and
> must have dedicated flow for each of the flow combinations, for example
> matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
> result in 5 flows.
>
> Second option: dedicated item
>
> Pros:
> 1. No API breakage, and there will be no for some time due to having
> extra space. (by using bits)
> 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
> IPv6.
> 3. Simplicity application can just look at one place to see all possible
> checks.
> 4. Allow future support for more tests.
>
> Cons:
> 1. New item, that holds number of fields from different items.
>
> For starter the following bits are suggested:
> 1. packet_ok - means that all HW checks depending on packet layer have
> passed. This may mean that in some HW such flow should be splited to
> number of flows or fail.
> 2. l2_ok - all check for layer 2 have passed.
> 3. l3_ok - all check for layer 3 have passed. If packet doesn't have
> l3 layer this check should fail.
> 4. l4_ok - all check for layer 4 have passed. If packet doesn't
> have l4 layer this check should fail.
> 5. l2_crc_ok - the layer 2 crc is O.K.
> 6. ipv4_csum_ok - IPv4 checksum is O.K. it is possible that the
> IPv4 checksum will be O.K. but the l3_ok will be 0. it is not
> possible that checksum will be 0 and the l3_ok will be 1.
> 7. l4_csum_ok - layer 4 checksum is O.K.
> 8. l3_len_OK - check that the reported layer 3 len is smaller than the
> frame len.
>
> Example of usage:
> 1. check packets from all possible layers for integrity.
> flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
>
> 2. Check only packet with layer 4 (UDP / TCP)
> flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
>
> Signed-off-by: Ori Kam <orika@nvidia.com>
> ---
> doc/guides/prog_guide/rte_flow.rst | 20 +++++++++++
> doc/guides/rel_notes/release_21_05.rst | 5 +++
> lib/librte_ethdev/rte_flow.h | 49 ++++++++++++++++++++++++++
> 3 files changed, 74 insertions(+)
>
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index e1b93ecedf..1dd2301a07 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -1398,6 +1398,26 @@ Matches a eCPRI header.
> - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
> - Default ``mask`` matches nothing, for all eCPRI messages.
>
> +Item: ``PACKET_INTEGRITY_CHECKS``
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +Matches packet integrity.
> +For some devices application needs to enable integration checks in HW
> +before using this item.
> +
> +- ``level``: the encapsulation level that should be checked. level 0 means the
> + default PMD mode (Can be inner most / outermost). value of 1 means outermost
> + and higher value means inner header. See also RSS level.
> +- ``packet_ok``: All HW packet integrity checks have passed based on the max
> + layer of the packet.
> +- ``l2_ok``: all layer 2 HW integrity checks passed.
> +- ``l3_ok``: all layer 3 HW integrity checks passed.
> +- ``l4_ok``: all layer 4 HW integrity checks passed.
> +- ``l2_crc_ok``: layer 2 crc check passed.
> +- ``ipv4_csum_ok``: ipv4 checksum check passed.
> +- ``l4_csum_ok``: layer 4 checksum check passed.
> +- ``l3_len_ok``: the layer 3 len is smaller than the frame len.
> +
> Actions
> ~~~~~~~
>
> diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
> index a0b907994a..986f749384 100644
> --- a/doc/guides/rel_notes/release_21_05.rst
> +++ b/doc/guides/rel_notes/release_21_05.rst
> @@ -168,6 +168,11 @@ New Features
> the events across multiple stages.
> * This also reduced the scheduling overhead on a event device.
>
> +* **Added packet integrity match to RTE flow rules.**
> +
> + * Added ``PACKET_INTEGRITY_CHECKS`` flow item.
> + * Added ``rte_flow_item_integrity`` data structure.
> +
> * **Updated testpmd.**
>
> * Added a command line option to configure forced speed for Ethernet port.
> diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> index c476a0f59d..446ff48140 100644
> --- a/lib/librte_ethdev/rte_flow.h
> +++ b/lib/librte_ethdev/rte_flow.h
> @@ -551,6 +551,17 @@ enum rte_flow_item_type {
> * See struct rte_flow_item_geneve_opt
> */
> RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> +
> + /**
> + * [META]
> + *
> + * Matches on packet integrity.
> + * For some devices application needs to enable integration checks in HW
> + * before using this item.
> + *
> + * See struct rte_flow_item_integrity.
> + */
> + RTE_FLOW_ITEM_TYPE_INTEGRITY,
> };
>
> /**
> @@ -1685,6 +1696,44 @@ rte_flow_item_geneve_opt_mask = {
> };
> #endif
>
> +__extension__
> +struct rte_flow_item_integrity {
> + uint32_t level;
> + /**< Packet encapsulation level the item should apply to.
> + * @see rte_flow_action_rss
> + */
> + union {
> + struct {
> + uint64_t packet_ok:1;
> + /** The packet is valid after passing all HW checks. */
> + uint64_t l2_ok:1;
> + /**< L2 layer is valid after passing all HW checks. */
> + uint64_t l3_ok:1;
> + /**< L3 layer is valid after passing all HW checks. */
> + uint64_t l4_ok:1;
> + /**< L4 layer is valid after passing all HW checks. */
> + uint64_t l2_crc_ok:1;
> + /**< L2 layer crc is valid. */
> + uint64_t ipv4_csum_ok:1;
> + /**< IPv4 layer checksum is valid. */
> + uint64_t l4_csum_ok:1;
> + /**< L4 layer checksum is valid. */
> + uint64_t l3_len_ok:1;
> + /**< The l3 len is smaller than the frame len. */
> + uint64_t reserved:56;
> + };
> + uint64_t value;
> + };
> +};
> +
> +#ifndef __cplusplus
> +static const struct rte_flow_item_integrity
> +rte_flow_item_integrity_mask = {
> + .level = 0,
> + .value = 0,
> +};
> +#endif
> +
> /**
> * Matching pattern item definition.
> *
> --
> 2.25.1
>
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/2] ethdev: add packet integrity checks
2021-04-14 17:24 ` Ajit Khaparde
@ 2021-04-15 15:10 ` Ori Kam
2021-04-15 15:25 ` Ajit Khaparde
0 siblings, 1 reply; 68+ messages in thread
From: Ori Kam @ 2021-04-15 15:10 UTC (permalink / raw)
To: Ajit Khaparde, Gregory Etelson
Cc: Andrew Rybchenko, dpdk-dev, Ferruh Yigit,
Jerin Jacob Kollanukkaran, Jerin Jacob, Olivier Matz,
NBU-Contact-Thomas Monjalon, Slava Ovsiienko, Matan Azrad,
Raslan Darawsheh
Hi Ajit,
> -----Original Message-----
> From: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Subject: Re: [PATCH v5 1/2] ethdev: add packet integrity checks
>
> On Wed, Apr 14, 2021 at 9:10 AM Gregory Etelson <getelson@nvidia.com>
> wrote:
> >
> > From: Ori Kam <orika@nvidia.com>
> >
> > Currently, DPDK application can offload the checksum check,
> > and report it in the mbuf.
> >
> > However, as more and more applications are offloading some or all
> > logic and action to the HW, there is a need to check the packet
> > integrity so the right decision can be taken.
> >
> > The application logic can be positive meaning if the packet is
> > valid jump / do actions, or negative if packet is not valid
> > jump to SW / do actions (like drop) a, and add default flow
> > (match all in low priority) that will direct the miss packet
> > to the miss path.
>
> Unless I missed it,
> How do you specify the negative case?
> Can you provide an example as well?
>
You can use negative case by setting the bit to zero and the mask bit to 1:
This example was taken from the testpmd patch:
flow create 0 ingress pattern integrity value mask 1 value spec 0 / end actions queue index 0 / end
it matches all invalid packets and forward it to the application.
> >
> >
> > Since currently rte_flow works in positive way the assumption is
> > that the positive way will be the common way in this case also.
> >
> > When thinking what is the best API to implement such feature,
> > we need to considure the following (in no specific order):
> > 1. API breakage.
> > 2. Simplicity.
> > 3. Performance.
> > 4. HW capabilities.
> > 5. rte_flow limitation.
> > 6. Flexibility.
> >
> > First option: Add integrity flags to each of the items.
> > For example add checksum_ok to ipv4 item.
> >
> > Pros:
> > 1. No new rte_flow item.
> > 2. Simple in the way that on each item the app can see
> > what checks are available.
> >
> > Cons:
> > 1. API breakage.
> > 2. increase number of flows, since app can't add global rule and
> > must have dedicated flow for each of the flow combinations, for example
> > matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
> > result in 5 flows.
> >
> > Second option: dedicated item
> >
> > Pros:
> > 1. No API breakage, and there will be no for some time due to having
> > extra space. (by using bits)
> > 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
> > IPv6.
> > 3. Simplicity application can just look at one place to see all possible
> > checks.
> > 4. Allow future support for more tests.
> >
> > Cons:
> > 1. New item, that holds number of fields from different items.
> >
> > For starter the following bits are suggested:
> > 1. packet_ok - means that all HW checks depending on packet layer have
> > passed. This may mean that in some HW such flow should be splited to
> > number of flows or fail.
> > 2. l2_ok - all check for layer 2 have passed.
> > 3. l3_ok - all check for layer 3 have passed. If packet doesn't have
> > l3 layer this check should fail.
> > 4. l4_ok - all check for layer 4 have passed. If packet doesn't
> > have l4 layer this check should fail.
> > 5. l2_crc_ok - the layer 2 crc is O.K.
> > 6. ipv4_csum_ok - IPv4 checksum is O.K. it is possible that the
> > IPv4 checksum will be O.K. but the l3_ok will be 0. it is not
> > possible that checksum will be 0 and the l3_ok will be 1.
> > 7. l4_csum_ok - layer 4 checksum is O.K.
> > 8. l3_len_OK - check that the reported layer 3 len is smaller than the
> > frame len.
> >
> > Example of usage:
> > 1. check packets from all possible layers for integrity.
> > flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
> >
> > 2. Check only packet with layer 4 (UDP / TCP)
> > flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
> >
> > Signed-off-by: Ori Kam <orika@nvidia.com>
> > ---
> > doc/guides/prog_guide/rte_flow.rst | 20 +++++++++++
> > doc/guides/rel_notes/release_21_05.rst | 5 +++
> > lib/librte_ethdev/rte_flow.h | 49 ++++++++++++++++++++++++++
> > 3 files changed, 74 insertions(+)
> >
> > diff --git a/doc/guides/prog_guide/rte_flow.rst
> b/doc/guides/prog_guide/rte_flow.rst
> > index e1b93ecedf..1dd2301a07 100644
> > --- a/doc/guides/prog_guide/rte_flow.rst
> > +++ b/doc/guides/prog_guide/rte_flow.rst
> > @@ -1398,6 +1398,26 @@ Matches a eCPRI header.
> > - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
> > - Default ``mask`` matches nothing, for all eCPRI messages.
> >
> > +Item: ``PACKET_INTEGRITY_CHECKS``
> > +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > +
> > +Matches packet integrity.
> > +For some devices application needs to enable integration checks in HW
> > +before using this item.
> > +
> > +- ``level``: the encapsulation level that should be checked. level 0 means
> the
> > + default PMD mode (Can be inner most / outermost). value of 1 means
> outermost
> > + and higher value means inner header. See also RSS level.
> > +- ``packet_ok``: All HW packet integrity checks have passed based on the
> max
> > + layer of the packet.
> > +- ``l2_ok``: all layer 2 HW integrity checks passed.
> > +- ``l3_ok``: all layer 3 HW integrity checks passed.
> > +- ``l4_ok``: all layer 4 HW integrity checks passed.
> > +- ``l2_crc_ok``: layer 2 crc check passed.
> > +- ``ipv4_csum_ok``: ipv4 checksum check passed.
> > +- ``l4_csum_ok``: layer 4 checksum check passed.
> > +- ``l3_len_ok``: the layer 3 len is smaller than the frame len.
> > +
> > Actions
> > ~~~~~~~
> >
> > diff --git a/doc/guides/rel_notes/release_21_05.rst
> b/doc/guides/rel_notes/release_21_05.rst
> > index a0b907994a..986f749384 100644
> > --- a/doc/guides/rel_notes/release_21_05.rst
> > +++ b/doc/guides/rel_notes/release_21_05.rst
> > @@ -168,6 +168,11 @@ New Features
> > the events across multiple stages.
> > * This also reduced the scheduling overhead on a event device.
> >
> > +* **Added packet integrity match to RTE flow rules.**
> > +
> > + * Added ``PACKET_INTEGRITY_CHECKS`` flow item.
> > + * Added ``rte_flow_item_integrity`` data structure.
> > +
> > * **Updated testpmd.**
> >
> > * Added a command line option to configure forced speed for Ethernet
> port.
> > diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
> > index c476a0f59d..446ff48140 100644
> > --- a/lib/librte_ethdev/rte_flow.h
> > +++ b/lib/librte_ethdev/rte_flow.h
> > @@ -551,6 +551,17 @@ enum rte_flow_item_type {
> > * See struct rte_flow_item_geneve_opt
> > */
> > RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> > +
> > + /**
> > + * [META]
> > + *
> > + * Matches on packet integrity.
> > + * For some devices application needs to enable integration checks in
> HW
> > + * before using this item.
> > + *
> > + * See struct rte_flow_item_integrity.
> > + */
> > + RTE_FLOW_ITEM_TYPE_INTEGRITY,
> > };
> >
> > /**
> > @@ -1685,6 +1696,44 @@ rte_flow_item_geneve_opt_mask = {
> > };
> > #endif
> >
> > +__extension__
> > +struct rte_flow_item_integrity {
> > + uint32_t level;
> > + /**< Packet encapsulation level the item should apply to.
> > + * @see rte_flow_action_rss
> > + */
> > + union {
> > + struct {
> > + uint64_t packet_ok:1;
> > + /** The packet is valid after passing all HW checks. */
> > + uint64_t l2_ok:1;
> > + /**< L2 layer is valid after passing all HW checks. */
> > + uint64_t l3_ok:1;
> > + /**< L3 layer is valid after passing all HW checks. */
> > + uint64_t l4_ok:1;
> > + /**< L4 layer is valid after passing all HW checks. */
> > + uint64_t l2_crc_ok:1;
> > + /**< L2 layer crc is valid. */
> > + uint64_t ipv4_csum_ok:1;
> > + /**< IPv4 layer checksum is valid. */
> > + uint64_t l4_csum_ok:1;
> > + /**< L4 layer checksum is valid. */
> > + uint64_t l3_len_ok:1;
> > + /**< The l3 len is smaller than the frame len. */
> > + uint64_t reserved:56;
> > + };
> > + uint64_t value;
> > + };
> > +};
> > +
> > +#ifndef __cplusplus
> > +static const struct rte_flow_item_integrity
> > +rte_flow_item_integrity_mask = {
> > + .level = 0,
> > + .value = 0,
> > +};
> > +#endif
> > +
> > /**
> > * Matching pattern item definition.
> > *
> > --
> > 2.25.1
> >
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/2] ethdev: add packet integrity checks
2021-04-15 15:10 ` Ori Kam
@ 2021-04-15 15:25 ` Ajit Khaparde
0 siblings, 0 replies; 68+ messages in thread
From: Ajit Khaparde @ 2021-04-15 15:25 UTC (permalink / raw)
To: Ori Kam
Cc: Gregory Etelson, Andrew Rybchenko, dpdk-dev, Ferruh Yigit,
Jerin Jacob Kollanukkaran, Jerin Jacob, Olivier Matz,
NBU-Contact-Thomas Monjalon, Slava Ovsiienko, Matan Azrad,
Raslan Darawsheh
[-- Attachment #1: Type: text/plain, Size: 1275 bytes --]
> > > However, as more and more applications are offloading some or all
> > > logic and action to the HW, there is a need to check the packet
> > > integrity so the right decision can be taken.
> > >
> > > The application logic can be positive meaning if the packet is
> > > valid jump / do actions, or negative if packet is not valid
> > > jump to SW / do actions (like drop) a, and add default flow
> > > (match all in low priority) that will direct the miss packet
> > > to the miss path.
> >
> > Unless I missed it,
> > How do you specify the negative case?
> > Can you provide an example as well?
> >
> You can use negative case by setting the bit to zero and the mask bit to 1:
> This example was taken from the testpmd patch:
> flow create 0 ingress pattern integrity value mask 1 value spec 0 / end actions queue index 0 / end
> it matches all invalid packets and forward it to the application.
Thanks Ori.
> > >
> > > Signed-off-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> > > ---
> > > doc/guides/prog_guide/rte_flow.rst | 20 +++++++++++
> > > doc/guides/rel_notes/release_21_05.rst | 5 +++
> > > lib/librte_ethdev/rte_flow.h | 49 ++++++++++++++++++++++++++
> > > 3 files changed, 74 insertions(+)
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/2] ethdev: add packet integrity checks
2021-04-14 16:09 ` [dpdk-dev] [PATCH v5 1/2] ethdev: " Gregory Etelson
2021-04-14 17:24 ` Ajit Khaparde
@ 2021-04-15 16:46 ` Thomas Monjalon
2021-04-16 7:43 ` Ori Kam
1 sibling, 1 reply; 68+ messages in thread
From: Thomas Monjalon @ 2021-04-15 16:46 UTC (permalink / raw)
To: orika, Gregory Etelson
Cc: dev, ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
jerinjacobk, olivier.matz, viacheslavo, matan, rasland
14/04/2021 18:09, Gregory Etelson:
> From: Ori Kam <orika@nvidia.com>
>
> Currently, DPDK application can offload the checksum check,
> and report it in the mbuf.
>
> However, as more and more applications are offloading some or all
> logic and action to the HW, there is a need to check the packet
> integrity so the right decision can be taken.
>
> The application logic can be positive meaning if the packet is
> valid jump / do actions, or negative if packet is not valid
> jump to SW / do actions (like drop) a, and add default flow
There is a typo here. What should it be?
> (match all in low priority) that will direct the miss packet
> to the miss path.
>
> Since currently rte_flow works in positive way the assumption is
> that the positive way will be the common way in this case also.
>
> When thinking what is the best API to implement such feature,
> we need to considure the following (in no specific order):
s/considure/consider/
> 1. API breakage.
> 2. Simplicity.
> 3. Performance.
> 4. HW capabilities.
> 5. rte_flow limitation.
> 6. Flexibility.
>
> First option: Add integrity flags to each of the items.
> For example add checksum_ok to ipv4 item.
>
> Pros:
> 1. No new rte_flow item.
> 2. Simple in the way that on each item the app can see
> what checks are available.
>
> Cons:
> 1. API breakage.
> 2. increase number of flows, since app can't add global rule and
> must have dedicated flow for each of the flow combinations, for example
> matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
> result in 5 flows.
>
> Second option: dedicated item
>
> Pros:
> 1. No API breakage, and there will be no for some time due to having
> extra space. (by using bits)
> 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
> IPv6.
> 3. Simplicity application can just look at one place to see all possible
> checks.
> 4. Allow future support for more tests.
>
> Cons:
> 1. New item, that holds number of fields from different items.
>
> For starter the following bits are suggested:
> 1. packet_ok - means that all HW checks depending on packet layer have
> passed. This may mean that in some HW such flow should be splited to
> number of flows or fail.
> 2. l2_ok - all check for layer 2 have passed.
> 3. l3_ok - all check for layer 3 have passed. If packet doesn't have
> l3 layer this check should fail.
> 4. l4_ok - all check for layer 4 have passed. If packet doesn't
> have l4 layer this check should fail.
> 5. l2_crc_ok - the layer 2 crc is O.K.
> 6. ipv4_csum_ok - IPv4 checksum is O.K. it is possible that the
> IPv4 checksum will be O.K. but the l3_ok will be 0. it is not
> possible that checksum will be 0 and the l3_ok will be 1.
> 7. l4_csum_ok - layer 4 checksum is O.K.
> 8. l3_len_OK - check that the reported layer 3 len is smaller than the
> frame len.
>
> Example of usage:
> 1. check packets from all possible layers for integrity.
> flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
>
> 2. Check only packet with layer 4 (UDP / TCP)
> flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
>
> Signed-off-by: Ori Kam <orika@nvidia.com>
> ---
> doc/guides/prog_guide/rte_flow.rst | 20 +++++++++++
> doc/guides/rel_notes/release_21_05.rst | 5 +++
> lib/librte_ethdev/rte_flow.h | 49 ++++++++++++++++++++++++++
> 3 files changed, 74 insertions(+)
>
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index e1b93ecedf..1dd2301a07 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -1398,6 +1398,26 @@ Matches a eCPRI header.
> - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
> - Default ``mask`` matches nothing, for all eCPRI messages.
>
> +Item: ``PACKET_INTEGRITY_CHECKS``
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +Matches packet integrity.
> +For some devices application needs to enable integration checks in HW
> +before using this item.
> +
> +- ``level``: the encapsulation level that should be checked. level 0 means the
> + default PMD mode (Can be inner most / outermost). value of 1 means outermost
> + and higher value means inner header. See also RSS level.
> +- ``packet_ok``: All HW packet integrity checks have passed based on the max
> + layer of the packet.
> +- ``l2_ok``: all layer 2 HW integrity checks passed.
> +- ``l3_ok``: all layer 3 HW integrity checks passed.
> +- ``l4_ok``: all layer 4 HW integrity checks passed.
> +- ``l2_crc_ok``: layer 2 crc check passed.
> +- ``ipv4_csum_ok``: ipv4 checksum check passed.
> +- ``l4_csum_ok``: layer 4 checksum check passed.
> +- ``l3_len_ok``: the layer 3 len is smaller than the frame len.
> +
> Actions
> ~~~~~~~
>
> diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
> index a0b907994a..986f749384 100644
> --- a/doc/guides/rel_notes/release_21_05.rst
> +++ b/doc/guides/rel_notes/release_21_05.rst
> @@ -168,6 +168,11 @@ New Features
> the events across multiple stages.
> * This also reduced the scheduling overhead on a event device.
>
> +* **Added packet integrity match to RTE flow rules.**
Please remove "RTE", it has no meaning. All in DPDK is "RTE".
> +
> + * Added ``PACKET_INTEGRITY_CHECKS`` flow item.
It is RTE_FLOW_ITEM_TYPE_INTEGRITY
> + * Added ``rte_flow_item_integrity`` data structure.
> +
This text should be sorted before drivers.
> --- a/lib/librte_ethdev/rte_flow.h
> +++ b/lib/librte_ethdev/rte_flow.h
> @@ -551,6 +551,17 @@ enum rte_flow_item_type {
> * See struct rte_flow_item_geneve_opt
> */
> RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> +
> + /**
> + * [META]
> + *
> + * Matches on packet integrity.
> + * For some devices application needs to enable integration checks in HW
> + * before using this item.
That's a bit fuzzy.
Do you mean some driver-specific API may be required?
> + *
> + * See struct rte_flow_item_integrity.
> + */
> + RTE_FLOW_ITEM_TYPE_INTEGRITY,
> };
> +__extension__
Why extension here?
If this is because of the anonymous union,
it should be RTE_STD_C11 before the union.
Same for the struct.
> +struct rte_flow_item_integrity {
> + uint32_t level;
> + /**< Packet encapsulation level the item should apply to.
> + * @see rte_flow_action_rss
> + */
Please insert comments before the struct member.
Instead of "Packet encapsulation", isn't it better understood as
"Tunnel encapsulation"? Not sure, please advise.
> + union {
> + struct {
> + uint64_t packet_ok:1;
> + /** The packet is valid after passing all HW checks. */
The doxygen syntax is missing < but it will be fine when moved before.
> + uint64_t l2_ok:1;
> + /**< L2 layer is valid after passing all HW checks. */
> + uint64_t l3_ok:1;
> + /**< L3 layer is valid after passing all HW checks. */
> + uint64_t l4_ok:1;
> + /**< L4 layer is valid after passing all HW checks. */
> + uint64_t l2_crc_ok:1;
> + /**< L2 layer crc is valid. */
s/crc/CRC/
> + uint64_t ipv4_csum_ok:1;
> + /**< IPv4 layer checksum is valid. */
> + uint64_t l4_csum_ok:1;
> + /**< L4 layer checksum is valid. */
> + uint64_t l3_len_ok:1;
> + /**< The l3 len is smaller than the frame len. */
s/len/length/g
> + uint64_t reserved:56;
> + };
> + uint64_t value;
double space
> + };
> +};
> +
> +#ifndef __cplusplus
> +static const struct rte_flow_item_integrity
> +rte_flow_item_integrity_mask = {
> + .level = 0,
> + .value = 0,
> +};
> +#endif
I'm pretty sure it breaks with some C compilers.
Why not for C++?
I see we have it already in rte_flow.h so we can keep it,
but that's something to double check for a future fix.
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/2] ethdev: add packet integrity checks
2021-04-15 16:46 ` Thomas Monjalon
@ 2021-04-16 7:43 ` Ori Kam
2021-04-18 8:15 ` Gregory Etelson
0 siblings, 1 reply; 68+ messages in thread
From: Ori Kam @ 2021-04-16 7:43 UTC (permalink / raw)
To: NBU-Contact-Thomas Monjalon, Gregory Etelson
Cc: dev, ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
jerinjacobk, olivier.matz, Slava Ovsiienko, Matan Azrad,
Raslan Darawsheh
Hi Thomas,
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Thursday, April 15, 2021 7:46 PM
> Subject: Re: [dpdk-dev] [PATCH v5 1/2] ethdev: add packet integrity checks
>
> 14/04/2021 18:09, Gregory Etelson:
> > From: Ori Kam <orika@nvidia.com>
> >
> > Currently, DPDK application can offload the checksum check,
> > and report it in the mbuf.
> >
> > However, as more and more applications are offloading some or all
> > logic and action to the HW, there is a need to check the packet
> > integrity so the right decision can be taken.
> >
> > The application logic can be positive meaning if the packet is
> > valid jump / do actions, or negative if packet is not valid
> > jump to SW / do actions (like drop) a, and add default flow
>
> There is a typo here. What should it be?
>
Simply remove the a.
> > (match all in low priority) that will direct the miss packet
> > to the miss path.
> >
> > Since currently rte_flow works in positive way the assumption is
> > that the positive way will be the common way in this case also.
> >
> > When thinking what is the best API to implement such feature,
> > we need to considure the following (in no specific order):
>
> s/considure/consider/
>
Will fix.
> > 1. API breakage.
> > 2. Simplicity.
> > 3. Performance.
> > 4. HW capabilities.
> > 5. rte_flow limitation.
> > 6. Flexibility.
> >
> > First option: Add integrity flags to each of the items.
> > For example add checksum_ok to ipv4 item.
> >
> > Pros:
> > 1. No new rte_flow item.
> > 2. Simple in the way that on each item the app can see
> > what checks are available.
> >
> > Cons:
> > 1. API breakage.
> > 2. increase number of flows, since app can't add global rule and
> > must have dedicated flow for each of the flow combinations, for example
> > matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
> > result in 5 flows.
> >
> > Second option: dedicated item
> >
> > Pros:
> > 1. No API breakage, and there will be no for some time due to having
> > extra space. (by using bits)
> > 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
> > IPv6.
> > 3. Simplicity application can just look at one place to see all possible
> > checks.
> > 4. Allow future support for more tests.
> >
> > Cons:
> > 1. New item, that holds number of fields from different items.
> >
> > For starter the following bits are suggested:
> > 1. packet_ok - means that all HW checks depending on packet layer have
> > passed. This may mean that in some HW such flow should be splited to
> > number of flows or fail.
> > 2. l2_ok - all check for layer 2 have passed.
> > 3. l3_ok - all check for layer 3 have passed. If packet doesn't have
> > l3 layer this check should fail.
> > 4. l4_ok - all check for layer 4 have passed. If packet doesn't
> > have l4 layer this check should fail.
> > 5. l2_crc_ok - the layer 2 crc is O.K.
> > 6. ipv4_csum_ok - IPv4 checksum is O.K. it is possible that the
> > IPv4 checksum will be O.K. but the l3_ok will be 0. it is not
> > possible that checksum will be 0 and the l3_ok will be 1.
> > 7. l4_csum_ok - layer 4 checksum is O.K.
> > 8. l3_len_OK - check that the reported layer 3 len is smaller than the
> > frame len.
> >
> > Example of usage:
> > 1. check packets from all possible layers for integrity.
> > flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
> >
> > 2. Check only packet with layer 4 (UDP / TCP)
> > flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
> >
> > Signed-off-by: Ori Kam <orika@nvidia.com>
> > ---
> > doc/guides/prog_guide/rte_flow.rst | 20 +++++++++++
> > doc/guides/rel_notes/release_21_05.rst | 5 +++
> > lib/librte_ethdev/rte_flow.h | 49 ++++++++++++++++++++++++++
> > 3 files changed, 74 insertions(+)
> >
> > diff --git a/doc/guides/prog_guide/rte_flow.rst
> b/doc/guides/prog_guide/rte_flow.rst
> > index e1b93ecedf..1dd2301a07 100644
> > --- a/doc/guides/prog_guide/rte_flow.rst
> > +++ b/doc/guides/prog_guide/rte_flow.rst
> > @@ -1398,6 +1398,26 @@ Matches a eCPRI header.
> > - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
> > - Default ``mask`` matches nothing, for all eCPRI messages.
> >
> > +Item: ``PACKET_INTEGRITY_CHECKS``
> > +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > +
> > +Matches packet integrity.
> > +For some devices application needs to enable integration checks in HW
> > +before using this item.
> > +
> > +- ``level``: the encapsulation level that should be checked. level 0 means the
> > + default PMD mode (Can be inner most / outermost). value of 1 means
> outermost
> > + and higher value means inner header. See also RSS level.
> > +- ``packet_ok``: All HW packet integrity checks have passed based on the
> max
> > + layer of the packet.
> > +- ``l2_ok``: all layer 2 HW integrity checks passed.
> > +- ``l3_ok``: all layer 3 HW integrity checks passed.
> > +- ``l4_ok``: all layer 4 HW integrity checks passed.
> > +- ``l2_crc_ok``: layer 2 crc check passed.
> > +- ``ipv4_csum_ok``: ipv4 checksum check passed.
> > +- ``l4_csum_ok``: layer 4 checksum check passed.
> > +- ``l3_len_ok``: the layer 3 len is smaller than the frame len.
> > +
> > Actions
> > ~~~~~~~
> >
> > diff --git a/doc/guides/rel_notes/release_21_05.rst
> b/doc/guides/rel_notes/release_21_05.rst
> > index a0b907994a..986f749384 100644
> > --- a/doc/guides/rel_notes/release_21_05.rst
> > +++ b/doc/guides/rel_notes/release_21_05.rst
> > @@ -168,6 +168,11 @@ New Features
> > the events across multiple stages.
> > * This also reduced the scheduling overhead on a event device.
> >
> > +* **Added packet integrity match to RTE flow rules.**
>
> Please remove "RTE", it has no meaning. All in DPDK is "RTE".
>
Sure.
> > +
> > + * Added ``PACKET_INTEGRITY_CHECKS`` flow item.
>
> It is RTE_FLOW_ITEM_TYPE_INTEGRITY
>
> > + * Added ``rte_flow_item_integrity`` data structure.
> > +
>
> This text should be sorted before drivers.
>
Sure.
> > --- a/lib/librte_ethdev/rte_flow.h
> > +++ b/lib/librte_ethdev/rte_flow.h
> > @@ -551,6 +551,17 @@ enum rte_flow_item_type {
> > * See struct rte_flow_item_geneve_opt
> > */
> > RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> > +
> > + /**
> > + * [META]
> > + *
> > + * Matches on packet integrity.
> > + * For some devices application needs to enable integration checks in
> HW
> > + * before using this item.
>
> That's a bit fuzzy.
> Do you mean some driver-specific API may be required?
>
I know it is a bit fuzzy but it is really HW dependent,
for example in case of some drivers there is nothing to be done.
In other cases the application may need to enable the RX checksum
offload, other drivers may need this cap be enabled by HW configuration.
> > + *
> > + * See struct rte_flow_item_integrity.
> > + */
> > + RTE_FLOW_ITEM_TYPE_INTEGRITY,
> > };
>
> > +__extension__
>
> Why extension here?
> If this is because of the anonymous union,
> it should be RTE_STD_C11 before the union.
> Same for the struct.
>
O.K
> > +struct rte_flow_item_integrity {
> > + uint32_t level;
> > + /**< Packet encapsulation level the item should apply to.
> > + * @see rte_flow_action_rss
> > + */
>
> Please insert comments before the struct member.
>
O.K.
> Instead of "Packet encapsulation", isn't it better understood as
> "Tunnel encapsulation"? Not sure, please advise.
>
I have no strong feeling ether way, so I don't mind the change if you think it
is clearer.
> > + union {
> > + struct {
> > + uint64_t packet_ok:1;
> > + /** The packet is valid after passing all HW checks. */
>
> The doxygen syntax is missing < but it will be fine when moved before.
>
Sure.
> > + uint64_t l2_ok:1;
> > + /**< L2 layer is valid after passing all HW checks. */
> > + uint64_t l3_ok:1;
> > + /**< L3 layer is valid after passing all HW checks. */
> > + uint64_t l4_ok:1;
> > + /**< L4 layer is valid after passing all HW checks. */
> > + uint64_t l2_crc_ok:1;
> > + /**< L2 layer crc is valid. */
>
> s/crc/CRC/
>
O.K.
> > + uint64_t ipv4_csum_ok:1;
> > + /**< IPv4 layer checksum is valid. */
> > + uint64_t l4_csum_ok:1;
> > + /**< L4 layer checksum is valid. */
> > + uint64_t l3_len_ok:1;
> > + /**< The l3 len is smaller than the frame len. */
>
> s/len/length/g
>
O.K.
> > + uint64_t reserved:56;
> > + };
> > + uint64_t value;
>
> double space
>
Sure.
> > + };
> > +};
> > +
> > +#ifndef __cplusplus
> > +static const struct rte_flow_item_integrity
> > +rte_flow_item_integrity_mask = {
> > + .level = 0,
> > + .value = 0,
> > +};
> > +#endif
>
> I'm pretty sure it breaks with some C compilers.
> Why not for C++?
> I see we have it already in rte_flow.h so we can keep it,
> but that's something to double check for a future fix.
>
Just like you said this is the practice used already,
>
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/2] ethdev: add packet integrity checks
2021-04-16 7:43 ` Ori Kam
@ 2021-04-18 8:15 ` Gregory Etelson
2021-04-18 18:00 ` Thomas Monjalon
0 siblings, 1 reply; 68+ messages in thread
From: Gregory Etelson @ 2021-04-18 8:15 UTC (permalink / raw)
To: Ori Kam, NBU-Contact-Thomas Monjalon
Cc: dev, ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
jerinjacobk, olivier.matz, Slava Ovsiienko, Matan Azrad,
Raslan Darawsheh
Hello Thomas,
Please see my comment on the use of RTE_STD_C11 below.
Regards,
Gregory.
> > > Currently, DPDK application can offload the checksum check, and
> > > report it in the mbuf.
> > >
> > > However, as more and more applications are offloading some or all
> > > logic and action to the HW, there is a need to check the packet
> > > integrity so the right decision can be taken.
> > >
> > > The application logic can be positive meaning if the packet is valid
> > > jump / do actions, or negative if packet is not valid jump to SW /
> > > do actions (like drop) a, and add default flow
> >
> > There is a typo here. What should it be?
> >
> Simply remove the a.
>
> > > (match all in low priority) that will direct the miss packet to the
> > > miss path.
> > >
> > > Since currently rte_flow works in positive way the assumption is
> > > that the positive way will be the common way in this case also.
> > >
> > > When thinking what is the best API to implement such feature, we
> > > need to considure the following (in no specific order):
> >
> > s/considure/consider/
> >
>
> Will fix.
>
> > > 1. API breakage.
> > > 2. Simplicity.
> > > 3. Performance.
> > > 4. HW capabilities.
> > > 5. rte_flow limitation.
> > > 6. Flexibility.
> > >
> > > First option: Add integrity flags to each of the items.
> > > For example add checksum_ok to ipv4 item.
> > >
> > > Pros:
> > > 1. No new rte_flow item.
> > > 2. Simple in the way that on each item the app can see what checks
> > > are available.
> > >
> > > Cons:
> > > 1. API breakage.
> > > 2. increase number of flows, since app can't add global rule and
> > > must have dedicated flow for each of the flow combinations, for
> example
> > > matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
> > > result in 5 flows.
> > >
> > > Second option: dedicated item
> > >
> > > Pros:
> > > 1. No API breakage, and there will be no for some time due to having
> > > extra space. (by using bits)
> > > 2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
> > > IPv6.
> > > 3. Simplicity application can just look at one place to see all possible
> > > checks.
> > > 4. Allow future support for more tests.
> > >
> > > Cons:
> > > 1. New item, that holds number of fields from different items.
> > >
> > > For starter the following bits are suggested:
> > > 1. packet_ok - means that all HW checks depending on packet layer
> have
> > > passed. This may mean that in some HW such flow should be splited
> to
> > > number of flows or fail.
> > > 2. l2_ok - all check for layer 2 have passed.
> > > 3. l3_ok - all check for layer 3 have passed. If packet doesn't have
> > > l3 layer this check should fail.
> > > 4. l4_ok - all check for layer 4 have passed. If packet doesn't
> > > have l4 layer this check should fail.
> > > 5. l2_crc_ok - the layer 2 crc is O.K.
> > > 6. ipv4_csum_ok - IPv4 checksum is O.K. it is possible that the
> > > IPv4 checksum will be O.K. but the l3_ok will be 0. it is not
> > > possible that checksum will be 0 and the l3_ok will be 1.
> > > 7. l4_csum_ok - layer 4 checksum is O.K.
> > > 8. l3_len_OK - check that the reported layer 3 len is smaller than the
> > > frame len.
> > >
> > > Example of usage:
> > > 1. check packets from all possible layers for integrity.
> > > flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
> > >
> > > 2. Check only packet with layer 4 (UDP / TCP)
> > > flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1
> > > l4_ok = 1
> > >
> > > Signed-off-by: Ori Kam <orika@nvidia.com>
> > > ---
> > > doc/guides/prog_guide/rte_flow.rst | 20 +++++++++++
> > > doc/guides/rel_notes/release_21_05.rst | 5 +++
> > > lib/librte_ethdev/rte_flow.h | 49
> ++++++++++++++++++++++++++
> > > 3 files changed, 74 insertions(+)
> > >
> > > diff --git a/doc/guides/prog_guide/rte_flow.rst
> > b/doc/guides/prog_guide/rte_flow.rst
> > > index e1b93ecedf..1dd2301a07 100644
> > > --- a/doc/guides/prog_guide/rte_flow.rst
> > > +++ b/doc/guides/prog_guide/rte_flow.rst
> > > @@ -1398,6 +1398,26 @@ Matches a eCPRI header.
> > > - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
> > > - Default ``mask`` matches nothing, for all eCPRI messages.
> > >
> > > +Item: ``PACKET_INTEGRITY_CHECKS``
> > > +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> > > +
> > > +Matches packet integrity.
> > > +For some devices application needs to enable integration checks in
> > > +HW before using this item.
> > > +
> > > +- ``level``: the encapsulation level that should be checked. level
> > > +0 means the
> > > + default PMD mode (Can be inner most / outermost). value of 1
> > > +means
> > outermost
> > > + and higher value means inner header. See also RSS level.
> > > +- ``packet_ok``: All HW packet integrity checks have passed based
> > > +on the
> > max
> > > + layer of the packet.
> > > +- ``l2_ok``: all layer 2 HW integrity checks passed.
> > > +- ``l3_ok``: all layer 3 HW integrity checks passed.
> > > +- ``l4_ok``: all layer 4 HW integrity checks passed.
> > > +- ``l2_crc_ok``: layer 2 crc check passed.
> > > +- ``ipv4_csum_ok``: ipv4 checksum check passed.
> > > +- ``l4_csum_ok``: layer 4 checksum check passed.
> > > +- ``l3_len_ok``: the layer 3 len is smaller than the frame len.
> > > +
> > > Actions
> > > ~~~~~~~
> > >
> > > diff --git a/doc/guides/rel_notes/release_21_05.rst
> > b/doc/guides/rel_notes/release_21_05.rst
> > > index a0b907994a..986f749384 100644
> > > --- a/doc/guides/rel_notes/release_21_05.rst
> > > +++ b/doc/guides/rel_notes/release_21_05.rst
> > > @@ -168,6 +168,11 @@ New Features
> > > the events across multiple stages.
> > > * This also reduced the scheduling overhead on a event device.
> > >
> > > +* **Added packet integrity match to RTE flow rules.**
> >
> > Please remove "RTE", it has no meaning. All in DPDK is "RTE".
> >
>
> Sure.
>
> > > +
> > > + * Added ``PACKET_INTEGRITY_CHECKS`` flow item.
> >
> > It is RTE_FLOW_ITEM_TYPE_INTEGRITY
> >
> > > + * Added ``rte_flow_item_integrity`` data structure.
> > > +
> >
> > This text should be sorted before drivers.
> >
>
> Sure.
>
> > > --- a/lib/librte_ethdev/rte_flow.h
> > > +++ b/lib/librte_ethdev/rte_flow.h
> > > @@ -551,6 +551,17 @@ enum rte_flow_item_type {
> > > * See struct rte_flow_item_geneve_opt
> > > */
> > > RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
> > > +
> > > + /**
> > > + * [META]
> > > + *
> > > + * Matches on packet integrity.
> > > + * For some devices application needs to enable integration checks
> > > +in
> > HW
> > > + * before using this item.
> >
> > That's a bit fuzzy.
> > Do you mean some driver-specific API may be required?
> >
>
> I know it is a bit fuzzy but it is really HW dependent, for example in case of
> some drivers there is nothing to be done.
> In other cases the application may need to enable the RX checksum offload,
> other drivers may need this cap be enabled by HW configuration.
>
> > > + *
> > > + * See struct rte_flow_item_integrity.
> > > + */
> > > + RTE_FLOW_ITEM_TYPE_INTEGRITY,
> > > };
> >
> > > +__extension__
> >
> > Why extension here?
> > If this is because of the anonymous union, it should be RTE_STD_C11
> > before the union.
> > Same for the struct.
> >
> O.K
>
The RTE_STD_C11 macro fails compilation on
RHEL-7.9 with gcc version 4.8.5 20150623 (Red Hat 4.8.5-44)
> > > +struct rte_flow_item_integrity {
> > > + uint32_t level;
> > > + /**< Packet encapsulation level the item should apply to.
> > > + * @see rte_flow_action_rss
> > > + */
> >
> > Please insert comments before the struct member.
> >
> O.K.
>
> > Instead of "Packet encapsulation", isn't it better understood as
> > "Tunnel encapsulation"? Not sure, please advise.
> >
> I have no strong feeling ether way, so I don't mind the change if you think it
> is clearer.
>
> > > + union {
> > > + struct {
> > > + uint64_t packet_ok:1;
> > > + /** The packet is valid after passing all HW checks.
> */
> >
> > The doxygen syntax is missing < but it will be fine when moved before.
> >
> Sure.
>
> > > + uint64_t l2_ok:1;
> > > + /**< L2 layer is valid after passing all HW checks. */
> > > + uint64_t l3_ok:1;
> > > + /**< L3 layer is valid after passing all HW checks. */
> > > + uint64_t l4_ok:1;
> > > + /**< L4 layer is valid after passing all HW checks. */
> > > + uint64_t l2_crc_ok:1;
> > > + /**< L2 layer crc is valid. */
> >
> > s/crc/CRC/
> >
> O.K.
>
> > > + uint64_t ipv4_csum_ok:1;
> > > + /**< IPv4 layer checksum is valid. */
> > > + uint64_t l4_csum_ok:1;
> > > + /**< L4 layer checksum is valid. */
> > > + uint64_t l3_len_ok:1;
> > > + /**< The l3 len is smaller than the frame len. */
> >
> > s/len/length/g
> >
> O.K.
>
> > > + uint64_t reserved:56;
> > > + };
> > > + uint64_t value;
> >
> > double space
> >
> Sure.
>
> > > + };
> > > +};
> > > +
> > > +#ifndef __cplusplus
> > > +static const struct rte_flow_item_integrity
> > > +rte_flow_item_integrity_mask = {
> > > + .level = 0,
> > > + .value = 0,
> > > +};
> > > +#endif
> >
> > I'm pretty sure it breaks with some C compilers.
> > Why not for C++?
> > I see we have it already in rte_flow.h so we can keep it, but that's
> > something to double check for a future fix.
> >
> Just like you said this is the practice used already,
>
> >
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v6 0/2] add packet integrity checks
2021-04-07 10:32 ` Ori Kam
` (3 preceding siblings ...)
2021-04-14 16:09 ` [dpdk-dev] [PATCH v5 0/2] add packet integrity checks Gregory Etelson
@ 2021-04-18 15:51 ` Gregory Etelson
2021-04-18 15:51 ` [dpdk-dev] [PATCH v6 1/2] ethdev: " Gregory Etelson
2021-04-18 15:51 ` [dpdk-dev] [PATCH v6 2/2] app/testpmd: add support for integrity item Gregory Etelson
2021-04-19 8:29 ` [dpdk-dev] [PATCH v7 0/2] add packet integrity checks Gregory Etelson
2021-04-19 12:44 ` [dpdk-dev] [PATCH v8 " Gregory Etelson
6 siblings, 2 replies; 68+ messages in thread
From: Gregory Etelson @ 2021-04-18 15:51 UTC (permalink / raw)
To: orika
Cc: ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
jerinjacobk, olivier.matz, thomas, viacheslavo, getelson, matan,
rasland
v6: update API comments
Ori Kam (2):
ethdev: add packet integrity checks
app/testpmd: add support for integrity item
app/test-pmd/cmdline_flow.c | 39 ++++++++++++++++
doc/guides/prog_guide/rte_flow.rst | 20 +++++++++
doc/guides/rel_notes/release_21_05.rst | 5 +++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 28 ++++++++++++
lib/librte_ethdev/rte_flow.h | 49 +++++++++++++++++++++
5 files changed, 141 insertions(+)
--
2.25.1
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v6 1/2] ethdev: add packet integrity checks
2021-04-18 15:51 ` [dpdk-dev] [PATCH v6 " Gregory Etelson
@ 2021-04-18 15:51 ` Gregory Etelson
2021-04-18 18:11 ` Thomas Monjalon
2021-04-18 15:51 ` [dpdk-dev] [PATCH v6 2/2] app/testpmd: add support for integrity item Gregory Etelson
1 sibling, 1 reply; 68+ messages in thread
From: Gregory Etelson @ 2021-04-18 15:51 UTC (permalink / raw)
To: orika
Cc: ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
jerinjacobk, olivier.matz, thomas, viacheslavo, getelson, matan,
rasland
From: Ori Kam <orika@nvidia.com>
Currently, DPDK application can offload the checksum check,
and report it in the mbuf.
However, as more and more applications are offloading some or all
logic and action to the HW, there is a need to check the packet
integrity so the right decision can be taken.
The application logic can be positive meaning if the packet is
valid jump / do actions, or negative if packet is not valid
jump to SW / do actions (like drop) and add default flow
(match all in low priority) that will direct the miss packet
to the miss path.
Since currently rte_flow works in positive way the assumption is
that the positive way will be the common way in this case also.
When thinking what is the best API to implement such feature,
we need to consider the following (in no specific order):
1. API breakage.
2. Simplicity.
3. Performance.
4. HW capabilities.
5. rte_flow limitation.
6. Flexibility.
First option: Add integrity flags to each of the items.
For example add checksum_ok to ipv4 item.
Pros:
1. No new rte_flow item.
2. Simple in the way that on each item the app can see
what checks are available.
Cons:
1. API breakage.
2. increase number of flows, since app can't add global rule and
must have dedicated flow for each of the flow combinations, for example
matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
result in 5 flows.
Second option: dedicated item
Pros:
1. No API breakage, and there will be no for some time due to having
extra space. (by using bits)
2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
IPv6.
3. Simplicity application can just look at one place to see all possible
checks.
4. Allow future support for more tests.
Cons:
1. New item, that holds number of fields from different items.
For starter the following bits are suggested:
1. packet_ok - means that all HW checks depending on packet layer have
passed. This may mean that in some HW such flow should be splited to
number of flows or fail.
2. l2_ok - all check for layer 2 have passed.
3. l3_ok - all check for layer 3 have passed. If packet doesn't have
l3 layer this check should fail.
4. l4_ok - all check for layer 4 have passed. If packet doesn't
have l4 layer this check should fail.
5. l2_crc_ok - the layer 2 crc is O.K.
6. ipv4_csum_ok - IPv4 checksum is O.K. it is possible that the
IPv4 checksum will be O.K. but the l3_ok will be 0. it is not
possible that checksum will be 0 and the l3_ok will be 1.
7. l4_csum_ok - layer 4 checksum is O.K.
8. l3_len_OK - check that the reported layer 3 len is smaller than the
frame len.
Example of usage:
1. check packets from all possible layers for integrity.
flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
2. Check only packet with layer 4 (UDP / TCP)
flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
Signed-off-by: Ori Kam <orika@nvidia.com>
---
doc/guides/prog_guide/rte_flow.rst | 20 +++++++++++
doc/guides/rel_notes/release_21_05.rst | 5 +++
lib/librte_ethdev/rte_flow.h | 49 ++++++++++++++++++++++++++
3 files changed, 74 insertions(+)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index e1b93ecedf..1dd2301a07 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1398,6 +1398,26 @@ Matches a eCPRI header.
- ``hdr``: eCPRI header definition (``rte_ecpri.h``).
- Default ``mask`` matches nothing, for all eCPRI messages.
+Item: ``PACKET_INTEGRITY_CHECKS``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Matches packet integrity.
+For some devices application needs to enable integration checks in HW
+before using this item.
+
+- ``level``: the encapsulation level that should be checked. level 0 means the
+ default PMD mode (Can be inner most / outermost). value of 1 means outermost
+ and higher value means inner header. See also RSS level.
+- ``packet_ok``: All HW packet integrity checks have passed based on the max
+ layer of the packet.
+- ``l2_ok``: all layer 2 HW integrity checks passed.
+- ``l3_ok``: all layer 3 HW integrity checks passed.
+- ``l4_ok``: all layer 4 HW integrity checks passed.
+- ``l2_crc_ok``: layer 2 crc check passed.
+- ``ipv4_csum_ok``: ipv4 checksum check passed.
+- ``l4_csum_ok``: layer 4 checksum check passed.
+- ``l3_len_ok``: the layer 3 len is smaller than the frame len.
+
Actions
~~~~~~~
diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
index 82ee71152f..b1c90f4d9f 100644
--- a/doc/guides/rel_notes/release_21_05.rst
+++ b/doc/guides/rel_notes/release_21_05.rst
@@ -55,6 +55,11 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Added packet integrity match to flow rules.**
+
+ * Added ``RTE_FLOW_ITEM_TYPE_INTEGRITY`` flow item.
+ * Added ``rte_flow_item_integrity`` data structure.
+
* **Added support for Marvell CN10K SoC drivers.**
Added Marvell CN10K SoC support. Marvell CN10K SoC are based on Octeon 10
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index 203c4cde9a..bef5c770c5 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -551,6 +551,17 @@ enum rte_flow_item_type {
* See struct rte_flow_item_geneve_opt
*/
RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
+
+ /**
+ * [META]
+ *
+ * Matches on packet integrity.
+ * For some devices application needs to enable integration checks in HW
+ * before using this item.
+ *
+ * See struct rte_flow_item_integrity.
+ */
+ RTE_FLOW_ITEM_TYPE_INTEGRITY,
};
/**
@@ -1685,6 +1696,44 @@ rte_flow_item_geneve_opt_mask = {
};
#endif
+__extension__
+struct rte_flow_item_integrity {
+ /**< Tunnel encapsulation level the item should apply to.
+ * @see rte_flow_action_rss
+ */
+ uint32_t level;
+ union {
+ struct {
+ /**< The packet is valid after passing all HW checks. */
+ uint64_t packet_ok:1;
+ /**< L2 layer is valid after passing all HW checks. */
+ uint64_t l2_ok:1;
+ /**< L3 layer is valid after passing all HW checks. */
+ uint64_t l3_ok:1;
+ /**< L4 layer is valid after passing all HW checks. */
+ uint64_t l4_ok:1;
+ /**< L2 layer CRC is valid. */
+ uint64_t l2_crc_ok:1;
+ /**< IPv4 layer checksum is valid. */
+ uint64_t ipv4_csum_ok:1;
+ /**< L4 layer checksum is valid. */
+ uint64_t l4_csum_ok:1;
+ /**< The l3 length is smaller than the frame length. */
+ uint64_t l3_len_ok:1;
+ uint64_t reserved:56;
+ };
+ uint64_t value;
+ };
+};
+
+#ifndef __cplusplus
+static const struct rte_flow_item_integrity
+rte_flow_item_integrity_mask = {
+ .level = 0,
+ .value = 0,
+};
+#endif
+
/**
* Matching pattern item definition.
*
--
2.25.1
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v6 2/2] app/testpmd: add support for integrity item
2021-04-18 15:51 ` [dpdk-dev] [PATCH v6 " Gregory Etelson
2021-04-18 15:51 ` [dpdk-dev] [PATCH v6 1/2] ethdev: " Gregory Etelson
@ 2021-04-18 15:51 ` Gregory Etelson
1 sibling, 0 replies; 68+ messages in thread
From: Gregory Etelson @ 2021-04-18 15:51 UTC (permalink / raw)
To: orika
Cc: ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
jerinjacobk, olivier.matz, thomas, viacheslavo, getelson, matan,
rasland, Xiaoyun Li
From: Ori Kam <orika@nvidia.com>
The integrity item allows the application to match
on the integrity of a packet.
use example:
match that packet integrity checks are ok. The checks depend on
packet layers. For example ICMP packet will not check L4 level.
flow create 0 ingress pattern integrity value mask 0x01 value spec 0x01
match that L4 packet is ok - check L2 & L3 & L4 layers:
flow create 0 ingress pattern integrity value mask 0xfe value spec 0xfe
Signed-off-by: Ori Kam <orika@nvidia.com>
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
---
app/test-pmd/cmdline_flow.c | 39 +++++++++++++++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 28 +++++++++++++++
2 files changed, 67 insertions(+)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 0127d9e7d6..a1b0fa4a32 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -293,6 +293,9 @@ enum index {
ITEM_GENEVE_OPT_TYPE,
ITEM_GENEVE_OPT_LENGTH,
ITEM_GENEVE_OPT_DATA,
+ ITEM_INTEGRITY,
+ ITEM_INTEGRITY_LEVEL,
+ ITEM_INTEGRITY_VALUE,
/* Validate/create actions. */
ACTIONS,
@@ -968,6 +971,7 @@ static const enum index next_item[] = {
ITEM_PFCP,
ITEM_ECPRI,
ITEM_GENEVE_OPT,
+ ITEM_INTEGRITY,
END_SET,
ZERO,
};
@@ -1319,6 +1323,19 @@ static const enum index item_geneve_opt[] = {
ZERO,
};
+static const enum index item_integrity[] = {
+ ITEM_INTEGRITY_LEVEL,
+ ITEM_INTEGRITY_VALUE,
+ ZERO,
+};
+
+static const enum index item_integrity_lv[] = {
+ ITEM_INTEGRITY_LEVEL,
+ ITEM_INTEGRITY_VALUE,
+ ITEM_NEXT,
+ ZERO,
+};
+
static const enum index next_action[] = {
ACTION_END,
ACTION_VOID,
@@ -3400,6 +3417,28 @@ static const struct token token_list[] = {
(sizeof(struct rte_flow_item_geneve_opt),
ITEM_GENEVE_OPT_DATA_SIZE)),
},
+ [ITEM_INTEGRITY] = {
+ .name = "integrity",
+ .help = "match packet integrity",
+ .priv = PRIV_ITEM(INTEGRITY,
+ sizeof(struct rte_flow_item_integrity)),
+ .next = NEXT(item_integrity),
+ .call = parse_vc,
+ },
+ [ITEM_INTEGRITY_LEVEL] = {
+ .name = "level",
+ .help = "integrity level",
+ .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity, level)),
+ },
+ [ITEM_INTEGRITY_VALUE] = {
+ .name = "value",
+ .help = "integrity value",
+ .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity, value)),
+ },
/* Validate/create actions. */
[ACTIONS] = {
.name = "actions",
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index e3bfed566d..feaae9350b 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3791,6 +3791,13 @@ This section lists supported pattern items and their attributes, if any.
- ``s_field {unsigned}``: S field.
- ``seid {unsigned}``: session endpoint identifier.
+- ``integrity``: match packet integrity.
+
+ - ``level {unsigned}``: Packet encapsulation level the item should
+ apply to. See rte_flow_action_rss for details.
+ - ``value {unsigned}``: A bitmask that specify what packet elements
+ must be matched for integrity.
+
Actions list
^^^^^^^^^^^^
@@ -4925,6 +4932,27 @@ NVGRE encapsulation header and sent to port id 0.
testpmd> flow create 0 ingress transfer pattern eth / end actions
sample ratio 1 index 0 / port_id id 2 / end
+Sample integrity rules
+~~~~~~~~~~~~~~~~~~~~~~
+
+Integrity rules can be created by the following commands:
+
+Integrity rule that forwards valid TCP packets to group 1.
+TCP packet integrity is matched with the ``l4_ok`` bit 3.
+
+::
+
+ testpmd> flow create 0 ingress
+ pattern eth / ipv4 / tcp / integrity value mask 8 value spec 8 / end
+ actions jump group 1 / end
+
+Integrity rule that forwards invalid packets to application.
+General packet integrity is matched with the ``packet_ok`` bit 0.
+
+::
+
+ testpmd> flow create 0 ingress pattern integrity value mask 1 value spec 0 / end actions queue index 0 / end
+
BPF Functions
--------------
--
2.25.1
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v5 1/2] ethdev: add packet integrity checks
2021-04-18 8:15 ` Gregory Etelson
@ 2021-04-18 18:00 ` Thomas Monjalon
0 siblings, 0 replies; 68+ messages in thread
From: Thomas Monjalon @ 2021-04-18 18:00 UTC (permalink / raw)
To: Ori Kam, Gregory Etelson
Cc: dev, ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
jerinjacobk, olivier.matz, Slava Ovsiienko, Matan Azrad,
Raslan Darawsheh
18/04/2021 10:15, Gregory Etelson:
> > > > +__extension__
> > >
> > > Why extension here?
> > > If this is because of the anonymous union, it should be RTE_STD_C11
> > > before the union.
> > > Same for the struct.
> > >
> > O.K
>
> The RTE_STD_C11 macro fails compilation on
> RHEL-7.9 with gcc version 4.8.5 20150623 (Red Hat 4.8.5-44)
This macro is used eveywhere in DPDK.
What is failing exactly?
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/2] ethdev: add packet integrity checks
2021-04-18 15:51 ` [dpdk-dev] [PATCH v6 1/2] ethdev: " Gregory Etelson
@ 2021-04-18 18:11 ` Thomas Monjalon
2021-04-18 19:24 ` Gregory Etelson
0 siblings, 1 reply; 68+ messages in thread
From: Thomas Monjalon @ 2021-04-18 18:11 UTC (permalink / raw)
To: Gregory Etelson
Cc: orika, ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit,
jerinj, jerinjacobk, olivier.matz, viacheslavo, getelson, matan,
rasland
18/04/2021 17:51, Gregory Etelson:
> +__extension__
That still doesn't make sense, as in v5.
The things which require a macro are anonymous union,
anonymous struct and some bit fields with special sizes.
> +struct rte_flow_item_integrity {
> + /**< Tunnel encapsulation level the item should apply to.
> + * @see rte_flow_action_rss
> + */
> + uint32_t level;
Should have RTE_STD_C11 here.
> + union {
Should have RTE_STD_C11 here.
> + struct {
> + /**< The packet is valid after passing all HW checks. */
> + uint64_t packet_ok:1;
> + /**< L2 layer is valid after passing all HW checks. */
> + uint64_t l2_ok:1;
> + /**< L3 layer is valid after passing all HW checks. */
> + uint64_t l3_ok:1;
> + /**< L4 layer is valid after passing all HW checks. */
> + uint64_t l4_ok:1;
> + /**< L2 layer CRC is valid. */
> + uint64_t l2_crc_ok:1;
> + /**< IPv4 layer checksum is valid. */
> + uint64_t ipv4_csum_ok:1;
> + /**< L4 layer checksum is valid. */
> + uint64_t l4_csum_ok:1;
> + /**< The l3 length is smaller than the frame length. */
> + uint64_t l3_len_ok:1;
> + uint64_t reserved:56;
The reserved space looks useless since it is in an union.
> + };
I'm not sure about the 64-bit bitfields.
Maybe that's why you need __extension__.
I feel 32 bits are enough.
> + uint64_t value;
> + };
> +};
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/2] ethdev: add packet integrity checks
2021-04-18 18:11 ` Thomas Monjalon
@ 2021-04-18 19:24 ` Gregory Etelson
2021-04-18 21:30 ` Thomas Monjalon
0 siblings, 1 reply; 68+ messages in thread
From: Gregory Etelson @ 2021-04-18 19:24 UTC (permalink / raw)
To: NBU-Contact-Thomas Monjalon
Cc: Ori Kam, ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit,
jerinj, jerinjacobk, olivier.matz, Slava Ovsiienko, Matan Azrad,
Raslan Darawsheh
Hello Thomas,
I modified the following drivers/net/mlx5/mlx5_flow_age.c compilation command
to produce pre-processed source code output:
1 # 1 "../drivers/net/mlx5/mlx5_flow_age.c"
2 # 1 "/.autodirect/mtrswgwork/getelson/src/dpdk/stable/build-dev//"
3 # 1 "<built-in>"
4 #define __STDC__ 1
** 5 #define __STDC_VERSION__ 201112L
6 #define __STDC_UTF_16__ 1
According to the result, the built-in __STDC_VERSION__ macro was set to 201112L.
Therefore, in rte_common.h, RTE_STD_C11 macro was evaluated as empty value:
Source code:
30 #ifndef typeof
31 #define typeof __typeof__
32 #endif
33
34 #ifndef asm
35 #define asm __asm__
36 #endif
37
38 /** C extension macro for environments lacking C11 features. */
39 #if !defined(__STDC_VERSION__) || __STDC_VERSION__ < 201112L
40 #define RTE_STD_C11 __extension__
41 #else
42 #define RTE_STD_C11
43 #endif
Preprocessor output:
# 29 "../lib/librte_eal/include/rte_common.h" 2
#define typeof __typeof__
#define asm __asm__
#define RTE_STD_C11
According to these results, RTE_STD_C11 location in code has no significance,
because it will always be replaced with empty string.
After I changed RTE_STD_C11 condition like this:
- #if !defined(__STDC_VERSION__) || __STDC_VERSION__ < 201112L
+ #if !defined(__STDC_VERSION__) || __STDC_VERSION__ <= 201112L
-__extension__
+RTE_STD_C11
struct rte_flow_item_integrity {
the compilation completed successfully both for 32 and 64 bits value.
Regards,
Gregory.
The compilation command was copied from `ninja --verbose` output:
cc -Idrivers/libtmp_rte_net_mlx5.a.p -Idrivers -I../drivers -Idrivers/net/mlx5 -I../drivers/net/mlx5 \
-Idrivers/net/mlx5/linux -I../drivers/net/mlx5/linux -Ilib/librte_ethdev -I../lib/librte_ethdev \
-I. -I.. -Iconfig -I../config -Ilib/librte_eal/include -I../lib/librte_eal/include -Ilib/librte_eal/linux/include \
-I../lib/librte_eal/linux/include -Ilib/librte_eal/x86/include -I../lib/librte_eal/x86/include \
-Ilib/librte_eal/common -I../lib/librte_eal/common -Ilib/librte_eal -I../lib/librte_eal \
-Ilib/librte_kvargs -I../lib/librte_kvargs -Ilib/librte_metrics -I../lib/librte_metrics \
-Ilib/librte_telemetry -I../lib/librte_telemetry -Ilib/librte_net -I../lib/librte_net \
-Ilib/librte_mbuf -I../lib/librte_mbuf -Ilib/librte_mempool -I../lib/librte_mempool \
-Ilib/librte_ring -I../lib/librte_ring -Ilib/librte_meter -I../lib/librte_meter -Idrivers/bus/pci \
-I../drivers/bus/pci -I../drivers/bus/pci/linux -Ilib/librte_pci -I../lib/librte_pci \
-Idrivers/bus/vdev -I../drivers/bus/vdev -Ilib/librte_hash -I../lib/librte_hash \
-Ilib/librte_rcu -I../lib/librte_rcu -Idrivers/common/mlx5 -I../drivers/common/mlx5 \
-Idrivers/common/mlx5/linux -I../drivers/common/mlx5/linux -I/usr//usr/include \
-I/usr/include/libnl3 -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -g \
-include rte_config.h -Wextra -Wcast-qual -Wdeprecated -Wformat \
-Wformat-nonliteral -Wformat-security -Wmissing-declarations -Wmissing-prototypes \
-Wnested-externs -Wold-style-definition -Wpointer-arith -Wsign-compare -Wstrict-prototypes \
-Wundef -Wwrite-strings -Wno-missing-field-initializers -D_GNU_SOURCE -fPIC \
-march=native -DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API -std=c11 \
-Wno-strict-prototypes -D_BSD_SOURCE -D_DEFAULT_SOURCE -D_XOPEN_SOURCE=600 \
-pedantic -DPEDANTIC -MD -MQ drivers/libtmp_rte_net_mlx5.a.p/net_mlx5_mlx5_flow_age.c.o \
-MF drivers/libtmp_rte_net_mlx5.a.p/net_mlx5_mlx5_flow_age.c.o.d \
-o drivers/libtmp_rte_net_mlx5.a.p/net_mlx5_mlx5_flow_age.c.o -c ../drivers/net/mlx5/mlx5_flow_age.c
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Sunday, April 18, 2021 21:12
> To: Gregory Etelson <getelson@nvidia.com>
> Cc: Ori Kam <orika@nvidia.com>; ajit.khaparde@broadcom.com;
> andrew.rybchenko@oktetlabs.ru; dev@dpdk.org; ferruh.yigit@intel.com;
> jerinj@marvell.com; jerinjacobk@gmail.com; olivier.matz@6wind.com;
> Slava Ovsiienko <viacheslavo@nvidia.com>; Gregory Etelson
> <getelson@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan
> Darawsheh <rasland@nvidia.com>
> Subject: Re: [PATCH v6 1/2] ethdev: add packet integrity checks
>
> External email: Use caution opening links or attachments
>
>
> 18/04/2021 17:51, Gregory Etelson:
> > +__extension__
>
> That still doesn't make sense, as in v5.
> The things which require a macro are anonymous union, anonymous struct
> and some bit fields with special sizes.
>
> > +struct rte_flow_item_integrity {
> > + /**< Tunnel encapsulation level the item should apply to.
> > + * @see rte_flow_action_rss
> > + */
> > + uint32_t level;
>
> Should have RTE_STD_C11 here.
>
> > + union {
>
> Should have RTE_STD_C11 here.
>
> > + struct {
> > + /**< The packet is valid after passing all HW checks. */
> > + uint64_t packet_ok:1;
> > + /**< L2 layer is valid after passing all HW checks. */
> > + uint64_t l2_ok:1;
> > + /**< L3 layer is valid after passing all HW checks. */
> > + uint64_t l3_ok:1;
> > + /**< L4 layer is valid after passing all HW checks. */
> > + uint64_t l4_ok:1;
> > + /**< L2 layer CRC is valid. */
> > + uint64_t l2_crc_ok:1;
> > + /**< IPv4 layer checksum is valid. */
> > + uint64_t ipv4_csum_ok:1;
> > + /**< L4 layer checksum is valid. */
> > + uint64_t l4_csum_ok:1;
> > + /**< The l3 length is smaller than the frame length. */
> > + uint64_t l3_len_ok:1;
> > + uint64_t reserved:56;
>
> The reserved space looks useless since it is in an union.
>
> > + };
>
> I'm not sure about the 64-bit bitfields.
> Maybe that's why you need __extension__.
> I feel 32 bits are enough.
>
> > + uint64_t value;
> > + };
> > +};
>
>
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v6 1/2] ethdev: add packet integrity checks
2021-04-18 19:24 ` Gregory Etelson
@ 2021-04-18 21:30 ` Thomas Monjalon
0 siblings, 0 replies; 68+ messages in thread
From: Thomas Monjalon @ 2021-04-18 21:30 UTC (permalink / raw)
To: Gregory Etelson
Cc: Ori Kam, ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit,
jerinj, jerinjacobk, olivier.matz, Slava Ovsiienko, Matan Azrad,
Raslan Darawsheh
Please read again the comment I did below,
and try 32-bit bitfield instead of 64-bit.
18/04/2021 21:24, Gregory Etelson:
> Hello Thomas,
>
> I modified the following drivers/net/mlx5/mlx5_flow_age.c compilation command
> to produce pre-processed source code output:
>
> 1 # 1 "../drivers/net/mlx5/mlx5_flow_age.c"
> 2 # 1 "/.autodirect/mtrswgwork/getelson/src/dpdk/stable/build-dev//"
> 3 # 1 "<built-in>"
> 4 #define __STDC__ 1
> ** 5 #define __STDC_VERSION__ 201112L
> 6 #define __STDC_UTF_16__ 1
>
> According to the result, the built-in __STDC_VERSION__ macro was set to 201112L.
> Therefore, in rte_common.h, RTE_STD_C11 macro was evaluated as empty value:
>
> Source code:
> 30 #ifndef typeof
> 31 #define typeof __typeof__
> 32 #endif
> 33
> 34 #ifndef asm
> 35 #define asm __asm__
> 36 #endif
> 37
> 38 /** C extension macro for environments lacking C11 features. */
> 39 #if !defined(__STDC_VERSION__) || __STDC_VERSION__ < 201112L
> 40 #define RTE_STD_C11 __extension__
> 41 #else
> 42 #define RTE_STD_C11
> 43 #endif
>
> Preprocessor output:
> # 29 "../lib/librte_eal/include/rte_common.h" 2
> #define typeof __typeof__
> #define asm __asm__
> #define RTE_STD_C11
>
> According to these results, RTE_STD_C11 location in code has no significance,
> because it will always be replaced with empty string.
> After I changed RTE_STD_C11 condition like this:
>
> - #if !defined(__STDC_VERSION__) || __STDC_VERSION__ < 201112L
> + #if !defined(__STDC_VERSION__) || __STDC_VERSION__ <= 201112L
>
> -__extension__
> +RTE_STD_C11
> struct rte_flow_item_integrity {
>
> the compilation completed successfully both for 32 and 64 bits value.
>
> Regards,
> Gregory.
>
> The compilation command was copied from `ninja --verbose` output:
> cc -Idrivers/libtmp_rte_net_mlx5.a.p -Idrivers -I../drivers -Idrivers/net/mlx5 -I../drivers/net/mlx5 \
> -Idrivers/net/mlx5/linux -I../drivers/net/mlx5/linux -Ilib/librte_ethdev -I../lib/librte_ethdev \
> -I. -I.. -Iconfig -I../config -Ilib/librte_eal/include -I../lib/librte_eal/include -Ilib/librte_eal/linux/include \
> -I../lib/librte_eal/linux/include -Ilib/librte_eal/x86/include -I../lib/librte_eal/x86/include \
> -Ilib/librte_eal/common -I../lib/librte_eal/common -Ilib/librte_eal -I../lib/librte_eal \
> -Ilib/librte_kvargs -I../lib/librte_kvargs -Ilib/librte_metrics -I../lib/librte_metrics \
> -Ilib/librte_telemetry -I../lib/librte_telemetry -Ilib/librte_net -I../lib/librte_net \
> -Ilib/librte_mbuf -I../lib/librte_mbuf -Ilib/librte_mempool -I../lib/librte_mempool \
> -Ilib/librte_ring -I../lib/librte_ring -Ilib/librte_meter -I../lib/librte_meter -Idrivers/bus/pci \
> -I../drivers/bus/pci -I../drivers/bus/pci/linux -Ilib/librte_pci -I../lib/librte_pci \
> -Idrivers/bus/vdev -I../drivers/bus/vdev -Ilib/librte_hash -I../lib/librte_hash \
> -Ilib/librte_rcu -I../lib/librte_rcu -Idrivers/common/mlx5 -I../drivers/common/mlx5 \
> -Idrivers/common/mlx5/linux -I../drivers/common/mlx5/linux -I/usr//usr/include \
> -I/usr/include/libnl3 -pipe -D_FILE_OFFSET_BITS=64 -Wall -Winvalid-pch -g \
> -include rte_config.h -Wextra -Wcast-qual -Wdeprecated -Wformat \
> -Wformat-nonliteral -Wformat-security -Wmissing-declarations -Wmissing-prototypes \
> -Wnested-externs -Wold-style-definition -Wpointer-arith -Wsign-compare -Wstrict-prototypes \
> -Wundef -Wwrite-strings -Wno-missing-field-initializers -D_GNU_SOURCE -fPIC \
> -march=native -DALLOW_EXPERIMENTAL_API -DALLOW_INTERNAL_API -std=c11 \
> -Wno-strict-prototypes -D_BSD_SOURCE -D_DEFAULT_SOURCE -D_XOPEN_SOURCE=600 \
> -pedantic -DPEDANTIC -MD -MQ drivers/libtmp_rte_net_mlx5.a.p/net_mlx5_mlx5_flow_age.c.o \
> -MF drivers/libtmp_rte_net_mlx5.a.p/net_mlx5_mlx5_flow_age.c.o.d \
> -o drivers/libtmp_rte_net_mlx5.a.p/net_mlx5_mlx5_flow_age.c.o -c ../drivers/net/mlx5/mlx5_flow_age.c
>
> > -----Original Message-----
> > From: Thomas Monjalon <thomas@monjalon.net>
> > Sent: Sunday, April 18, 2021 21:12
> > To: Gregory Etelson <getelson@nvidia.com>
> > Cc: Ori Kam <orika@nvidia.com>; ajit.khaparde@broadcom.com;
> > andrew.rybchenko@oktetlabs.ru; dev@dpdk.org; ferruh.yigit@intel.com;
> > jerinj@marvell.com; jerinjacobk@gmail.com; olivier.matz@6wind.com;
> > Slava Ovsiienko <viacheslavo@nvidia.com>; Gregory Etelson
> > <getelson@nvidia.com>; Matan Azrad <matan@nvidia.com>; Raslan
> > Darawsheh <rasland@nvidia.com>
> > Subject: Re: [PATCH v6 1/2] ethdev: add packet integrity checks
> >
> > External email: Use caution opening links or attachments
> >
> >
> > 18/04/2021 17:51, Gregory Etelson:
> > > +__extension__
> >
> > That still doesn't make sense, as in v5.
> > The things which require a macro are anonymous union, anonymous struct
> > and some bit fields with special sizes.
> >
> > > +struct rte_flow_item_integrity {
> > > + /**< Tunnel encapsulation level the item should apply to.
> > > + * @see rte_flow_action_rss
> > > + */
> > > + uint32_t level;
> >
> > Should have RTE_STD_C11 here.
> >
> > > + union {
> >
> > Should have RTE_STD_C11 here.
> >
> > > + struct {
> > > + /**< The packet is valid after passing all HW checks. */
> > > + uint64_t packet_ok:1;
> > > + /**< L2 layer is valid after passing all HW checks. */
> > > + uint64_t l2_ok:1;
> > > + /**< L3 layer is valid after passing all HW checks. */
> > > + uint64_t l3_ok:1;
> > > + /**< L4 layer is valid after passing all HW checks. */
> > > + uint64_t l4_ok:1;
> > > + /**< L2 layer CRC is valid. */
> > > + uint64_t l2_crc_ok:1;
> > > + /**< IPv4 layer checksum is valid. */
> > > + uint64_t ipv4_csum_ok:1;
> > > + /**< L4 layer checksum is valid. */
> > > + uint64_t l4_csum_ok:1;
> > > + /**< The l3 length is smaller than the frame length. */
> > > + uint64_t l3_len_ok:1;
> > > + uint64_t reserved:56;
> >
> > The reserved space looks useless since it is in an union.
> >
> > > + };
> >
> > I'm not sure about the 64-bit bitfields.
> > Maybe that's why you need __extension__.
> > I feel 32 bits are enough.
> >
> > > + uint64_t value;
> > > + };
> > > +};
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v7 0/2] add packet integrity checks
2021-04-07 10:32 ` Ori Kam
` (4 preceding siblings ...)
2021-04-18 15:51 ` [dpdk-dev] [PATCH v6 " Gregory Etelson
@ 2021-04-19 8:29 ` Gregory Etelson
2021-04-19 8:29 ` [dpdk-dev] [PATCH v7 1/2] ethdev: " Gregory Etelson
` (2 more replies)
2021-04-19 12:44 ` [dpdk-dev] [PATCH v8 " Gregory Etelson
6 siblings, 3 replies; 68+ messages in thread
From: Gregory Etelson @ 2021-04-19 8:29 UTC (permalink / raw)
To: orika
Cc: ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
jerinjacobk, olivier.matz, thomas, viacheslavo, getelson, matan,
rasland
v7: move the __extension__ macro in rte_flow_item_integrity.
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Ori Kam (2):
ethdev: add packet integrity checks
app/testpmd: add support for integrity item
app/test-pmd/cmdline_flow.c | 39 ++++++++++++++++
doc/guides/prog_guide/rte_flow.rst | 20 +++++++++
doc/guides/rel_notes/release_21_05.rst | 5 +++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 28 ++++++++++++
lib/librte_ethdev/rte_flow.h | 49 +++++++++++++++++++++
5 files changed, 141 insertions(+)
--
2.25.1
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v7 1/2] ethdev: add packet integrity checks
2021-04-19 8:29 ` [dpdk-dev] [PATCH v7 0/2] add packet integrity checks Gregory Etelson
@ 2021-04-19 8:29 ` Gregory Etelson
2021-04-19 8:47 ` Thomas Monjalon
2021-04-19 8:29 ` [dpdk-dev] [PATCH v7 2/2] app/testpmd: add support for integrity item Gregory Etelson
2021-04-19 11:20 ` [dpdk-dev] [PATCH v7 0/2] add packet integrity checks Ferruh Yigit
2 siblings, 1 reply; 68+ messages in thread
From: Gregory Etelson @ 2021-04-19 8:29 UTC (permalink / raw)
To: orika
Cc: ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
jerinjacobk, olivier.matz, thomas, viacheslavo, getelson, matan,
rasland
From: Ori Kam <orika@nvidia.com>
Currently, DPDK application can offload the checksum check,
and report it in the mbuf.
However, as more and more applications are offloading some or all
logic and action to the HW, there is a need to check the packet
integrity so the right decision can be taken.
The application logic can be positive meaning if the packet is
valid jump / do actions, or negative if packet is not valid
jump to SW / do actions (like drop) and add default flow
(match all in low priority) that will direct the miss packet
to the miss path.
Since currently rte_flow works in positive way the assumption is
that the positive way will be the common way in this case also.
When thinking what is the best API to implement such feature,
we need to consider the following (in no specific order):
1. API breakage.
2. Simplicity.
3. Performance.
4. HW capabilities.
5. rte_flow limitation.
6. Flexibility.
First option: Add integrity flags to each of the items.
For example add checksum_ok to ipv4 item.
Pros:
1. No new rte_flow item.
2. Simple in the way that on each item the app can see
what checks are available.
Cons:
1. API breakage.
2. increase number of flows, since app can't add global rule and
must have dedicated flow for each of the flow combinations, for example
matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
result in 5 flows.
Second option: dedicated item
Pros:
1. No API breakage, and there will be no for some time due to having
extra space. (by using bits)
2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
IPv6.
3. Simplicity application can just look at one place to see all possible
checks.
4. Allow future support for more tests.
Cons:
1. New item, that holds number of fields from different items.
For starter the following bits are suggested:
1. packet_ok - means that all HW checks depending on packet layer have
passed. This may mean that in some HW such flow should be splited to
number of flows or fail.
2. l2_ok - all check for layer 2 have passed.
3. l3_ok - all check for layer 3 have passed. If packet doesn't have
l3 layer this check should fail.
4. l4_ok - all check for layer 4 have passed. If packet doesn't
have l4 layer this check should fail.
5. l2_crc_ok - the layer 2 crc is O.K.
6. ipv4_csum_ok - IPv4 checksum is O.K. it is possible that the
IPv4 checksum will be O.K. but the l3_ok will be 0. it is not
possible that checksum will be 0 and the l3_ok will be 1.
7. l4_csum_ok - layer 4 checksum is O.K.
8. l3_len_OK - check that the reported layer 3 len is smaller than the
frame len.
Example of usage:
1. check packets from all possible layers for integrity.
flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
2. Check only packet with layer 4 (UDP / TCP)
flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
Signed-off-by: Ori Kam <orika@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
doc/guides/prog_guide/rte_flow.rst | 20 +++++++++++
doc/guides/rel_notes/release_21_05.rst | 5 +++
lib/librte_ethdev/rte_flow.h | 49 ++++++++++++++++++++++++++
3 files changed, 74 insertions(+)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index e1b93ecedf..1dd2301a07 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1398,6 +1398,26 @@ Matches a eCPRI header.
- ``hdr``: eCPRI header definition (``rte_ecpri.h``).
- Default ``mask`` matches nothing, for all eCPRI messages.
+Item: ``PACKET_INTEGRITY_CHECKS``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Matches packet integrity.
+For some devices application needs to enable integration checks in HW
+before using this item.
+
+- ``level``: the encapsulation level that should be checked. level 0 means the
+ default PMD mode (Can be inner most / outermost). value of 1 means outermost
+ and higher value means inner header. See also RSS level.
+- ``packet_ok``: All HW packet integrity checks have passed based on the max
+ layer of the packet.
+- ``l2_ok``: all layer 2 HW integrity checks passed.
+- ``l3_ok``: all layer 3 HW integrity checks passed.
+- ``l4_ok``: all layer 4 HW integrity checks passed.
+- ``l2_crc_ok``: layer 2 crc check passed.
+- ``ipv4_csum_ok``: ipv4 checksum check passed.
+- ``l4_csum_ok``: layer 4 checksum check passed.
+- ``l3_len_ok``: the layer 3 len is smaller than the frame len.
+
Actions
~~~~~~~
diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
index 82ee71152f..b1c90f4d9f 100644
--- a/doc/guides/rel_notes/release_21_05.rst
+++ b/doc/guides/rel_notes/release_21_05.rst
@@ -55,6 +55,11 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Added packet integrity match to flow rules.**
+
+ * Added ``RTE_FLOW_ITEM_TYPE_INTEGRITY`` flow item.
+ * Added ``rte_flow_item_integrity`` data structure.
+
* **Added support for Marvell CN10K SoC drivers.**
Added Marvell CN10K SoC support. Marvell CN10K SoC are based on Octeon 10
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index 203c4cde9a..2450b30fc1 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -551,6 +551,17 @@ enum rte_flow_item_type {
* See struct rte_flow_item_geneve_opt
*/
RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
+
+ /**
+ * [META]
+ *
+ * Matches on packet integrity.
+ * For some devices application needs to enable integration checks in HW
+ * before using this item.
+ *
+ * See struct rte_flow_item_integrity.
+ */
+ RTE_FLOW_ITEM_TYPE_INTEGRITY,
};
/**
@@ -1685,6 +1696,44 @@ rte_flow_item_geneve_opt_mask = {
};
#endif
+struct rte_flow_item_integrity {
+ /**< Tunnel encapsulation level the item should apply to.
+ * @see rte_flow_action_rss
+ */
+ uint32_t level;
+ union {
+ __extension__
+ struct {
+ /**< The packet is valid after passing all HW checks. */
+ uint64_t packet_ok:1;
+ /**< L2 layer is valid after passing all HW checks. */
+ uint64_t l2_ok:1;
+ /**< L3 layer is valid after passing all HW checks. */
+ uint64_t l3_ok:1;
+ /**< L4 layer is valid after passing all HW checks. */
+ uint64_t l4_ok:1;
+ /**< L2 layer CRC is valid. */
+ uint64_t l2_crc_ok:1;
+ /**< IPv4 layer checksum is valid. */
+ uint64_t ipv4_csum_ok:1;
+ /**< L4 layer checksum is valid. */
+ uint64_t l4_csum_ok:1;
+ /**< The l3 length is smaller than the frame length. */
+ uint64_t l3_len_ok:1;
+ uint64_t reserved:56;
+ };
+ uint64_t value;
+ };
+};
+
+#ifndef __cplusplus
+static const struct rte_flow_item_integrity
+rte_flow_item_integrity_mask = {
+ .level = 0,
+ .value = 0,
+};
+#endif
+
/**
* Matching pattern item definition.
*
--
2.25.1
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v7 2/2] app/testpmd: add support for integrity item
2021-04-19 8:29 ` [dpdk-dev] [PATCH v7 0/2] add packet integrity checks Gregory Etelson
2021-04-19 8:29 ` [dpdk-dev] [PATCH v7 1/2] ethdev: " Gregory Etelson
@ 2021-04-19 8:29 ` Gregory Etelson
2021-04-19 11:20 ` [dpdk-dev] [PATCH v7 0/2] add packet integrity checks Ferruh Yigit
2 siblings, 0 replies; 68+ messages in thread
From: Gregory Etelson @ 2021-04-19 8:29 UTC (permalink / raw)
To: orika
Cc: ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
jerinjacobk, olivier.matz, thomas, viacheslavo, getelson, matan,
rasland, Xiaoyun Li
From: Ori Kam <orika@nvidia.com>
The integrity item allows the application to match
on the integrity of a packet.
use example:
match that packet integrity checks are ok. The checks depend on
packet layers. For example ICMP packet will not check L4 level.
flow create 0 ingress pattern integrity value mask 0x01 value spec 0x01
match that L4 packet is ok - check L2 & L3 & L4 layers:
flow create 0 ingress pattern integrity value mask 0xfe value spec 0xfe
Signed-off-by: Ori Kam <orika@nvidia.com>
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
app/test-pmd/cmdline_flow.c | 39 +++++++++++++++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 28 +++++++++++++++
2 files changed, 67 insertions(+)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 0127d9e7d6..a1b0fa4a32 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -293,6 +293,9 @@ enum index {
ITEM_GENEVE_OPT_TYPE,
ITEM_GENEVE_OPT_LENGTH,
ITEM_GENEVE_OPT_DATA,
+ ITEM_INTEGRITY,
+ ITEM_INTEGRITY_LEVEL,
+ ITEM_INTEGRITY_VALUE,
/* Validate/create actions. */
ACTIONS,
@@ -968,6 +971,7 @@ static const enum index next_item[] = {
ITEM_PFCP,
ITEM_ECPRI,
ITEM_GENEVE_OPT,
+ ITEM_INTEGRITY,
END_SET,
ZERO,
};
@@ -1319,6 +1323,19 @@ static const enum index item_geneve_opt[] = {
ZERO,
};
+static const enum index item_integrity[] = {
+ ITEM_INTEGRITY_LEVEL,
+ ITEM_INTEGRITY_VALUE,
+ ZERO,
+};
+
+static const enum index item_integrity_lv[] = {
+ ITEM_INTEGRITY_LEVEL,
+ ITEM_INTEGRITY_VALUE,
+ ITEM_NEXT,
+ ZERO,
+};
+
static const enum index next_action[] = {
ACTION_END,
ACTION_VOID,
@@ -3400,6 +3417,28 @@ static const struct token token_list[] = {
(sizeof(struct rte_flow_item_geneve_opt),
ITEM_GENEVE_OPT_DATA_SIZE)),
},
+ [ITEM_INTEGRITY] = {
+ .name = "integrity",
+ .help = "match packet integrity",
+ .priv = PRIV_ITEM(INTEGRITY,
+ sizeof(struct rte_flow_item_integrity)),
+ .next = NEXT(item_integrity),
+ .call = parse_vc,
+ },
+ [ITEM_INTEGRITY_LEVEL] = {
+ .name = "level",
+ .help = "integrity level",
+ .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity, level)),
+ },
+ [ITEM_INTEGRITY_VALUE] = {
+ .name = "value",
+ .help = "integrity value",
+ .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity, value)),
+ },
/* Validate/create actions. */
[ACTIONS] = {
.name = "actions",
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index e3bfed566d..feaae9350b 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3791,6 +3791,13 @@ This section lists supported pattern items and their attributes, if any.
- ``s_field {unsigned}``: S field.
- ``seid {unsigned}``: session endpoint identifier.
+- ``integrity``: match packet integrity.
+
+ - ``level {unsigned}``: Packet encapsulation level the item should
+ apply to. See rte_flow_action_rss for details.
+ - ``value {unsigned}``: A bitmask that specify what packet elements
+ must be matched for integrity.
+
Actions list
^^^^^^^^^^^^
@@ -4925,6 +4932,27 @@ NVGRE encapsulation header and sent to port id 0.
testpmd> flow create 0 ingress transfer pattern eth / end actions
sample ratio 1 index 0 / port_id id 2 / end
+Sample integrity rules
+~~~~~~~~~~~~~~~~~~~~~~
+
+Integrity rules can be created by the following commands:
+
+Integrity rule that forwards valid TCP packets to group 1.
+TCP packet integrity is matched with the ``l4_ok`` bit 3.
+
+::
+
+ testpmd> flow create 0 ingress
+ pattern eth / ipv4 / tcp / integrity value mask 8 value spec 8 / end
+ actions jump group 1 / end
+
+Integrity rule that forwards invalid packets to application.
+General packet integrity is matched with the ``packet_ok`` bit 0.
+
+::
+
+ testpmd> flow create 0 ingress pattern integrity value mask 1 value spec 0 / end actions queue index 0 / end
+
BPF Functions
--------------
--
2.25.1
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v7 1/2] ethdev: add packet integrity checks
2021-04-19 8:29 ` [dpdk-dev] [PATCH v7 1/2] ethdev: " Gregory Etelson
@ 2021-04-19 8:47 ` Thomas Monjalon
0 siblings, 0 replies; 68+ messages in thread
From: Thomas Monjalon @ 2021-04-19 8:47 UTC (permalink / raw)
To: Gregory Etelson
Cc: orika, ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit,
jerinj, jerinjacobk, olivier.matz, viacheslavo, matan, rasland
19/04/2021 10:29, Gregory Etelson:
> +Item: ``PACKET_INTEGRITY_CHECKS``
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +Matches packet integrity.
> +For some devices application needs to enable integration checks in HW
> +before using this item.
> +
> +- ``level``: the encapsulation level that should be checked. level 0 means the
> + default PMD mode (Can be inner most / outermost). value of 1 means outermost
> + and higher value means inner header. See also RSS level.
Would be nicer to make sub-list for levels.
Please start sentences with a capital letter.
> +- ``packet_ok``: All HW packet integrity checks have passed based on the max
> + layer of the packet.
"based on the max layer" is not clear. Do you mean all layers?
> +- ``l2_ok``: all layer 2 HW integrity checks passed.
> +- ``l3_ok``: all layer 3 HW integrity checks passed.
> +- ``l4_ok``: all layer 4 HW integrity checks passed.
> +- ``l2_crc_ok``: layer 2 crc check passed.
s/crc/CRC/
> +- ``ipv4_csum_ok``: ipv4 checksum check passed.
s/ipv4/IPv4/
> +- ``l4_csum_ok``: layer 4 checksum check passed.
> +- ``l3_len_ok``: the layer 3 len is smaller than the frame len.
s/len/length/
> --- a/doc/guides/rel_notes/release_21_05.rst
> +++ b/doc/guides/rel_notes/release_21_05.rst
> +* **Added packet integrity match to flow rules.**
> +
> + * Added ``RTE_FLOW_ITEM_TYPE_INTEGRITY`` flow item.
> + * Added ``rte_flow_item_integrity`` data structure.
It should be moved with other ethdev changes.
> +
> * **Added support for Marvell CN10K SoC drivers.**
>
> Added Marvell CN10K SoC support. Marvell CN10K SoC are based on Octeon 10
> --- a/lib/librte_ethdev/rte_flow.h
> +++ b/lib/librte_ethdev/rte_flow.h
> + /**
> + * [META]
> + *
> + * Matches on packet integrity.
> + * For some devices application needs to enable integration checks in HW
> + * before using this item.
> + *
> + * See struct rte_flow_item_integrity.
Better to use @see syntax.
> + */
> + RTE_FLOW_ITEM_TYPE_INTEGRITY,
> };
> +struct rte_flow_item_integrity {
> + /**< Tunnel encapsulation level the item should apply to.
> + * @see rte_flow_action_rss
> + */
> + uint32_t level;
missing RTE_STD_C11 here for anonymous union.
> + union {
> + __extension__
> + struct {
> + /**< The packet is valid after passing all HW checks. */
> + uint64_t packet_ok:1;
> + /**< L2 layer is valid after passing all HW checks. */
> + uint64_t l2_ok:1;
> + /**< L3 layer is valid after passing all HW checks. */
> + uint64_t l3_ok:1;
> + /**< L4 layer is valid after passing all HW checks. */
> + uint64_t l4_ok:1;
> + /**< L2 layer CRC is valid. */
> + uint64_t l2_crc_ok:1;
> + /**< IPv4 layer checksum is valid. */
> + uint64_t ipv4_csum_ok:1;
> + /**< L4 layer checksum is valid. */
> + uint64_t l4_csum_ok:1;
> + /**< The l3 length is smaller than the frame length. */
> + uint64_t l3_len_ok:1;
> + uint64_t reserved:56;
> + };
> + uint64_t value;
> + };
> +};
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v7 0/2] add packet integrity checks
2021-04-19 8:29 ` [dpdk-dev] [PATCH v7 0/2] add packet integrity checks Gregory Etelson
2021-04-19 8:29 ` [dpdk-dev] [PATCH v7 1/2] ethdev: " Gregory Etelson
2021-04-19 8:29 ` [dpdk-dev] [PATCH v7 2/2] app/testpmd: add support for integrity item Gregory Etelson
@ 2021-04-19 11:20 ` Ferruh Yigit
2021-04-19 12:08 ` Gregory Etelson
2 siblings, 1 reply; 68+ messages in thread
From: Ferruh Yigit @ 2021-04-19 11:20 UTC (permalink / raw)
To: Gregory Etelson, orika
Cc: ajit.khaparde, andrew.rybchenko, dev, jerinj, jerinjacobk,
olivier.matz, thomas, viacheslavo, matan, rasland
On 4/19/2021 9:29 AM, Gregory Etelson wrote:
> v7: move the __extension__ macro in rte_flow_item_integrity.
>
> Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
>
> Ori Kam (2):
> ethdev: add packet integrity checks
> app/testpmd: add support for integrity item
>
Can you please check the build error on CI:
http://mails.dpdk.org/archives/test-report/2021-April/189118.html
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v7 0/2] add packet integrity checks
2021-04-19 11:20 ` [dpdk-dev] [PATCH v7 0/2] add packet integrity checks Ferruh Yigit
@ 2021-04-19 12:08 ` Gregory Etelson
0 siblings, 0 replies; 68+ messages in thread
From: Gregory Etelson @ 2021-04-19 12:08 UTC (permalink / raw)
To: Ferruh Yigit, Ori Kam
Cc: ajit.khaparde, andrew.rybchenko, dev, jerinj, jerinjacobk,
olivier.matz, NBU-Contact-Thomas Monjalon, Slava Ovsiienko,
Matan Azrad, Raslan Darawsheh
> On 4/19/2021 9:29 AM, Gregory Etelson wrote:
> > v7: move the __extension__ macro in rte_flow_item_integrity.
> >
> > Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
> >
> > Ori Kam (2):
> > ethdev: add packet integrity checks
> > app/testpmd: add support for integrity item
> >
>
> Can you please check the build error on CI:
> http://mails.dpdk.org/archives/test-report/2021-April/189118.html
Hello Ferruh,
Thank you for the update.
I'm testing a new patch and will post it.
Regards,
Gregory
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v8 0/2] add packet integrity checks
2021-04-07 10:32 ` Ori Kam
` (5 preceding siblings ...)
2021-04-19 8:29 ` [dpdk-dev] [PATCH v7 0/2] add packet integrity checks Gregory Etelson
@ 2021-04-19 12:44 ` Gregory Etelson
2021-04-19 12:44 ` [dpdk-dev] [PATCH v8 1/2] ethdev: " Gregory Etelson
2021-04-19 12:44 ` [dpdk-dev] [PATCH v8 2/2] app/testpmd: add support for integrity item Gregory Etelson
6 siblings, 2 replies; 68+ messages in thread
From: Gregory Etelson @ 2021-04-19 12:44 UTC (permalink / raw)
To: orika
Cc: ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
jerinjacobk, olivier.matz, thomas, viacheslavo, getelson, matan,
rasland
v8:
Update documents.
Fix RTE_STD_C11 macro usage in rte_flow_item_integrity.
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Ori Kam (2):
ethdev: add packet integrity checks
app/testpmd: add support for integrity item
app/test-pmd/cmdline_flow.c | 39 ++++++++++++++++
doc/guides/prog_guide/rte_flow.rst | 22 +++++++++
doc/guides/rel_notes/release_21_05.rst | 5 +++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 28 ++++++++++++
lib/librte_ethdev/rte_flow.h | 50 +++++++++++++++++++++
5 files changed, 144 insertions(+)
--
2.25.1
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v8 1/2] ethdev: add packet integrity checks
2021-04-19 12:44 ` [dpdk-dev] [PATCH v8 " Gregory Etelson
@ 2021-04-19 12:44 ` Gregory Etelson
2021-04-19 14:09 ` Ajit Khaparde
2021-04-19 12:44 ` [dpdk-dev] [PATCH v8 2/2] app/testpmd: add support for integrity item Gregory Etelson
1 sibling, 1 reply; 68+ messages in thread
From: Gregory Etelson @ 2021-04-19 12:44 UTC (permalink / raw)
To: orika
Cc: ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
jerinjacobk, olivier.matz, thomas, viacheslavo, getelson, matan,
rasland
From: Ori Kam <orika@nvidia.com>
Currently, DPDK application can offload the checksum check,
and report it in the mbuf.
However, as more and more applications are offloading some or all
logic and action to the HW, there is a need to check the packet
integrity so the right decision can be taken.
The application logic can be positive meaning if the packet is
valid jump / do actions, or negative if packet is not valid
jump to SW / do actions (like drop) and add default flow
(match all in low priority) that will direct the miss packet
to the miss path.
Since currently rte_flow works in positive way the assumption is
that the positive way will be the common way in this case also.
When thinking what is the best API to implement such feature,
we need to consider the following (in no specific order):
1. API breakage.
2. Simplicity.
3. Performance.
4. HW capabilities.
5. rte_flow limitation.
6. Flexibility.
First option: Add integrity flags to each of the items.
For example add checksum_ok to ipv4 item.
Pros:
1. No new rte_flow item.
2. Simple in the way that on each item the app can see
what checks are available.
Cons:
1. API breakage.
2. increase number of flows, since app can't add global rule and
must have dedicated flow for each of the flow combinations, for example
matching on icmp traffic or UDP/TCP traffic with IPv4 / IPv6 will
result in 5 flows.
Second option: dedicated item
Pros:
1. No API breakage, and there will be no for some time due to having
extra space. (by using bits)
2. Just one flow to support the icmp or UDP/TCP traffic with IPv4 /
IPv6.
3. Simplicity application can just look at one place to see all possible
checks.
4. Allow future support for more tests.
Cons:
1. New item, that holds number of fields from different items.
For starter the following bits are suggested:
1. packet_ok - means that all HW checks depending on packet layer have
passed. This may mean that in some HW such flow should be splited to
number of flows or fail.
2. l2_ok - all check for layer 2 have passed.
3. l3_ok - all check for layer 3 have passed. If packet doesn't have
l3 layer this check should fail.
4. l4_ok - all check for layer 4 have passed. If packet doesn't
have l4 layer this check should fail.
5. l2_crc_ok - the layer 2 crc is O.K.
6. ipv4_csum_ok - IPv4 checksum is O.K. it is possible that the
IPv4 checksum will be O.K. but the l3_ok will be 0. it is not
possible that checksum will be 0 and the l3_ok will be 1.
7. l4_csum_ok - layer 4 checksum is O.K.
8. l3_len_OK - check that the reported layer 3 len is smaller than the
frame len.
Example of usage:
1. check packets from all possible layers for integrity.
flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
2. Check only packet with layer 4 (UDP / TCP)
flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
Signed-off-by: Ori Kam <orika@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
doc/guides/prog_guide/rte_flow.rst | 22 ++++++++++++
doc/guides/rel_notes/release_21_05.rst | 5 +++
lib/librte_ethdev/rte_flow.h | 50 ++++++++++++++++++++++++++
3 files changed, 77 insertions(+)
diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index e1b93ecedf..04b598390d 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -1398,6 +1398,28 @@ Matches a eCPRI header.
- ``hdr``: eCPRI header definition (``rte_ecpri.h``).
- Default ``mask`` matches nothing, for all eCPRI messages.
+Item: ``PACKET_INTEGRITY_CHECKS``
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Matches packet integrity.
+For some devices application needs to enable integration checks in HW
+before using this item.
+
+- ``level``: the encapsulation level that should be checked:
+ - ``level == 0`` means the default PMD mode (can be inner most / outermost).
+ - ``level == 1`` means outermost header.
+ - ``level > 1`` means inner header. See also RSS level.
+- ``packet_ok``: All HW packet integrity checks have passed based on the
+ topmost network layer. For example, for ICMP packet the topmost network
+ layer is L3 and for TCP or UDP packet the topmost network layer is L4.
+- ``l2_ok``: all layer 2 HW integrity checks passed.
+- ``l3_ok``: all layer 3 HW integrity checks passed.
+- ``l4_ok``: all layer 4 HW integrity checks passed.
+- ``l2_crc_ok``: layer 2 CRC check passed.
+- ``ipv4_csum_ok``: IPv4 checksum check passed.
+- ``l4_csum_ok``: layer 4 checksum check passed.
+- ``l3_len_ok``: the layer 3 length is smaller than the frame length.
+
Actions
~~~~~~~
diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
index 82ee71152f..cedb6fc7aa 100644
--- a/doc/guides/rel_notes/release_21_05.rst
+++ b/doc/guides/rel_notes/release_21_05.rst
@@ -84,6 +84,11 @@ New Features
So that it can meter traffic by packet per second.
Packet_mode must be 0 when it is bytes mode.
+* **Added packet integrity match to flow rules.**
+
+ * Added ``RTE_FLOW_ITEM_TYPE_INTEGRITY`` flow item.
+ * Added ``rte_flow_item_integrity`` data structure.
+
* **Updated Arkville PMD driver.**
Updated Arkville net driver with new features and improvements, including:
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index 203c4cde9a..5ccf5ba7ba 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -551,6 +551,17 @@ enum rte_flow_item_type {
* See struct rte_flow_item_geneve_opt
*/
RTE_FLOW_ITEM_TYPE_GENEVE_OPT,
+
+ /**
+ * [META]
+ *
+ * Matches on packet integrity.
+ * For some devices application needs to enable integration checks in HW
+ * before using this item.
+ *
+ * @see struct rte_flow_item_integrity.
+ */
+ RTE_FLOW_ITEM_TYPE_INTEGRITY,
};
/**
@@ -1685,6 +1696,45 @@ rte_flow_item_geneve_opt_mask = {
};
#endif
+struct rte_flow_item_integrity {
+ /**< Tunnel encapsulation level the item should apply to.
+ * @see rte_flow_action_rss
+ */
+ uint32_t level;
+ RTE_STD_C11
+ union {
+ __extension__
+ struct {
+ /**< The packet is valid after passing all HW checks. */
+ uint64_t packet_ok:1;
+ /**< L2 layer is valid after passing all HW checks. */
+ uint64_t l2_ok:1;
+ /**< L3 layer is valid after passing all HW checks. */
+ uint64_t l3_ok:1;
+ /**< L4 layer is valid after passing all HW checks. */
+ uint64_t l4_ok:1;
+ /**< L2 layer CRC is valid. */
+ uint64_t l2_crc_ok:1;
+ /**< IPv4 layer checksum is valid. */
+ uint64_t ipv4_csum_ok:1;
+ /**< L4 layer checksum is valid. */
+ uint64_t l4_csum_ok:1;
+ /**< The l3 length is smaller than the frame length. */
+ uint64_t l3_len_ok:1;
+ uint64_t reserved:56;
+ };
+ uint64_t value;
+ };
+};
+
+#ifndef __cplusplus
+static const struct rte_flow_item_integrity
+rte_flow_item_integrity_mask = {
+ .level = 0,
+ .value = 0,
+};
+#endif
+
/**
* Matching pattern item definition.
*
--
2.25.1
^ permalink raw reply [flat|nested] 68+ messages in thread
* [dpdk-dev] [PATCH v8 2/2] app/testpmd: add support for integrity item
2021-04-19 12:44 ` [dpdk-dev] [PATCH v8 " Gregory Etelson
2021-04-19 12:44 ` [dpdk-dev] [PATCH v8 1/2] ethdev: " Gregory Etelson
@ 2021-04-19 12:44 ` Gregory Etelson
2021-04-19 14:09 ` Ajit Khaparde
1 sibling, 1 reply; 68+ messages in thread
From: Gregory Etelson @ 2021-04-19 12:44 UTC (permalink / raw)
To: orika
Cc: ajit.khaparde, andrew.rybchenko, dev, ferruh.yigit, jerinj,
jerinjacobk, olivier.matz, thomas, viacheslavo, getelson, matan,
rasland, Xiaoyun Li
From: Ori Kam <orika@nvidia.com>
The integrity item allows the application to match
on the integrity of a packet.
use example:
match that packet integrity checks are ok. The checks depend on
packet layers. For example ICMP packet will not check L4 level.
flow create 0 ingress pattern integrity value mask 0x01 value spec 0x01
match that L4 packet is ok - check L2 & L3 & L4 layers:
flow create 0 ingress pattern integrity value mask 0xfe value spec 0xfe
Signed-off-by: Ori Kam <orika@nvidia.com>
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
---
app/test-pmd/cmdline_flow.c | 39 +++++++++++++++++++++
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 28 +++++++++++++++
2 files changed, 67 insertions(+)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 0127d9e7d6..a1b0fa4a32 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -293,6 +293,9 @@ enum index {
ITEM_GENEVE_OPT_TYPE,
ITEM_GENEVE_OPT_LENGTH,
ITEM_GENEVE_OPT_DATA,
+ ITEM_INTEGRITY,
+ ITEM_INTEGRITY_LEVEL,
+ ITEM_INTEGRITY_VALUE,
/* Validate/create actions. */
ACTIONS,
@@ -968,6 +971,7 @@ static const enum index next_item[] = {
ITEM_PFCP,
ITEM_ECPRI,
ITEM_GENEVE_OPT,
+ ITEM_INTEGRITY,
END_SET,
ZERO,
};
@@ -1319,6 +1323,19 @@ static const enum index item_geneve_opt[] = {
ZERO,
};
+static const enum index item_integrity[] = {
+ ITEM_INTEGRITY_LEVEL,
+ ITEM_INTEGRITY_VALUE,
+ ZERO,
+};
+
+static const enum index item_integrity_lv[] = {
+ ITEM_INTEGRITY_LEVEL,
+ ITEM_INTEGRITY_VALUE,
+ ITEM_NEXT,
+ ZERO,
+};
+
static const enum index next_action[] = {
ACTION_END,
ACTION_VOID,
@@ -3400,6 +3417,28 @@ static const struct token token_list[] = {
(sizeof(struct rte_flow_item_geneve_opt),
ITEM_GENEVE_OPT_DATA_SIZE)),
},
+ [ITEM_INTEGRITY] = {
+ .name = "integrity",
+ .help = "match packet integrity",
+ .priv = PRIV_ITEM(INTEGRITY,
+ sizeof(struct rte_flow_item_integrity)),
+ .next = NEXT(item_integrity),
+ .call = parse_vc,
+ },
+ [ITEM_INTEGRITY_LEVEL] = {
+ .name = "level",
+ .help = "integrity level",
+ .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity, level)),
+ },
+ [ITEM_INTEGRITY_VALUE] = {
+ .name = "value",
+ .help = "integrity value",
+ .next = NEXT(item_integrity_lv, NEXT_ENTRY(UNSIGNED),
+ item_param),
+ .args = ARGS(ARGS_ENTRY(struct rte_flow_item_integrity, value)),
+ },
/* Validate/create actions. */
[ACTIONS] = {
.name = "actions",
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index e3bfed566d..feaae9350b 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3791,6 +3791,13 @@ This section lists supported pattern items and their attributes, if any.
- ``s_field {unsigned}``: S field.
- ``seid {unsigned}``: session endpoint identifier.
+- ``integrity``: match packet integrity.
+
+ - ``level {unsigned}``: Packet encapsulation level the item should
+ apply to. See rte_flow_action_rss for details.
+ - ``value {unsigned}``: A bitmask that specify what packet elements
+ must be matched for integrity.
+
Actions list
^^^^^^^^^^^^
@@ -4925,6 +4932,27 @@ NVGRE encapsulation header and sent to port id 0.
testpmd> flow create 0 ingress transfer pattern eth / end actions
sample ratio 1 index 0 / port_id id 2 / end
+Sample integrity rules
+~~~~~~~~~~~~~~~~~~~~~~
+
+Integrity rules can be created by the following commands:
+
+Integrity rule that forwards valid TCP packets to group 1.
+TCP packet integrity is matched with the ``l4_ok`` bit 3.
+
+::
+
+ testpmd> flow create 0 ingress
+ pattern eth / ipv4 / tcp / integrity value mask 8 value spec 8 / end
+ actions jump group 1 / end
+
+Integrity rule that forwards invalid packets to application.
+General packet integrity is matched with the ``packet_ok`` bit 0.
+
+::
+
+ testpmd> flow create 0 ingress pattern integrity value mask 1 value spec 0 / end actions queue index 0 / end
+
BPF Functions
--------------
--
2.25.1
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/2] ethdev: add packet integrity checks
2021-04-19 12:44 ` [dpdk-dev] [PATCH v8 1/2] ethdev: " Gregory Etelson
@ 2021-04-19 14:09 ` Ajit Khaparde
2021-04-19 16:34 ` Thomas Monjalon
0 siblings, 1 reply; 68+ messages in thread
From: Ajit Khaparde @ 2021-04-19 14:09 UTC (permalink / raw)
To: Gregory Etelson
Cc: Ori Kam, Andrew Rybchenko, dpdk-dev, Ferruh Yigit,
Jerin Jacob Kollanukkaran, Jerin Jacob, Olivier Matz,
Thomas Monjalon, Slava Ovsiienko, Matan Azrad, Raslan Darawsheh
[-- Attachment #1: Type: text/plain, Size: 1433 bytes --]
::[snip]::
> Example of usage:
> 1. check packets from all possible layers for integrity.
> flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
>
> 2. Check only packet with layer 4 (UDP / TCP)
> flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
>
> Signed-off-by: Ori Kam <orika@nvidia.com>
> Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> ---
> doc/guides/prog_guide/rte_flow.rst | 22 ++++++++++++
> doc/guides/rel_notes/release_21_05.rst | 5 +++
> lib/librte_ethdev/rte_flow.h | 50 ++++++++++++++++++++++++++
> 3 files changed, 77 insertions(+)
>
> diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
> index e1b93ecedf..04b598390d 100644
> --- a/doc/guides/prog_guide/rte_flow.rst
> +++ b/doc/guides/prog_guide/rte_flow.rst
> @@ -1398,6 +1398,28 @@ Matches a eCPRI header.
> - ``hdr``: eCPRI header definition (``rte_ecpri.h``).
> - Default ``mask`` matches nothing, for all eCPRI messages.
>
> +Item: ``PACKET_INTEGRITY_CHECKS``
> +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> +
> +Matches packet integrity.
> +For some devices application needs to enable integration checks in HW
> +before using this item.
> +
> +- ``level``: the encapsulation level that should be checked:
> + - ``level == 0`` means the default PMD mode (can be inner most / outermost).
s/inner most/ innermost
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v8 2/2] app/testpmd: add support for integrity item
2021-04-19 12:44 ` [dpdk-dev] [PATCH v8 2/2] app/testpmd: add support for integrity item Gregory Etelson
@ 2021-04-19 14:09 ` Ajit Khaparde
0 siblings, 0 replies; 68+ messages in thread
From: Ajit Khaparde @ 2021-04-19 14:09 UTC (permalink / raw)
To: Gregory Etelson
Cc: Ori Kam, Andrew Rybchenko, dpdk-dev, Ferruh Yigit,
Jerin Jacob Kollanukkaran, Jerin Jacob, Olivier Matz,
Thomas Monjalon, Slava Ovsiienko, Matan Azrad, Raslan Darawsheh,
Xiaoyun Li
[-- Attachment #1: Type: text/plain, Size: 760 bytes --]
On Mon, Apr 19, 2021 at 5:45 AM Gregory Etelson <getelson@nvidia.com> wrote:
>
> From: Ori Kam <orika@nvidia.com>
>
> The integrity item allows the application to match
> on the integrity of a packet.
>
> use example:
> match that packet integrity checks are ok. The checks depend on
> packet layers. For example ICMP packet will not check L4 level.
> flow create 0 ingress pattern integrity value mask 0x01 value spec 0x01
> match that L4 packet is ok - check L2 & L3 & L4 layers:
> flow create 0 ingress pattern integrity value mask 0xfe value spec 0xfe
>
> Signed-off-by: Ori Kam <orika@nvidia.com>
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/2] ethdev: add packet integrity checks
2021-04-19 14:09 ` Ajit Khaparde
@ 2021-04-19 16:34 ` Thomas Monjalon
2021-04-19 17:06 ` Ferruh Yigit
0 siblings, 1 reply; 68+ messages in thread
From: Thomas Monjalon @ 2021-04-19 16:34 UTC (permalink / raw)
To: Gregory Etelson
Cc: dev, Ori Kam, Andrew Rybchenko, Ferruh Yigit,
Jerin Jacob Kollanukkaran, Jerin Jacob, Olivier Matz,
Slava Ovsiienko, Matan Azrad, Raslan Darawsheh, Ajit Khaparde
19/04/2021 16:09, Ajit Khaparde:
> ::[snip]::
> > Example of usage:
> > 1. check packets from all possible layers for integrity.
> > flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
> >
> > 2. Check only packet with layer 4 (UDP / TCP)
> > flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
> >
> > Signed-off-by: Ori Kam <orika@nvidia.com>
> > Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
^ permalink raw reply [flat|nested] 68+ messages in thread
* Re: [dpdk-dev] [PATCH v8 1/2] ethdev: add packet integrity checks
2021-04-19 16:34 ` Thomas Monjalon
@ 2021-04-19 17:06 ` Ferruh Yigit
0 siblings, 0 replies; 68+ messages in thread
From: Ferruh Yigit @ 2021-04-19 17:06 UTC (permalink / raw)
To: Thomas Monjalon, Gregory Etelson
Cc: dev, Ori Kam, Andrew Rybchenko, Jerin Jacob Kollanukkaran,
Jerin Jacob, Olivier Matz, Slava Ovsiienko, Matan Azrad,
Raslan Darawsheh, Ajit Khaparde
On 4/19/2021 5:34 PM, Thomas Monjalon wrote:
> 19/04/2021 16:09, Ajit Khaparde:
>> ::[snip]::
>>> Example of usage:
>>> 1. check packets from all possible layers for integrity.
>>> flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
>>>
>>> 2. Check only packet with layer 4 (UDP / TCP)
>>> flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1 l4_ok = 1
>>>
>>> Signed-off-by: Ori Kam <orika@nvidia.com>
>>> Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
>> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
>
>
Applied to dpdk-next-net/main, thanks.
^ permalink raw reply [flat|nested] 68+ messages in thread
end of thread, other threads:[~2021-04-19 17:06 UTC | newest]
Thread overview: 68+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-04-05 18:04 [dpdk-dev] [PATCH] ethdev: add packet integrity checks Ori Kam
2021-04-06 7:39 ` Jerin Jacob
2021-04-07 10:32 ` Ori Kam
2021-04-07 11:01 ` Jerin Jacob
2021-04-07 22:15 ` Ori Kam
2021-04-08 7:44 ` Jerin Jacob
2021-04-11 4:12 ` Ajit Khaparde
2021-04-11 6:03 ` Ori Kam
2021-04-13 15:16 ` [dpdk-dev] [PATCH v3 0/2] " Gregory Etelson
2021-04-13 15:16 ` [dpdk-dev] [PATCH v3 1/2] ethdev: " Gregory Etelson
2021-04-13 15:16 ` [dpdk-dev] [PATCH v3 2/2] app/testpmd: add support for integrity item Gregory Etelson
2021-04-13 17:15 ` Ferruh Yigit
2021-04-14 12:56 ` [dpdk-dev] [PATCH v4 0/2] add packet integrity checks Gregory Etelson
2021-04-14 12:56 ` [dpdk-dev] [PATCH v4 1/2] ethdev: " Gregory Etelson
2021-04-14 13:27 ` Ferruh Yigit
2021-04-14 13:31 ` Ferruh Yigit
2021-04-14 12:57 ` [dpdk-dev] [PATCH v4 2/2] app/testpmd: add support for integrity item Gregory Etelson
2021-04-14 16:09 ` [dpdk-dev] [PATCH v5 0/2] add packet integrity checks Gregory Etelson
2021-04-14 16:09 ` [dpdk-dev] [PATCH v5 1/2] ethdev: " Gregory Etelson
2021-04-14 17:24 ` Ajit Khaparde
2021-04-15 15:10 ` Ori Kam
2021-04-15 15:25 ` Ajit Khaparde
2021-04-15 16:46 ` Thomas Monjalon
2021-04-16 7:43 ` Ori Kam
2021-04-18 8:15 ` Gregory Etelson
2021-04-18 18:00 ` Thomas Monjalon
2021-04-14 16:09 ` [dpdk-dev] [PATCH v5 2/2] app/testpmd: add support for integrity item Gregory Etelson
2021-04-14 16:26 ` [dpdk-dev] [PATCH v5 0/2] add packet integrity checks Ferruh Yigit
2021-04-18 15:51 ` [dpdk-dev] [PATCH v6 " Gregory Etelson
2021-04-18 15:51 ` [dpdk-dev] [PATCH v6 1/2] ethdev: " Gregory Etelson
2021-04-18 18:11 ` Thomas Monjalon
2021-04-18 19:24 ` Gregory Etelson
2021-04-18 21:30 ` Thomas Monjalon
2021-04-18 15:51 ` [dpdk-dev] [PATCH v6 2/2] app/testpmd: add support for integrity item Gregory Etelson
2021-04-19 8:29 ` [dpdk-dev] [PATCH v7 0/2] add packet integrity checks Gregory Etelson
2021-04-19 8:29 ` [dpdk-dev] [PATCH v7 1/2] ethdev: " Gregory Etelson
2021-04-19 8:47 ` Thomas Monjalon
2021-04-19 8:29 ` [dpdk-dev] [PATCH v7 2/2] app/testpmd: add support for integrity item Gregory Etelson
2021-04-19 11:20 ` [dpdk-dev] [PATCH v7 0/2] add packet integrity checks Ferruh Yigit
2021-04-19 12:08 ` Gregory Etelson
2021-04-19 12:44 ` [dpdk-dev] [PATCH v8 " Gregory Etelson
2021-04-19 12:44 ` [dpdk-dev] [PATCH v8 1/2] ethdev: " Gregory Etelson
2021-04-19 14:09 ` Ajit Khaparde
2021-04-19 16:34 ` Thomas Monjalon
2021-04-19 17:06 ` Ferruh Yigit
2021-04-19 12:44 ` [dpdk-dev] [PATCH v8 2/2] app/testpmd: add support for integrity item Gregory Etelson
2021-04-19 14:09 ` Ajit Khaparde
2021-04-08 8:04 ` [dpdk-dev] [PATCH] ethdev: add packet integrity checks Andrew Rybchenko
2021-04-08 11:39 ` Ori Kam
2021-04-09 8:08 ` Andrew Rybchenko
2021-04-11 6:42 ` Ori Kam
2021-04-11 17:30 ` Ori Kam
2021-04-11 17:34 ` [dpdk-dev] [PATCH v2 0/2] " Gregory Etelson
2021-04-11 17:34 ` [dpdk-dev] [PATCH v2 1/2] ethdev: " Gregory Etelson
2021-04-12 17:36 ` Ferruh Yigit
2021-04-12 19:26 ` Ori Kam
2021-04-12 23:31 ` Ferruh Yigit
2021-04-13 7:12 ` Ori Kam
2021-04-13 8:03 ` Ferruh Yigit
2021-04-13 8:18 ` Ori Kam
2021-04-13 8:30 ` Ferruh Yigit
2021-04-13 10:21 ` Ori Kam
2021-04-13 17:28 ` Ferruh Yigit
2021-04-11 17:34 ` [dpdk-dev] [PATCH v2 2/2] app/testpmd: add support for integrity item Gregory Etelson
2021-04-12 17:49 ` Ferruh Yigit
2021-04-13 7:53 ` Ori Kam
2021-04-13 8:14 ` Ferruh Yigit
2021-04-13 11:36 ` Ori Kam
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).