DPDK patches and discussions
* [PATCH] net/mlx5: reject negative integrity item configuration
@ 2022-07-03  8:02 Gregory Etelson
  2022-07-03  8:08 ` [PATCH v2] " Gregory Etelson
  2022-07-04 10:11 ` [PATCH v4] " Gregory Etelson
  0 siblings, 2 replies; 4+ messages in thread
From: Gregory Etelson @ 2022-07-03  8:02 UTC (permalink / raw)
  To: dev; +Cc: getelson, matan, rasland, stable, Viacheslav Ovsiienko

Negative integrity item refers to the condition where the item value mask
is set but the value spec is cleared:
    ... integrity value mask l4_ok value spec 0 ...
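
For reference, the same negative condition expressed through the rte_flow
C API might look as below. This is an illustrative sketch only, assuming
<rte_flow.h> is included; the variable names are placeholders:

    /* A "negative" integrity item: the mask requests a match on l4_ok
     * while the spec leaves the bit cleared, i.e. "L4 is NOT ok".
     * This is exactly the configuration rejected by this patch.
     */
    struct rte_flow_item_integrity spec = { 0 };
    struct rte_flow_item_integrity mask = { 0 };

    mask.l4_ok = 1;    /* match on the l4_ok accumulator bit */
    /* spec.l4_ok stays 0 => negative form */

    struct rte_flow_item item = {
        .type = RTE_FLOW_ITEM_TYPE_INTEGRITY,
        .spec = &spec,
        .mask = &mask,
    };
    /* Setting spec.l4_ok = 1 instead would turn this into the supported
     * positive match.
     */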

The RTE library defines integrity bits `l3_ok` and `l4_ok` as accumulators
for all hardware L3 and L4 integrity verifications respectively.
Hardware `l3_ok` and `l4_ok` integrity bits refer to the L3 and L4
network headers only.
Hence, integrity bits `l3_ok` and `l4_ok` are not compatible between the
RTE library and the hardware.

PMD translations for RTE `l3_ok` are:
 IPv4: `l3_ok` and `ipv4_csum_ok`
 IPv6: `l3_ok`
RTE `l4_ok` is translated into the PMD `l4_ok` and `l4_csum_ok` bits.
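
For reference, a supported positive RTE `l4_ok` match over IPv4/UDP built
with the rte_flow C API might look roughly as below (a sketch only,
assuming <rte_flow.h> is included; attributes and actions are omitted):

    /* Positive l4_ok match: on mlx5 the PMD translates the single RTE
     * l4_ok bit into its l4_ok AND l4_csum_ok hardware bits. The UDP
     * item is present because the PMD requires the referenced L4 header
     * in the same pattern.
     */
    struct rte_flow_item_integrity ispec = { 0 }, imask = { 0 };

    ispec.l4_ok = 1;
    imask.l4_ok = 1;

    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_UDP },
        { .type = RTE_FLOW_ITEM_TYPE_INTEGRITY,
          .spec = &ispec, .mask = &imask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };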

A positive IPv4 `l3_ok` flow item configuration is translated into
a single matcher that ANDs the corresponding hardware bits.
A negative IPv4 `l3_ok` is translated into 2 hardware conditions, where
each condition probes a single integrity bit:
  RTE::l3_ok is 0 => MLX5::l3_ok is 0 OR MLX5::ipv4_csum_ok is 0
MLX5 hardware cannot express an OR condition within a single flow rule item,
so a negative IPv4 `l3_ok` must be translated into 2 flow rules.
Similarly, a negative RTE `l4_ok` condition is also translated into 2
hardware rules.
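
Because the PMD now rejects such items, an application may pre-check its
integrity item before calling rte_flow_create(). A minimal sketch that
mirrors the validation added below; the helper name is a placeholder:

    #include <stdbool.h>
    #include <rte_flow.h>

    /* Return true when any integrity bit is masked for matching but left
     * cleared in the spec, i.e. a negative integrity match that this PMD
     * does not support.
     */
    static bool
    integrity_item_is_negative(const struct rte_flow_item_integrity *spec,
                               const struct rte_flow_item_integrity *mask)
    {
        return (mask->l3_ok && !spec->l3_ok) ||
               (mask->ipv4_csum_ok && !spec->ipv4_csum_ok) ||
               (mask->l4_ok && !spec->l4_ok) ||
               (mask->l4_csum_ok && !spec->l4_csum_ok);
    }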

The current PMD roadmap does not allow an implicit flow rule split.

TODO: extend the RTE integrity bits definition to allow matching on each
hardware integrity bit for accumulated integrity matches.

Bugzilla ID: 948

cc: stable@dpdk.com

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
 doc/guides/nics/mlx5.rst        | 5 +++--
 drivers/net/mlx5/mlx5_flow_dv.c | 6 ++++++
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 9f2832e284..99734157d0 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -479,14 +479,15 @@ Limitations
   - Integrity offload is enabled starting from **ConnectX-6 Dx**.
   - Verification bits provided by the hardware are ``l3_ok``, ``ipv4_csum_ok``, ``l4_ok``, ``l4_csum_ok``.
   - ``level`` value 0 references outer headers.
+  - Negative integrity item verification is not supported
   - Multiple integrity items not supported in a single flow rule.
   - Flow rule items supplied by application must explicitly specify network headers referred by integrity item.
     For example, if integrity item mask sets ``l4_ok`` or ``l4_csum_ok`` bits, reference to L4 network header,
     TCP or UDP, must be in the rule pattern as well::
 
       flow create 0 ingress pattern integrity level is 0 value mask l3_ok value spec l3_ok / eth / ipv6 / end …
-      or
-      flow create 0 ingress pattern integrity level is 0 value mask l4_ok value spec 0 / eth / ipv4 proto is udp / end …
+
+      flow create 0 ingress pattern integrity level is 0 value mask l4_ok value spec l4_ok / eth / ipv4 proto is udp / end …
 
 - Connection tracking:
 
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 09349a021b..bee9363515 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -6779,6 +6779,12 @@ flow_dv_validate_item_integrity(struct rte_eth_dev *dev,
 					  RTE_FLOW_ERROR_TYPE_ITEM,
 					  integrity_item,
 					  "unsupported integrity filter");
+	if ((mask->l3_ok & !spec->l3_ok) || (mask->l4_ok & !spec->l4_ok) ||
+		(mask->ipv4_csum_ok & !spec->ipv4_csum_ok) ||
+		(mask->l4_csum_ok & !spec->l4_csum_ok))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  NULL, "negative integrity flow is not supported");
 	if (spec->level > 1) {
 		if (pattern_flags & MLX5_FLOW_ITEM_INNER_INTEGRITY)
 			return rte_flow_error_set
-- 
2.34.1



* [PATCH v2] net/mlx5: reject negative integrity item configuration
  2022-07-03  8:02 [PATCH] net/mlx5: reject negative integrity item configuration Gregory Etelson
@ 2022-07-03  8:08 ` Gregory Etelson
  2022-07-04 10:11 ` [PATCH v4] " Gregory Etelson
  1 sibling, 0 replies; 4+ messages in thread
From: Gregory Etelson @ 2022-07-03  8:08 UTC (permalink / raw)
  To: dev; +Cc: getelson, matan, rasland, stable, Viacheslav Ovsiienko

Negative integrity item refers to the condition where the item value mask
is set but the value spec is cleared:
    ... integrity value mask l4_ok value spec 0 ...

The RTE library defines integrity bits `l3_ok` and `l4_ok` as accumulators
for all hardware L3 and L4 integrity verifications respectively.
Hardware `l3_ok` and `l4_ok` integrity bits refer to the L3 and L4
network headers only.
Hence, integrity bits `l3_ok` and `l4_ok` are not compatible between the
RTE library and the hardware.

PMD translations for RTE `l3_ok` are:
 IPv4: `l3_ok` and `ipv4_csum_ok`
 IPv6: `l3_ok`
RTE `l4_ok` is translated into the PMD `l4_ok` and `l4_csum_ok` bits.

A positive IPv4 `l3_ok` flow item configuration is translated into
a single matcher that ANDs the corresponding hardware bits.
A negative IPv4 `l3_ok` is translated into 2 hardware conditions, where
each condition probes a single integrity bit:
  RTE::l3_ok is 0 => MLX5::l3_ok is 0 OR MLX5::ipv4_csum_ok is 0
MLX5 hardware cannot express an OR condition within a single flow rule item,
so a negative IPv4 `l3_ok` must be translated into 2 flow rules.
Similarly, a negative RTE `l4_ok` condition is also translated into 2
hardware rules.

The current PMD roadmap does not allow an implicit flow rule split.

TODO: extend the RTE integrity bits definition to allow matching on each
hardware integrity bit for accumulated integrity matches.

Bugzilla ID: 948

cc: stable@dpdk.org

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
v2: fix typo in cc address 
---
 doc/guides/nics/mlx5.rst        | 5 +++--
 drivers/net/mlx5/mlx5_flow_dv.c | 6 ++++++
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 9f2832e284..99734157d0 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -479,14 +479,15 @@ Limitations
   - Integrity offload is enabled starting from **ConnectX-6 Dx**.
   - Verification bits provided by the hardware are ``l3_ok``, ``ipv4_csum_ok``, ``l4_ok``, ``l4_csum_ok``.
   - ``level`` value 0 references outer headers.
+  - Negative integrity item verification is not supported
   - Multiple integrity items not supported in a single flow rule.
   - Flow rule items supplied by application must explicitly specify network headers referred by integrity item.
     For example, if integrity item mask sets ``l4_ok`` or ``l4_csum_ok`` bits, reference to L4 network header,
     TCP or UDP, must be in the rule pattern as well::
 
       flow create 0 ingress pattern integrity level is 0 value mask l3_ok value spec l3_ok / eth / ipv6 / end …
-      or
-      flow create 0 ingress pattern integrity level is 0 value mask l4_ok value spec 0 / eth / ipv4 proto is udp / end …
+
+      flow create 0 ingress pattern integrity level is 0 value mask l4_ok value spec l4_ok / eth / ipv4 proto is udp / end …
 
 - Connection tracking:
 
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 09349a021b..bee9363515 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -6779,6 +6779,12 @@ flow_dv_validate_item_integrity(struct rte_eth_dev *dev,
 					  RTE_FLOW_ERROR_TYPE_ITEM,
 					  integrity_item,
 					  "unsupported integrity filter");
+	if ((mask->l3_ok & !spec->l3_ok) || (mask->l4_ok & !spec->l4_ok) ||
+		(mask->ipv4_csum_ok & !spec->ipv4_csum_ok) ||
+		(mask->l4_csum_ok & !spec->l4_csum_ok))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  NULL, "negative integrity flow is not supported");
 	if (spec->level > 1) {
 		if (pattern_flags & MLX5_FLOW_ITEM_INNER_INTEGRITY)
 			return rte_flow_error_set
-- 
2.34.1



* [PATCH v4] net/mlx5: reject negative integrity item configuration
  2022-07-03  8:02 [PATCH] net/mlx5: reject negative integrity item configuration Gregory Etelson
  2022-07-03  8:08 ` [PATCH v2] " Gregory Etelson
@ 2022-07-04 10:11 ` Gregory Etelson
  2022-07-04 16:23   ` Raslan Darawsheh
  1 sibling, 1 reply; 4+ messages in thread
From: Gregory Etelson @ 2022-07-04 10:11 UTC (permalink / raw)
  To: dev; +Cc: getelson, matan, rasland, Raja Zidane, stable, Viacheslav Ovsiienko

From: Raja Zidane <rzidane@nvidia.com>

Negative integrity item refers to the condition where the item value mask
is set but the value spec is cleared:
    ... integrity value mask l4_ok value spec 0 ...

The RTE library defines integrity bits `l3_ok` and `l4_ok` as accumulators
for all hardware L3 and L4 integrity verifications respectively.
Hardware `l3_ok` and `l4_ok` integrity bits refer to the L3 and L4
network headers only.
Hence, integrity bits `l3_ok` and `l4_ok` are not compatible between the
RTE library and the hardware.

PMD translations for RTE `l3_ok` are:
 IPv4: `l3_ok` and `ipv4_csum_ok`
 IPv6: `l3_ok`
RTE `l4_ok` is translated into the PMD `l4_ok` and `l4_csum_ok` bits.

A positive IPv4 `l3_ok` flow item configuration is translated into
a single matcher that ANDs the corresponding hardware bits.
A negative IPv4 `l3_ok` is translated into 2 hardware conditions, where
each condition probes a single integrity bit:
  RTE::l3_ok is 0 => MLX5::l3_ok is 0 OR MLX5::ipv4_csum_ok is 0
MLX5 hardware cannot express an OR condition within a single flow rule item,
so a negative IPv4 `l3_ok` must be translated into 2 flow rules.
Similarly, a negative RTE `l4_ok` condition is also translated into 2
hardware rules.

The current PMD roadmap does not allow an implicit flow rule split.

TODO: extend the RTE integrity bits definition to allow matching on each
hardware integrity bit for accumulated integrity matches.

Bugzilla ID: 948

cc: stable@dpdk.org

Proposed-off-by: Raja Zidane <rzidane@nvidia.com>
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
v2: fix typo in cc address.
v3:
v4: fix author and version id.
---
 doc/guides/nics/mlx5.rst        | 5 +++--
 drivers/net/mlx5/mlx5_flow_dv.c | 6 ++++++
 2 files changed, 9 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 9f2832e284..99734157d0 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -479,14 +479,15 @@ Limitations
   - Integrity offload is enabled starting from **ConnectX-6 Dx**.
   - Verification bits provided by the hardware are ``l3_ok``, ``ipv4_csum_ok``, ``l4_ok``, ``l4_csum_ok``.
   - ``level`` value 0 references outer headers.
+  - Negative integrity item verification is not supported
   - Multiple integrity items not supported in a single flow rule.
   - Flow rule items supplied by application must explicitly specify network headers referred by integrity item.
     For example, if integrity item mask sets ``l4_ok`` or ``l4_csum_ok`` bits, reference to L4 network header,
     TCP or UDP, must be in the rule pattern as well::
 
       flow create 0 ingress pattern integrity level is 0 value mask l3_ok value spec l3_ok / eth / ipv6 / end …
-      or
-      flow create 0 ingress pattern integrity level is 0 value mask l4_ok value spec 0 / eth / ipv4 proto is udp / end …
+
+      flow create 0 ingress pattern integrity level is 0 value mask l4_ok value spec l4_ok / eth / ipv4 proto is udp / end …
 
 - Connection tracking:
 
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 09349a021b..bee9363515 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -6779,6 +6779,12 @@ flow_dv_validate_item_integrity(struct rte_eth_dev *dev,
 					  RTE_FLOW_ERROR_TYPE_ITEM,
 					  integrity_item,
 					  "unsupported integrity filter");
+	if ((mask->l3_ok & !spec->l3_ok) || (mask->l4_ok & !spec->l4_ok) ||
+		(mask->ipv4_csum_ok & !spec->ipv4_csum_ok) ||
+		(mask->l4_csum_ok & !spec->l4_csum_ok))
+		return rte_flow_error_set(error, EINVAL,
+					  RTE_FLOW_ERROR_TYPE_ITEM,
+					  NULL, "negative integrity flow is not supported");
 	if (spec->level > 1) {
 		if (pattern_flags & MLX5_FLOW_ITEM_INNER_INTEGRITY)
 			return rte_flow_error_set
-- 
2.34.1



* RE: [PATCH v4] net/mlx5: reject negative integrity item configuration
  2022-07-04 10:11 ` [PATCH v4] " Gregory Etelson
@ 2022-07-04 16:23   ` Raslan Darawsheh
  0 siblings, 0 replies; 4+ messages in thread
From: Raslan Darawsheh @ 2022-07-04 16:23 UTC (permalink / raw)
  To: Gregory Etelson, dev; +Cc: Matan Azrad, Raja Zidane, stable, Slava Ovsiienko

Hi,

> -----Original Message-----
> From: Gregory Etelson <getelson@nvidia.com>
> Sent: Monday, July 4, 2022 1:12 PM
> To: dev@dpdk.org
> Cc: Gregory Etelson <getelson@nvidia.com>; Matan Azrad
> <matan@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; Raja
> Zidane <rzidane@nvidia.com>; stable@dpdk.org; Slava Ovsiienko
> <viacheslavo@nvidia.com>
> Subject: [PATCH v4] net/mlx5: reject negative integrity item configuration
> 
> From: Raja Zidane <rzidane@nvidia.com>
> 
> Negative integrity item refers to the condition where the item value mask
> is set but the value spec is cleared:
>     ... integrity value mask l4_ok value spec 0 ...
> 
> The RTE library defines integrity bits `l3_ok` and `l4_ok` as accumulators
> for all hardware L3 and L4 integrity verifications respectively.
> Hardware `l3_ok` and `l4_ok` integrity bits refer to the L3 and L4
> network headers only.
> Hence, integrity bits `l3_ok` and `l4_ok` are not compatible between the
> RTE library and the hardware.
> 
> PMD translations for RTE `l3_ok` are:
>  IPv4: `l3_ok` and `ipv4_csum_ok`
>  IPv6: `l3_ok`
> RTE `l4_ok` is translated into the PMD `l4_ok` and `l4_csum_ok` bits.
> 
> A positive IPv4 `l3_ok` flow item configuration is translated into
> a single matcher that ANDs the corresponding hardware bits.
> A negative IPv4 `l3_ok` is translated into 2 hardware conditions, where
> each condition probes a single integrity bit:
>   RTE::l3_ok is 0 => MLX5::l3_ok is 0 OR MLX5::ipv4_csum_ok is 0
> MLX5 hardware cannot express an OR condition within a single flow rule item,
> so a negative IPv4 `l3_ok` must be translated into 2 flow rules.
> Similarly, a negative RTE `l4_ok` condition is also translated into 2
> hardware rules.
> 
> The current PMD roadmap does not allow an implicit flow rule split.
> 
> TODO: extend the RTE integrity bits definition to allow matching on each
> hardware integrity bit for accumulated integrity matches.
> 
> Bugzilla ID: 948
> 
> cc: stable@dpdk.org
> 
> Proposed-off-by: Raja Zidane <rzidane@nvidia.com>
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Acked-by: Matan Azrad <matan@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh


end of thread

Thread overview: 4+ messages
2022-07-03  8:02 [PATCH] net/mlx5: reject negative integrity item configuration Gregory Etelson
2022-07-03  8:08 ` [PATCH v2] " Gregory Etelson
2022-07-04 10:11 ` [PATCH v4] " Gregory Etelson
2022-07-04 16:23   ` Raslan Darawsheh
