From: Gregory Etelson <getelson@nvidia.com>
To: <dev@dpdk.org>
Cc: <getelson@nvidia.com>, <matan@nvidia.com>, <rasland@nvidia.com>,
	Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
	Shahaf Shuler <shahafs@nvidia.com>
Subject: [dpdk-dev] [PATCH 2/3] net/mlx5: add flow rule match for IPv4 IHL field
Date: Wed, 30 Jun 2021 10:04:51 +0300
Message-ID: <20210630070452.14055-3-getelson@nvidia.com>
In-Reply-To: <20210630070452.14055-1-getelson@nvidia.com>

Provide flow rules with the capability to match on the IPv4 IHL field.
The minimal HCA firmware version required to offload IPv4 IHL matching
is xx_30_2000.
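
For illustration only (not part of the patch): a minimal sketch of how
an application could request IHL matching through the generic rte_flow
API once this offload is available. The port id, the IHL value of 6,
and the drop action are assumptions chosen for the example.

#include <rte_flow.h>
#include <rte_ip.h>

/*
 * Example: match IPv4 packets whose IHL equals 6 (a 24-byte header,
 * i.e. one 4-byte option word) and drop them. Only the low 4 bits of
 * version_ihl are masked, so the version bits stay unmatched. Error
 * handling is elided for brevity.
 */
static struct rte_flow *
create_ipv4_ihl_rule(uint16_t port_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_ipv4 ipv4_spec = {
		.hdr.version_ihl = 6,	/* IHL value to match */
	};
	struct rte_flow_item_ipv4 ipv4_mask = {
		.hdr.version_ihl = RTE_IPV4_HDR_IHL_MASK, /* low 4 bits */
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ipv4_spec, .mask = &ipv4_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_DROP },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error err;

	return rte_flow_create(port_id, &attr, pattern, actions, &err);
}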

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_dv.c | 31 +++++++++++++++++++++++--------
 1 file changed, 23 insertions(+), 8 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index c5d4b01e57..155f686ad1 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -2451,19 +2451,19 @@ flow_dv_validate_item_gtp_psc(const struct rte_flow_item *item,
  *   0 on success, a negative errno value otherwise and rte_errno is set.
  */
 static int
-flow_dv_validate_item_ipv4(const struct rte_flow_item *item,
-			   uint64_t item_flags,
-			   uint64_t last_item,
-			   uint16_t ether_type,
-			   struct rte_flow_error *error)
+flow_dv_validate_item_ipv4(struct rte_eth_dev *dev,
+			   const struct rte_flow_item *item,
+			   uint64_t item_flags, uint64_t last_item,
+			   uint16_t ether_type, struct rte_flow_error *error)
 {
 	int ret;
+	struct mlx5_priv *priv = dev->data->dev_private;
 	const struct rte_flow_item_ipv4 *spec = item->spec;
 	const struct rte_flow_item_ipv4 *last = item->last;
 	const struct rte_flow_item_ipv4 *mask = item->mask;
 	rte_be16_t fragment_offset_spec = 0;
 	rte_be16_t fragment_offset_last = 0;
-	const struct rte_flow_item_ipv4 nic_ipv4_mask = {
+	struct rte_flow_item_ipv4 nic_ipv4_mask = {
 		.hdr = {
 			.src_addr = RTE_BE32(0xffffffff),
 			.dst_addr = RTE_BE32(0xffffffff),
@@ -2474,6 +2474,17 @@ flow_dv_validate_item_ipv4(const struct rte_flow_item *item,
 		},
 	};
 
+	if (mask && (mask->hdr.version_ihl & RTE_IPV4_HDR_IHL_MASK)) {
+		int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
+		bool ihl_cap = !tunnel ? priv->config.hca_attr.outer_ipv4_ihl :
+			       priv->config.hca_attr.inner_ipv4_ihl;
+		if (!ihl_cap)
+			return rte_flow_error_set(error, ENOTSUP,
+						  RTE_FLOW_ERROR_TYPE_ITEM,
+						  item,
+						  "IPV4 ihl offload not supported");
+		nic_ipv4_mask.hdr.version_ihl = mask->hdr.version_ihl;
+	}
 	ret = mlx5_flow_validate_item_ipv4(item, item_flags, last_item,
 					   ether_type, &nic_ipv4_mask,
 					   MLX5_ITEM_RANGE_ACCEPTED, error);
@@ -6771,7 +6782,7 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
 		case RTE_FLOW_ITEM_TYPE_IPV4:
 			mlx5_flow_tunnel_ip_check(items, next_protocol,
 						  &item_flags, &tunnel);
-			ret = flow_dv_validate_item_ipv4(items, item_flags,
+			ret = flow_dv_validate_item_ipv4(dev, items, item_flags,
 							 last_item, ether_type,
 							 error);
 			if (ret < 0)
@@ -8154,7 +8165,7 @@ flow_dv_translate_item_ipv4(void *matcher, void *key,
 	void *headers_v;
 	char *l24_m;
 	char *l24_v;
-	uint8_t tos;
+	uint8_t tos, ihl_m, ihl_v;
 
 	if (inner) {
 		headers_m = MLX5_ADDR_OF(fte_match_param, matcher,
@@ -8183,6 +8194,10 @@ flow_dv_translate_item_ipv4(void *matcher, void *key,
 	*(uint32_t *)l24_m = ipv4_m->hdr.src_addr;
 	*(uint32_t *)l24_v = ipv4_m->hdr.src_addr & ipv4_v->hdr.src_addr;
 	tos = ipv4_m->hdr.type_of_service & ipv4_v->hdr.type_of_service;
+	ihl_m = ipv4_m->hdr.version_ihl & RTE_IPV4_HDR_IHL_MASK;
+	ihl_v = ipv4_v->hdr.version_ihl & RTE_IPV4_HDR_IHL_MASK;
+	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ipv4_ihl, ihl_m);
+	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ipv4_ihl, ihl_m & ihl_v);
 	MLX5_SET(fte_match_set_lyr_2_4, headers_m, ip_ecn,
 		 ipv4_m->hdr.type_of_service);
 	MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_ecn, tos);
-- 
2.31.1

