From: Gregory Etelson <getelson@nvidia.com>
To: <dev@dpdk.org>
Cc: <getelson@nvidia.com>, <matan@nvidia.com>, <rasland@nvidia.com>,
	<stable@dpdk.org>, Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Subject: [PATCH] net/mlx5: fix flex item flow handle size
Date: Mon, 28 Feb 2022 12:01:24 +0200
Message-ID: <20220228100124.7514-1-getelson@nvidia.com>

Reduce the flex item reference field in the flow handle from 32 bits to
8 bits for each flow.
This saves memory in setups with millions of flows.
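
For context, below is a minimal self-contained sketch of the acquire/release
bitmask pattern this patch narrows to 8 bits. It is not taken from the mlx5
sources: the MAX_FLEX_INDEX bound and the helper names are illustrative
assumptions. The driver only supports a small number of flex parsers per
port, which is why one byte of bitmask per flow is sufficient.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <strings.h>	/* ffs() */

/* Illustrative bound, not the driver's constant: flex item indices are
 * assumed to fit in one byte of bitmask. */
#define MAX_FLEX_INDEX 8

/* Mark flex item 'index' as referenced by a flow handle. */
static inline void
flex_ref_set(uint8_t *flex_item, int index)
{
	assert(index >= 0 && index < MAX_FLEX_INDEX);
	*flex_item |= (uint8_t)(1u << index);
}

/* Release every referenced flex item, lowest index first, mirroring the
 * loop shape in flow_dv_destroy(). */
static inline void
flex_ref_clear_all(uint8_t *flex_item)
{
	while (*flex_item) {
		int index = ffs(*flex_item) - 1; /* lowest set bit */

		/* mlx5_flex_release_index(dev, index) would be called here. */
		*flex_item &= (uint8_t)~(1u << index);
	}
}

int
main(void)
{
	uint8_t flex_item = 0;

	flex_ref_set(&flex_item, 2);
	flex_ref_set(&flex_item, 0);
	printf("bitmask before release: 0x%02x\n", (unsigned)flex_item);
	flex_ref_clear_all(&flex_item);
	printf("bitmask after release: 0x%02x\n", (unsigned)flex_item);
	return 0;
}

Since struct mlx5_flow_handle is __rte_packed, shrinking the field from
uint32_t to uint8_t trims 3 bytes per flow handle, on the order of 3 MB per
million flows before any allocator rounding.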

Cc: stable@dpdk.org

Fixes: a23e9b6e3ee9 ("net/mlx5: handle flex item in flows")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.h    | 2 +-
 drivers/net/mlx5/mlx5_flow_dv.c | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index e510921a3f..484ce5791e 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -700,7 +700,6 @@ struct mlx5_flow_handle {
 	uint32_t split_flow_id:27; /**< Sub flow unique match flow id. */
 	uint32_t is_meter_flow_id:1; /**< Indicate if flow_id is for meter. */
 	uint32_t fate_action:3; /**< Fate action type. */
-	uint32_t flex_item; /**< referenced Flex Item bitmask. */
 	union {
 		uint32_t rix_hrxq; /**< Hash Rx queue object index. */
 		uint32_t rix_jump; /**< Index to the jump action resource. */
@@ -716,6 +715,7 @@ struct mlx5_flow_handle {
 #if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
 	struct mlx5_flow_handle_dv dvh;
 #endif
+	uint8_t flex_item; /**< referenced Flex Item bitmask. */
 } __rte_packed;
 
 /*
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 2191ce6e58..fc4bcef6fb 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -10276,7 +10276,7 @@ flow_dv_translate_item_flex(struct rte_eth_dev *dev, void *matcher, void *key,
 		/* Don't count both inner and outer flex items in one rule. */
 		if (mlx5_flex_acquire_index(dev, spec->handle, true) != index)
 			MLX5_ASSERT(false);
-		dev_flow->handle->flex_item |= RTE_BIT32(index);
+		dev_flow->handle->flex_item |= (uint8_t)RTE_BIT32(index);
 	}
 	mlx5_flex_flow_translate_item(dev, matcher, key, item, is_inner);
 }
@@ -14627,7 +14627,7 @@ flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow)
 			int index = rte_bsf32(dev_handle->flex_item);
 
 			mlx5_flex_release_index(dev, index);
-			dev_handle->flex_item &= ~RTE_BIT32(index);
+			dev_handle->flex_item &= ~(uint8_t)RTE_BIT32(index);
 		}
 		if (dev_handle->dvh.matcher)
 			flow_dv_matcher_release(dev, dev_handle);
-- 
2.34.1

