From: Bing Zhao <bingz@nvidia.com>
To: <viacheslavo@nvidia.com>, <dev@dpdk.org>, <rasland@nvidia.com>
Cc: <orika@nvidia.com>, <dsosnowski@nvidia.com>,
	<suanmingm@nvidia.com>, <matan@nvidia.com>,
	<michaelba@nvidia.com>, <stable@dpdk.org>
Subject: [PATCH] net/mlx5: fix age position in hairpin split
Date: Thu, 7 Mar 2024 10:09:24 +0200
Message-ID: <20240307080924.310710-1-bingz@nvidia.com>

When splitting a hairpin rule implicitly, the count action is moved
to either the Tx or the Rx subflow, depending on whether an encap
action is present.
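
For reference, a rule exercising this path looks roughly like the
sketch below (hypothetical application code, assuming the usual
rte_flow headers; the queue index, timeout and pattern are
illustrative placeholders, not taken from this patch):

	/* Ingress hairpin rule carrying both count and age actions. */
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_age age = { .timeout = 10 };
	/* 'hairpin_rxq' is assumed to be a queue set up in hairpin mode. */
	struct rte_flow_action_queue queue = { .index = hairpin_rxq };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_COUNT },
		{ .type = RTE_FLOW_ACTION_TYPE_AGE, .conf = &age },
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;
	struct rte_flow *flow = rte_flow_create(port_id, &attr, pattern,
						actions, &error);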

When a flow rule contains both count and age actions, a single
counter is reused to serve both. If only the age action is present
and ASO flow hit is supported, a flow hit object is chosen instead
of a counter.
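
In simplified form, the selection works like this (an illustrative
sketch, not the literal driver code; the predicate and helper names
are made up):

	if (has_count_action) {
		/* One counter serves both the count and age actions. */
		use_counter_based_age();
	} else if (aso_flow_hit_supported) {
		/* A dedicated ASO flow hit object tracks aging. */
		use_aso_flow_hit();
	} else {
		/* Fall back to a counter even for age-only rules. */
		use_counter_based_age();
	}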

In the previous flow splitting, the age action always went to the Rx
part, while the count action went to the Tx part when an encap action
was present.

Before this commit, two issues could be observed with a hairpin split:
  1. On the root table, the same counter was used on both the Rx and
     Tx parts for the age and count actions, so each ingress packet
     was counted twice (see the query sketch after this list).
  2. On the non-root table, an extra ASO flow hit object was used on
     the Rx part, causing unnecessary overhead.
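
The double counting of issue 1 would be visible to an application
querying the counter, e.g. (hypothetical snippet, assuming 'flow' is
the rule created above and the usual stdio/inttypes headers):

	struct rte_flow_action count_action = {
		.type = RTE_FLOW_ACTION_TYPE_COUNT,
	};
	struct rte_flow_query_count qc = { .reset = 0 };
	struct rte_flow_error error;
	/* Before the fix, 'qc.hits' could advance twice per packet. */
	if (rte_flow_query(port_id, flow, &count_action, &qc, &error) == 0)
		printf("hits=%" PRIu64 ", bytes=%" PRIu64 "\n",
		       qc.hits, qc.bytes);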

The age and count actions should reside in the same subflow rather
than being split across two.

Fixes: daed4b6e3db2 ("net/mlx5: use aging by counter when counter exists")
Cc: michaelba@nvidia.com
Cc: stable@dpdk.org

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.c    | 1 +
 drivers/net/mlx5/mlx5_flow_dv.c | 3 +--
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 6484874c35..f31fdfbf3d 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -5399,6 +5399,7 @@ flow_hairpin_split(struct rte_eth_dev *dev,
 			}
 			break;
 		case RTE_FLOW_ACTION_TYPE_COUNT:
+		case RTE_FLOW_ACTION_TYPE_AGE:
 			if (encap) {
 				rte_memcpy(actions_tx, actions,
 					   sizeof(struct rte_flow_action));
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 18f09b22be..df23c8eb6c 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -19363,8 +19363,7 @@ flow_dv_get_aged_flows(struct rte_eth_dev *dev,
 	LIST_FOREACH(act, &age_info->aged_aso, next) {
 		nb_flows++;
 		if (nb_contexts) {
-			context[nb_flows - 1] =
-						act->age_params.context;
+			context[nb_flows - 1] = act->age_params.context;
 			if (!(--nb_contexts))
 				break;
 		}
-- 
2.34.1


