DPDK patches and discussions
From: Bing Zhao <bingz@nvidia.com>
To: viacheslavo@mellanox.com, matan@mellanox.com
Cc: dev@dpdk.org, orika@nvidia.com, rasland@nvidia.com
Subject: [dpdk-dev] [PATCH v2 6/6] net/mlx5: not split hairpin flow in explicit mode
Date: Thu, 22 Oct 2020 22:06:37 +0800	[thread overview]
Message-ID: <1603375597-430528-7-git-send-email-bingz@nvidia.com> (raw)
In-Reply-To: <1603375597-430528-1-git-send-email-bingz@nvidia.com>

In the current implementation, a hairpin flow is implicitly split into
two flows when it contains an action that belongs only to the TX part,
and the TX device flow is inserted by the mlx5 PMD itself.

For hairpin between two ports, the explicit TX flow mode will be the
only mode supported, since it is not appropriate to insert a TX flow
into another device implicitly. The application can create whatever
flows it likes and keeps full control of its flows. Hairpin flows are
then no different from standard flows, and the application decides how
to chain the RX and TX flows together.
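
As an illustration, a minimal sketch of how an application might set up
the hairpin queues for this explicit TX mode between two ports is shown
below. The port IDs, queue index and descriptor count are placeholders
for illustration only, not part of this patch.

#include <rte_ethdev.h>

/* Hypothetical port/queue numbers for illustration only. */
#define RX_PORT   0
#define TX_PORT   1
#define HAIRPIN_Q 1
#define NB_DESC   512

static int
setup_two_port_hairpin(void)
{
	/*
	 * tx_explicit: the application inserts the TX flows itself, so
	 * the PMD will not split the hairpin flow implicitly.
	 */
	struct rte_eth_hairpin_conf rxq_conf = {
		.peer_count = 1,
		.tx_explicit = 1,
		.manual_bind = 1,
		.peers[0] = { .port = TX_PORT, .queue = HAIRPIN_Q },
	};
	struct rte_eth_hairpin_conf txq_conf = {
		.peer_count = 1,
		.tx_explicit = 1,
		.manual_bind = 1,
		.peers[0] = { .port = RX_PORT, .queue = HAIRPIN_Q },
	};
	int ret;

	/* Hairpin RX queue on the receiving port, peered with the
	 * hairpin TX queue on the other port.
	 */
	ret = rte_eth_rx_hairpin_queue_setup(RX_PORT, HAIRPIN_Q, NB_DESC,
					     &rxq_conf);
	if (ret < 0)
		return ret;
	/* Hairpin TX queue on the sending port. */
	ret = rte_eth_tx_hairpin_queue_setup(TX_PORT, HAIRPIN_Q, NB_DESC,
					     &txq_conf);
	if (ret < 0)
		return ret;
	/*
	 * With manual_bind set, the two queues are bound after both
	 * ports are started, e.g. with rte_eth_hairpin_bind().
	 */
	return 0;
}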

Even for single-port hairpin, this explicit TX flow mode can also be
supported.

When checking whether the hairpin flow needs to be split, return
immediately if the hairpin queue has the "tx_explicit" attribute set.
The subsequent validation and translation steps then follow the same
code path as for standard flows.
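
For reference, a hedged sketch of the RX-side flow an application could
create in this mode follows. The pattern, port and queue numbers are
placeholders; the matching TX-side egress flow on the peer port is left
to the application, since the PMD no longer adds it implicitly.

#include <rte_flow.h>

/* Hypothetical identifiers, matching the setup sketch above. */
#define RX_PORT   0
#define HAIRPIN_Q 1

static struct rte_flow *
create_hairpin_rx_flow(void)
{
	/*
	 * Ingress flow on the RX port only. With tx_explicit set on the
	 * hairpin queue, the PMD does not create the TX part implicitly.
	 */
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = HAIRPIN_Q };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	/*
	 * The egress flow on the peer TX port (e.g. encap or metadata
	 * match) is created separately by the application.
	 */
	return rte_flow_create(RX_PORT, &attr, pattern, actions, &error);
}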

Signed-off-by: Bing Zhao <bingz@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index d7243a8..8a114a6 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -3261,6 +3261,7 @@ struct mlx5_flow_tunnel_info {
 	const struct rte_flow_action_queue *queue;
 	const struct rte_flow_action_rss *rss;
 	const struct rte_flow_action_raw_encap *raw_encap;
+	const struct rte_eth_hairpin_conf *conf;
 
 	if (!attr->ingress)
 		return 0;
@@ -3273,6 +3274,9 @@ struct mlx5_flow_tunnel_info {
 			if (mlx5_rxq_get_type(dev, queue->index) !=
 			    MLX5_RXQ_TYPE_HAIRPIN)
 				return 0;
+			conf = mlx5_rxq_get_hairpin_conf(dev, queue->index);
+			if (!!conf->tx_explicit)
+				return 0;
 			queue_action = 1;
 			action_n++;
 			break;
@@ -3283,6 +3287,9 @@ struct mlx5_flow_tunnel_info {
 			if (mlx5_rxq_get_type(dev, rss->queue[0]) !=
 			    MLX5_RXQ_TYPE_HAIRPIN)
 				return 0;
+			conf = mlx5_rxq_get_hairpin_conf(dev, rss->queue[0]);
+			if (conf != NULL && !!conf->tx_explicit)
+				return 0;
 			queue_action = 1;
 			action_n++;
 			break;
-- 
1.8.3.1



Thread overview: 28+ messages
2020-10-08 14:16 [dpdk-dev] [PATCH 0/4] add two ports hairpin mode support in mlx5 PMD Bing Zhao
2020-10-08 14:16 ` [dpdk-dev] [PATCH 1/4] net/mlx5: remove hairpin queue peer port checking Bing Zhao
2020-10-08 14:16 ` [dpdk-dev] [PATCH 2/4] net/mlx5: add support for two ports hairpin mode Bing Zhao
2020-10-08 14:16 ` [dpdk-dev] [PATCH 3/4] net/mlx5: conditional hairpin auto bind Bing Zhao
2020-10-08 14:17 ` [dpdk-dev] [PATCH 4/4] doc: update hairpin support for mlx5 driver Bing Zhao
2020-10-22 14:06 ` [dpdk-dev] [PATCH v2 0/6] add two ports hairpin mode support in mlx5 PMD Bing Zhao
2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 1/6] net/mlx5: change hairpin queue peer checking Bing Zhao
2020-10-26  9:28     ` Slava Ovsiienko
2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 2/6] net/mlx5: add support for two ports hairpin mode Bing Zhao
2020-10-26  9:29     ` Slava Ovsiienko
2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 3/6] net/mlx5: add support to get hairpin peer ports Bing Zhao
2020-10-26  9:29     ` Slava Ovsiienko
2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 4/6] net/mlx5: conditional hairpin auto bind Bing Zhao
2020-10-26  9:29     ` Slava Ovsiienko
2020-10-22 14:06   ` [dpdk-dev] [PATCH v2 5/6] net/mlx5: change hairpin ingress flow validation Bing Zhao
2020-10-26  9:30     ` Slava Ovsiienko
2020-10-22 14:06   ` Bing Zhao [this message]
2020-10-26  9:30     ` [dpdk-dev] [PATCH v2 6/6] net/mlx5: not split hairpin flow in explicit mode Slava Ovsiienko
2020-10-26 16:37 ` [dpdk-dev] [PATCH v3 0/7] add two ports hairpin mode support in mlx5 PMD Bing Zhao
2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 1/7] net/mlx5: change hairpin queue peer checking Bing Zhao
2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 2/7] net/mlx5: add support for two ports hairpin mode Bing Zhao
2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 3/7] net/mlx5: add support to get hairpin peer ports Bing Zhao
2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 4/7] net/mlx5: conditional hairpin auto bind Bing Zhao
2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 5/7] net/mlx5: change hairpin ingress flow validation Bing Zhao
2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 6/7] net/mlx5: not split hairpin flow in explicit mode Bing Zhao
2020-10-26 16:37   ` [dpdk-dev] [PATCH v3 7/7] doc: update mlx5 hairpin support and limitations Bing Zhao
2020-10-26 16:44     ` Slava Ovsiienko
2020-10-26 22:42   ` [dpdk-dev] [PATCH v3 0/7] add two ports hairpin mode support in mlx5 PMD Raslan Darawsheh
