DPDK patches and discussions
From: Gavin Li <gavinl@nvidia.com>
To: <dev@dpdk.org>, <dsosnowski@nvidia.com>, <viacheslavo@nvidia.com>,
	<orika@nvidia.com>, <suanmingm@nvidia.com>, <matan@nvidia.com>
Cc: <jiaweiw@nvidia.com>, <rasland@nvidia.com>
Subject: [V1 1/2] net/mlx5: discover IPv6 traffic class support in RDMA core
Date: Fri, 12 Jan 2024 09:50:54 +0200	[thread overview]
Message-ID: <20240112075055.1288263-2-gavinl@nvidia.com> (raw)
In-Reply-To: <20240112075055.1288263-1-gavinl@nvidia.com>

Previously, rdma-core and firmware modified the IPv6 traffic class through
the same field ids used for IPv4 DSCP and ECN. Newer firmware supports a
dedicated IPv6 traffic class field id, which is the recommended one to use,
although the old ids still work.

Firmware exposes a new capability bit to advertise support for the new id,
but rdma-core has no such mechanism.

To keep backward compatibility across combinations of rdma-core and
firmware of different versions, introduce a new function and a new flag
that check whether rdma-core supports the new IPv6 traffic class field id.
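
For illustration only, the same trial-rule idea can be expressed against
the public rte_flow API. This standalone sketch validates an egress rule
carrying a SET_IPV6_DSCP action; the port_id parameter and the helper name
are placeholders, not part of this patch (the driver itself probes
internally via flow_list_create()):

#include <errno.h>
#include <stdint.h>
#include <rte_flow.h>

/* Probe whether the PMD accepts an IPv6 DSCP modification on egress. */
static int
probe_ipv6_tc(uint16_t port_id)
{
	struct rte_flow_action_set_dscp set_dscp = { .dscp = 9 };
	struct rte_flow_attr attr = { .group = 1, .egress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV6 },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_SET_IPV6_DSCP,
		  .conf = &set_dscp },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	/* A validation failure means the rule cannot be offloaded. */
	if (rte_flow_validate(port_id, &attr, pattern, actions, &error))
		return -EOPNOTSUPP;
	return 0;
}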

Signed-off-by: Gavin Li <gavinl@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
---
 drivers/net/mlx5/linux/mlx5_os.c |  4 +++
 drivers/net/mlx5/mlx5.h          |  1 +
 drivers/net/mlx5/mlx5_flow.c     | 42 ++++++++++++++++++++++++++++++++
 drivers/net/mlx5/mlx5_flow.h     |  1 +
 4 files changed, 48 insertions(+)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index ae82e1e5d8..5ae31c88f4 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1602,6 +1602,10 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
 			goto error;
 	}
 	rte_rwlock_init(&priv->ind_tbls_lock);
+	if (sh->config.dv_flow_en == 1 &&
+	    !priv->sh->ipv6_tc_fallback &&
+	    mlx5_flow_discover_ipv6_tc_support(eth_dev))
+		priv->sh->ipv6_tc_fallback = 1;
 	if (priv->sh->config.dv_flow_en == 2) {
 #ifdef HAVE_MLX5_HWS_SUPPORT
 		if (priv->sh->config.dv_esw_en) {
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 263ebead7f..779805bcd8 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1444,6 +1444,7 @@ struct mlx5_dev_ctx_shared {
 	uint32_t lag_rx_port_affinity_en:1;
 	/* lag_rx_port_affinity is supported. */
 	uint32_t hws_max_log_bulk_sz:5;
 	/* Log of minimal HWS counters created hard coded. */
+	uint32_t ipv6_tc_fallback:1; /* Use legacy ids for IPv6 TC. */
 	uint32_t hws_max_nb_counters; /* Maximal number for HWS counters. */
 	uint32_t max_port; /* Maximal IB device port index. */
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 85e8c77c81..90b72b7b0a 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -12476,3 +12476,45 @@ mlx5_flow_pick_transfer_proxy(struct rte_eth_dev *dev,
 				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
 				  NULL, "unable to find a proxy port");
 }
+
+/**
+ * Discover IPv6 traffic class field id support in RDMA core and firmware.
+ *
+ * @param dev
+ *   Ethernet device.
+ *
+ * @return
+ *   0, RDMA core supports the new IPv6 traffic class field id.
+ *   -EOPNOTSUPP, RDMA core does not support the new IPv6 traffic class id.
+ */
+int
+mlx5_flow_discover_ipv6_tc_support(struct rte_eth_dev *dev)
+{
+	struct rte_flow_action_set_dscp set_dscp;
+	struct rte_flow_attr attr;
+	struct rte_flow_action actions[2];
+	struct rte_flow_item items[3];
+	struct rte_flow_error error;
+	uint32_t flow_idx;
+
+	memset(&attr, 0, sizeof(attr));
+	memset(actions, 0, sizeof(actions));
+	memset(items, 0, sizeof(items));
+	attr.group = 1;
+	attr.egress = 1;
+	items[0].type = RTE_FLOW_ITEM_TYPE_ETH;
+	items[1].type = RTE_FLOW_ITEM_TYPE_IPV6;
+	items[2].type = RTE_FLOW_ITEM_TYPE_END;
+	/* Arbitrary DSCP value; the rule exists only for the probe. */
+	set_dscp.dscp = 9;
+	actions[0].type = RTE_FLOW_ACTION_TYPE_SET_IPV6_DSCP;
+	actions[0].conf = &set_dscp;
+	actions[1].type = RTE_FLOW_ACTION_TYPE_END;
+
+	flow_idx = flow_list_create(dev, MLX5_FLOW_TYPE_GEN, &attr, items, actions, true, &error);
+	if (!flow_idx)
+		return -EOPNOTSUPP;
+
+	flow_list_destroy(dev, MLX5_FLOW_TYPE_GEN, flow_idx);
+	return 0;
+}
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 120609c595..33d4a28077 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -2638,6 +2638,7 @@ void mlx5_flow_destroy_sub_policy_with_rxq(struct rte_eth_dev *dev,
 		struct mlx5_flow_meter_policy *mtr_policy);
 int mlx5_flow_dv_discover_counter_offset_support(struct rte_eth_dev *dev);
 int mlx5_flow_discover_dr_action_support(struct rte_eth_dev *dev);
+int mlx5_flow_discover_ipv6_tc_support(struct rte_eth_dev *dev);
 int mlx5_action_handle_attach(struct rte_eth_dev *dev);
 int mlx5_action_handle_detach(struct rte_eth_dev *dev);
 int mlx5_action_handle_flush(struct rte_eth_dev *dev);
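
As a hedged illustration of how the new flag is meant to be consumed
(patch 2/2 performs the actual switch; MLX5_MODI_OUT_IPV6_TRAFFIC_CLASS
below is an assumed name used only for this sketch):

/* Illustrative sketch: pick the PRM modify-header field id for an IPv6
 * traffic class modification. MLX5_MODI_OUT_IPV6_TRAFFIC_CLASS is an
 * assumption for illustration; MLX5_MODI_OUT_IP_DSCP is the legacy id
 * shared with IPv4 DSCP. */
static inline int
mlx5_ipv6_tc_modi_id(const struct mlx5_dev_ctx_shared *sh)
{
	return sh->ipv6_tc_fallback ? MLX5_MODI_OUT_IP_DSCP :
				      MLX5_MODI_OUT_IPV6_TRAFFIC_CLASS;
}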
-- 
2.39.1



Thread overview: 4+ messages
2024-01-12  7:50 [V1 0/2] use traffic class PRM field for IPv6 modification Gavin Li
2024-01-12  7:50 ` Gavin Li [this message]
2024-01-12  7:50 ` [V1 2/2] net/mlx5: " Gavin Li
2024-01-17 13:41 ` [V1 0/2] " Raslan Darawsheh
