DPDK patches and discussions
From: Alex Vesker <valex@nvidia.com>
To: <valex@nvidia.com>, <viacheslavo@nvidia.com>,
	<thomas@monjalon.net>, "Matan Azrad" <matan@nvidia.com>
Cc: <dev@dpdk.org>, <orika@nvidia.com>
Subject: [v1 07/16] net/mlx5/hws: add send FW range STE WQE
Date: Tue, 31 Jan 2023 11:33:36 +0200
Message-ID: <20230131093346.1261066-8-valex@nvidia.com>
In-Reply-To: <20230131093346.1261066-1-valex@nvidia.com>

FW WQE supports complex rules constructed from 2 STEs,
for example:
        Hash(DefinerA)
        SteMatch(DefinerB)
        SteRange(DefinerC)
        DefinerA is a subset of DefinerB

Such a complex rule is written using a single FW command which
carries a single WQE control, an STE match data0 and an STE range
data1. FW manages the STEs/ICM and the coherency between deletion
and creation. It is also possible to pass the definer value as part
of the STE, but this is not supported by current HW.
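
For illustration only, a minimal sketch (not part of this patch) of how
a caller could describe such a match + range STE pair through the fields
this patch adds to mlx5dr_send_ste_attr; the surrounding queue/matcher
setup, tag layouts and definer IDs are assumed and not shown:

	#include "mlx5dr_send.h"

	static void
	example_fill_range_ste_attr(struct mlx5dr_send_ste_attr *ste_attr,
				    struct mlx5dr_rule_match_tag *match_tag,
				    struct mlx5dr_rule_match_tag *range_tag,
				    uint8_t match_definer_id,
				    uint8_t range_definer_id)
	{
		/* Match STE (data0), e.g. DefinerB in the example above */
		ste_attr->wqe_tag = match_tag;
		ste_attr->send_attr.match_definer_id = match_definer_id;

		/* Range STE (data1), e.g. DefinerC, carried by the same FW WQE */
		ste_attr->range_wqe_tag = range_tag;
		ste_attr->send_attr.range_definer_id = range_definer_id;
	}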

Signed-off-by: Alex Vesker <valex@nvidia.com>
---
 drivers/net/mlx5/hws/mlx5dr_send.c | 19 +++++++++++++++++++
 drivers/net/mlx5/hws/mlx5dr_send.h |  3 +++
 2 files changed, 22 insertions(+)

diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c
index a9958df4f2..51aaf5c8e2 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.c
+++ b/drivers/net/mlx5/hws/mlx5dr_send.c
@@ -242,11 +242,15 @@ int mlx5dr_send_wqe_fw(struct ibv_context *ibv_ctx,
 		       struct mlx5dr_wqe_gta_ctrl_seg *send_wqe_ctrl,
 		       void *send_wqe_match_data,
 		       void *send_wqe_match_tag,
+		       void *send_wqe_range_data,
+		       void *send_wqe_range_tag,
 		       bool is_jumbo,
 		       uint8_t gta_opcode)
 {
+	bool has_range = send_wqe_range_data || send_wqe_range_tag;
 	bool has_match = send_wqe_match_data || send_wqe_match_tag;
 	struct mlx5dr_wqe_gta_data_seg_ste gta_wqe_data0 = {0};
+	struct mlx5dr_wqe_gta_data_seg_ste gta_wqe_data1 = {0};
 	struct mlx5dr_wqe_gta_ctrl_seg gta_wqe_ctrl = {0};
 	struct mlx5dr_cmd_generate_wqe_attr attr = {0};
 	struct mlx5dr_wqe_ctrl_seg wqe_ctrl = {0};
@@ -278,6 +282,17 @@ int mlx5dr_send_wqe_fw(struct ibv_context *ibv_ctx,
 		attr.gta_data_0 = (uint8_t *)&gta_wqe_data0;
 	}
 
+	/* Set GTA range WQE DATA */
+	if (has_range) {
+		if (send_wqe_range_data)
+			memcpy(&gta_wqe_data1, send_wqe_range_data, sizeof(gta_wqe_data1));
+		else
+			mlx5dr_send_wqe_set_tag(&gta_wqe_data1, send_wqe_range_tag, false);
+
+		gta_wqe_data1.rsvd1_definer = htobe32(send_attr->range_definer_id << 8);
+		attr.gta_data_1 = (uint8_t *)&gta_wqe_data1;
+	}
+
 	attr.pdn = pd_num;
 	attr.wqe_ctrl = (uint8_t *)&wqe_ctrl;
 	attr.gta_ctrl = (uint8_t *)&gta_wqe_ctrl;
@@ -336,6 +351,8 @@ void mlx5dr_send_stes_fw(struct mlx5dr_send_engine *queue,
 					 ste_attr->wqe_ctrl,
 					 ste_attr->wqe_data,
 					 ste_attr->wqe_tag,
+					 ste_attr->range_wqe_data,
+					 ste_attr->range_wqe_tag,
 					 ste_attr->wqe_tag_is_jumbo,
 					 ste_attr->gta_opcode);
 		if (ret)
@@ -350,6 +367,8 @@ void mlx5dr_send_stes_fw(struct mlx5dr_send_engine *queue,
 					 ste_attr->wqe_ctrl,
 					 ste_attr->wqe_data,
 					 ste_attr->wqe_tag,
+					 ste_attr->range_wqe_data,
+					 ste_attr->range_wqe_tag,
 					 ste_attr->wqe_tag_is_jumbo,
 					 ste_attr->gta_opcode);
 		if (ret)
diff --git a/drivers/net/mlx5/hws/mlx5dr_send.h b/drivers/net/mlx5/hws/mlx5dr_send.h
index 1e845b1c7a..47bb66b3c7 100644
--- a/drivers/net/mlx5/hws/mlx5dr_send.h
+++ b/drivers/net/mlx5/hws/mlx5dr_send.h
@@ -161,6 +161,7 @@ struct mlx5dr_send_engine_post_attr {
 	uint8_t notify_hw;
 	uint8_t fence;
 	uint8_t match_definer_id;
+	uint8_t range_definer_id;
 	size_t len;
 	struct mlx5dr_rule *rule;
 	uint32_t id;
@@ -182,8 +183,10 @@ struct mlx5dr_send_ste_attr {
 	uint32_t direct_index;
 	struct mlx5dr_send_engine_post_attr send_attr;
 	struct mlx5dr_rule_match_tag *wqe_tag;
+	struct mlx5dr_rule_match_tag *range_wqe_tag;
 	struct mlx5dr_wqe_gta_ctrl_seg *wqe_ctrl;
 	struct mlx5dr_wqe_gta_data_seg_ste *wqe_data;
+	struct mlx5dr_wqe_gta_data_seg_ste *range_wqe_data;
 };
 
 /**
-- 
2.18.1


Thread overview: 36+ messages
2023-01-31  9:33 [v1 00/16] net/mlx5/hws: support range and partial hash matching Alex Vesker
2023-01-31  9:33 ` [v1 01/16] net/mlx5/hws: support synchronous drain Alex Vesker
2023-01-31  9:33 ` [v1 02/16] net/mlx5/hws: matcher remove AT and MT limitation Alex Vesker
2023-01-31  9:33 ` [v1 03/16] net/mlx5/hws: support GTA WQE write using FW command Alex Vesker
2023-01-31  9:33 ` [v1 04/16] net/mlx5/hws: add capability query for gen wqe command Alex Vesker
2023-01-31  9:33 ` [v1 05/16] net/mlx5/hws: align RTC create command with PRM format Alex Vesker
2023-01-31  9:33 ` [v1 06/16] net/mlx5/hws: add send FW match STE using gen WQE Alex Vesker
2023-01-31  9:33 ` [v1 07/16] net/mlx5/hws: add send FW range STE WQE Alex Vesker [this message]
2023-01-31  9:33 ` [v1 08/16] net/mlx5/hws: move matcher size check to function Alex Vesker
2023-01-31  9:33 ` [v1 09/16] net/mlx5/hws: support range match Alex Vesker
2023-01-31  9:33 ` [v1 10/16] net/mlx5/hws: redesign definer create Alex Vesker
2023-01-31  9:33 ` [v1 11/16] net/mlx5/hws: support partial hash Alex Vesker
2023-01-31  9:33 ` [v1 12/16] net/mlx5/hws: add range definer creation support Alex Vesker
2023-01-31  9:33 ` [v1 13/16] net/mlx5/hws: add FW WQE rule creation logic Alex Vesker
2023-01-31  9:33 ` [v1 14/16] net/mlx5/hws: add debug dump support for range and hash Alex Vesker
2023-01-31  9:33 ` [v1 15/16] net/mlx5/hws: rename pattern cache object Alex Vesker
2023-01-31  9:33 ` [v1 16/16] net/mlx5/hws: cache definer for reuse Alex Vesker
2023-02-01  7:27 ` [v2 00/16] net/mlx5/hws: support range and partial hash matching Alex Vesker
2023-02-01  7:28   ` [v2 01/16] net/mlx5/hws: support synchronous drain Alex Vesker
2023-02-01  7:28   ` [v2 02/16] net/mlx5/hws: matcher remove AT and MT limitation Alex Vesker
2023-02-01  7:28   ` [v2 03/16] net/mlx5/hws: support GTA WQE write using FW command Alex Vesker
2023-02-01  7:28   ` [v2 04/16] net/mlx5/hws: add capability query for gen wqe command Alex Vesker
2023-02-01  7:28   ` [v2 05/16] net/mlx5/hws: align RTC create command with PRM format Alex Vesker
2023-02-01  7:28   ` [v2 06/16] net/mlx5/hws: add send FW match STE using gen WQE Alex Vesker
2023-02-01  7:28   ` [v2 07/16] net/mlx5/hws: add send FW range STE WQE Alex Vesker
2023-02-01  7:28   ` [v2 08/16] net/mlx5/hws: move matcher size check to function Alex Vesker
2023-02-01  7:28   ` [v2 09/16] net/mlx5/hws: support range match Alex Vesker
2023-02-01  7:28   ` [v2 10/16] net/mlx5/hws: redesign definer create Alex Vesker
2023-02-01  7:28   ` [v2 11/16] net/mlx5/hws: support partial hash Alex Vesker
2023-02-01  7:28   ` [v2 12/16] net/mlx5/hws: add range definer creation support Alex Vesker
2023-02-01  7:28   ` [v2 13/16] net/mlx5/hws: add FW WQE rule creation logic Alex Vesker
2023-02-01  7:28   ` [v2 14/16] net/mlx5/hws: add debug dump support for range and hash Alex Vesker
2023-02-01  7:28   ` [v2 15/16] net/mlx5/hws: rename pattern cache object Alex Vesker
2023-02-01  7:28   ` [v2 16/16] net/mlx5/hws: cache definer for reuse Alex Vesker
2023-02-06 15:07   ` [v2 00/16] net/mlx5/hws: support range and partial hash matching Matan Azrad
2023-02-13  8:27   ` Raslan Darawsheh
