DPDK patches and discussions
From: Ophir Munk <ophirmu@nvidia.com>
To: Dariusz Sosnowski <dsosnowski@nvidia.com>,
	Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
	Bing Zhao <bingz@nvidia.com>, Ori Kam <orika@nvidia.com>,
	Suanming Mou <suanmingm@nvidia.com>,
	Matan Azrad <matan@nvidia.com>
Cc: <dev@dpdk.org>, Raslan Darawsheh <rasland@nvidia.com>
Subject: [PATCH V2 1/4] common/mlx5: support FDB unified capability query
Date: Wed, 26 Feb 2025 10:38:43 +0200
Message-ID: <20250226083846.4023622-2-ophirmu@nvidia.com>
In-Reply-To: <20250226083846.4023622-1-ophirmu@nvidia.com>

This commit queries the firmware (FW) for the new unified FDB mode
capability and stores the result in the mlx5 shared device context as
the fdb_unified_en bit.
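
For illustration, the capability lands as a single-bit field in
struct mlx5_hca_attr (see the header diff below), which callers can
then branch on. Below is a minimal standalone sketch of this
store-and-check pattern; only fdb_unified_en comes from this patch,
the stand-in struct and everything else here are hypothetical:

	#include <stdint.h>
	#include <stdio.h>

	/* Trimmed-down stand-in for struct mlx5_hca_attr: HCA
	 * capabilities are kept as single-bit fields, as in the
	 * header change below. */
	struct hca_attr_sketch {
		uint32_t wqe_based_flow_table_sup:1;
		uint32_t fdb_unified_en:1; /* bit added by this patch */
	};

	int main(void)
	{
		struct hca_attr_sketch attr = {0};

		/* In the driver, mlx5_devx_cmd_query_hca_attr() fills
		 * the bit from the FW response via MLX5_GET(); here we
		 * set it by hand. */
		attr.fdb_unified_en = 1;

		/* A consumer (e.g. net/mlx5 in patch 2/4 of this
		 * series) would gate unified FDB handling on the
		 * queried bit. */
		if (attr.fdb_unified_en)
			printf("FW reports unified FDB mode\n");
		return 0;
	}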

Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
---
 drivers/common/mlx5/mlx5_devx_cmds.c | 3 +++
 drivers/common/mlx5/mlx5_devx_cmds.h | 1 +
 2 files changed, 4 insertions(+)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index bba00a9..f504b29 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -1349,6 +1349,9 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
 		attr->max_header_modify_pattern_length = MLX5_GET(wqe_based_flow_table_cap,
 								  hcattr,
 								  max_header_modify_pattern_length);
+		attr->fdb_unified_en = MLX5_GET(wqe_based_flow_table_cap,
+						hcattr,
+						fdb_unified_en);
 	}
 	/* Query HCA attribute for ROCE. */
 	if (attr->roce) {
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index 38548b4..8de4210 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -325,6 +325,7 @@ struct mlx5_hca_attr {
 	uint32_t cross_vhca:1;
 	uint32_t lag_rx_port_affinity:1;
 	uint32_t wqe_based_flow_table_sup:1;
+	uint32_t fdb_unified_en:1;
 	uint8_t max_header_modify_pattern_length;
 	uint64_t system_image_guid;
 	uint32_t log_max_conn_track_offload:5;
-- 
2.8.4



Thread overview: 10+ messages
     [not found] <20250225120213.2968616-0-ophirmu@nvidia.com>
2025-02-26  8:38 ` [PATCH V2 0/4] mlx5 unified fdb Ophir Munk
2025-02-26  8:38   ` Ophir Munk [this message]
2025-02-26  9:12     ` [PATCH V2 1/4] common/mlx5: support FDB unified capability query Dariusz Sosnowski
2025-02-26  8:38   ` [PATCH V2 2/4] net/mlx5: support FDB unified domain Ophir Munk
2025-02-26  9:13     ` Dariusz Sosnowski
2025-02-26  8:38   ` [PATCH V2 3/4] net/mlx5: remove unneeded FDB flag on representor action Ophir Munk
2025-02-26  9:13     ` Dariusz Sosnowski
2025-02-26  8:38   ` [PATCH V2 4/4] net/mlx5/hws: allow different types in miss validation Ophir Munk
2025-02-26  9:13     ` Dariusz Sosnowski
2025-02-26 16:12   ` [PATCH V2 0/4] mlx5 unified fdb Raslan Darawsheh
