patches for DPDK stable branches
From: Ophir Munk <ophirmu@nvidia.com>
To: dev@dpdk.org, Raslan Darawsheh <rasland@nvidia.com>
Cc: Ophir Munk <ophirmu@nvidia.com>, Matan Azrad <matan@nvidia.com>,
	Tal Shnaiderman <talshn@nvidia.com>,
	Thomas Monjalon <thomas@monjalon.net>,
	stable@dpdk.org
Subject: [dpdk-stable] [PATCH v1 02/72] net/mlx5: fix flow sample definitions
Date: Tue, 27 Oct 2020 23:22:25 +0000
Message-ID: <20201027232335.31427-3-ophirmu@nvidia.com>
In-Reply-To: <20201027232335.31427-1-ophirmu@nvidia.com>

Flow sampling depends on rdma-core support. The definitions that
enable the sampling code are HAVE_MLX5_DR_CREATE_ACTION_FLOW_SAMPLE and
HAVE_MLX5_DR_CREATE_ACTION_DEST_ARRAY. This commit extends these
definitions to cover more functions that use sampling logic and structs:
flow_dv_sample_resource_register, flow_dv_dest_array_resource_register,
flow_dv_sample_resource_release and flow_dv_dest_array_resource_release.
As a result, a system without the required rdma-core support will
neither compile nor execute the redundant sampling code.
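
For illustration, the guard pattern applied to each of these functions
looks roughly as follows. This is a minimal standalone sketch, not the
exact driver code: the function name and its two parameters are
hypothetical stand-ins, while RTE_SET_USED, rte_errno and the HAVE_*
macro are the real DPDK and build-system definitions.

#include <errno.h>       /* ENOTSUP */
#include <rte_common.h>  /* RTE_SET_USED */
#include <rte_errno.h>   /* rte_errno */

int
sample_register(void *dev, void *resource)
{
#ifdef HAVE_MLX5_DR_CREATE_ACTION_FLOW_SAMPLE
	/* rdma-core dependent registration logic lives here;
	 * elided in this sketch.
	 */
	return 0;
#else
	/* Stub: consume the parameters to silence unused-parameter
	 * warnings, then report the missing capability.
	 */
	RTE_SET_USED(dev);
	RTE_SET_USED(resource);
	rte_errno = ENOTSUP;
	return -ENOTSUP;
#endif
}

The HAVE_* macro is defined by the build system only when the installed
rdma-core exposes the corresponding sampling API, so on older rdma-core
releases the stub is the only code that is compiled and callers get
-ENOTSUP instead of references to unavailable mlx5dv_dr structs.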

Fixes: eb7368b0109a ("net/mlx5: update translate function for sample action")
Fixes: e8a1d23ae9a8 ("net/mlx5: update translate function for mirror")
Cc: stable@dpdk.org

Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow_dv.c | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index dafe07f..2560559 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -8629,6 +8629,7 @@ flow_dv_sample_resource_register(struct rte_eth_dev *dev,
 			 void **sample_dv_actions,
 			 struct rte_flow_error *error)
 {
+#ifdef HAVE_MLX5_DR_CREATE_ACTION_FLOW_SAMPLE
 	struct mlx5_flow_dv_sample_resource *cache_resource;
 	struct mlx5dv_dr_flow_sampler_attr sampler_attr;
 	struct mlx5_priv *priv = dev->data->dev_private;
@@ -8746,6 +8747,17 @@ flow_dv_sample_resource_register(struct rte_eth_dev *dev,
 				dev_flow->handle->dvh.rix_sample);
 	dev_flow->handle->dvh.rix_sample = 0;
 	return -rte_errno;
+#else
+	RTE_SET_USED(dev);
+	RTE_SET_USED(attr);
+	RTE_SET_USED(resource);
+	RTE_SET_USED(dev_flow);
+	RTE_SET_USED(sample_dv_actions);
+	RTE_SET_USED(error);
+	DRV_LOG(ERR, "Sample resource registration is not supported.");
+	rte_errno = ENOTSUP;
+	return -ENOTSUP;
+#endif /* HAVE_MLX5_DR_CREATE_ACTION_FLOW_SAMPLE */
 }
 
 /**
@@ -8772,6 +8784,7 @@ flow_dv_dest_array_resource_register(struct rte_eth_dev *dev,
 			 struct mlx5_flow *dev_flow,
 			 struct rte_flow_error *error)
 {
+#ifdef HAVE_MLX5_DR_CREATE_ACTION_DEST_ARRAY
 	struct mlx5_flow_dv_dest_array_resource *cache_resource;
 	struct mlx5dv_dr_action_dest_attr *dest_attr[MLX5_MAX_DEST_NUM] = { 0 };
 	struct mlx5dv_dr_action_dest_reformat dest_reformat[MLX5_MAX_DEST_NUM];
@@ -8894,6 +8907,16 @@ flow_dv_dest_array_resource_register(struct rte_eth_dev *dev,
 				dev_flow->handle->dvh.rix_dest_array);
 	dev_flow->handle->dvh.rix_dest_array = 0;
 	return -rte_errno;
+#else
+	RTE_SET_USED(dev);
+	RTE_SET_USED(attr);
+	RTE_SET_USED(resource);
+	RTE_SET_USED(dev_flow);
+	RTE_SET_USED(error);
+	DRV_LOG(ERR, "Dest array resource registration is not supported.");
+	rte_errno = ENOTSUP;
+	return -ENOTSUP;
+#endif /* HAVE_MLX5_DR_CREATE_ACTION_DEST_ARRAY */
 }
 
 /**
@@ -10758,6 +10781,7 @@ static int
 flow_dv_sample_resource_release(struct rte_eth_dev *dev,
 				     struct mlx5_flow_handle *handle)
 {
+#ifdef HAVE_MLX5_DR_CREATE_ACTION_FLOW_SAMPLE
 	struct mlx5_priv *priv = dev->data->dev_private;
 	uint32_t idx = handle->dvh.rix_sample;
 	struct mlx5_flow_dv_sample_resource *cache_resource;
@@ -10807,6 +10831,12 @@ flow_dv_sample_resource_release(struct rte_eth_dev *dev,
 		return 0;
 	}
 	return 1;
+#else
+	RTE_SET_USED(dev);
+	RTE_SET_USED(handle);
+	DRV_LOG(ERR, "Sample resource release is not supported.");
+	return 0;
+#endif /* HAVE_MLX5_DR_CREATE_ACTION_FLOW_SAMPLE */
 }
 
 /**
@@ -10824,6 +10854,7 @@ static int
 flow_dv_dest_array_resource_release(struct rte_eth_dev *dev,
 				     struct mlx5_flow_handle *handle)
 {
+#ifdef HAVE_MLX5_DR_CREATE_ACTION_DEST_ARRAY
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_flow_dv_dest_array_resource *cache_resource;
 	struct mlx5_flow_sub_actions_idx *mdest_act_res;
@@ -10875,6 +10906,12 @@ flow_dv_dest_array_resource_release(struct rte_eth_dev *dev,
 		return 0;
 	}
 	return 1;
+#else
+	RTE_SET_USED(dev);
+	RTE_SET_USED(handle);
+	DRV_LOG(ERR, "Dest array resource release is not supported.");
+	return 0;
+#endif /* HAVE_MLX5_DR_CREATE_ACTION_DEST_ARRAY */
 }
 
 /**
-- 
2.8.4

