* [dpdk-dev] [PATCH 2/2] app/testpmd: fix tunnel offload private items location
2021-04-19 13:02 [dpdk-dev] [PATCH 1/2] net/mlx5: fix tunnel offload private items location Gregory Etelson
@ 2021-04-19 13:02 ` Gregory Etelson
2021-04-23 8:50 ` [dpdk-dev] [dpdk-stable] " Ferruh Yigit
2021-04-25 15:57 ` [dpdk-dev] [PATCH v2 1/2] net/mlx5: " Gregory Etelson
` (2 subsequent siblings)
3 siblings, 1 reply; 27+ messages in thread
From: Gregory Etelson @ 2021-04-19 13:02 UTC
To: dev; +Cc: getelson, matan, rasland, stable, Viacheslav Ovsiienko, Xiaoyun Li
Flow rules used in the tunnel offload model require the application to query
the PMD for private flow elements and explicitly add these elements to the
flow rule.
The tunnel offload model does not restrict the location of private elements
in a flow rule.
This patch places the tunnel offload private PMD flow elements between
general RTE flow elements in a rule.
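For example (an illustrative sketch, not code from the patch; pmd_actions
is assumed to be the array obtained from rte_flow_tunnel_decap_set() and
jump_conf an application-defined jump target), the array prepared by
testpmd now looks like:

	/* index 0 is a general RTE element, so the PMD private
	 * action is no longer the first entry */
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_VOID },
		pmd_actions[0],
		{ .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump_conf },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};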
Fixes: 1b9f274623b8 ("app/testpmd: add commands for tunnel offload")
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
app/test-pmd/config.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index ef0b9784d..da5e843fd 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1663,7 +1663,7 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
aptr->type != RTE_FLOW_ACTION_TYPE_END;
aptr++, num_actions++);
pft->actions = malloc(
- (num_actions + pft->num_pmd_actions) *
+ (num_actions + pft->num_pmd_actions + 1) *
sizeof(actions[0]));
if (!pft->actions) {
rte_flow_tunnel_action_decap_release(
@@ -1671,9 +1671,10 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
pft->num_pmd_actions, &error);
return NULL;
}
- rte_memcpy(pft->actions, pft->pmd_actions,
+ pft->actions[0].type = RTE_FLOW_ACTION_TYPE_VOID;
+ rte_memcpy(pft->actions + 1, pft->pmd_actions,
pft->num_pmd_actions * sizeof(actions[0]));
- rte_memcpy(pft->actions + pft->num_pmd_actions, actions,
+ rte_memcpy(pft->actions + pft->num_pmd_actions + 1, actions,
num_actions * sizeof(actions[0]));
}
if (tunnel_ops->items) {
@@ -1691,7 +1692,7 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
for (iptr = pattern, num_items = 1;
iptr->type != RTE_FLOW_ITEM_TYPE_END;
iptr++, num_items++);
- pft->items = malloc((num_items + pft->num_pmd_items) *
+ pft->items = malloc((num_items + pft->num_pmd_items + 1) *
sizeof(pattern[0]));
if (!pft->items) {
rte_flow_tunnel_item_release(
@@ -1699,9 +1700,10 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
pft->num_pmd_items, &error);
return NULL;
}
- rte_memcpy(pft->items, pft->pmd_items,
+ pft->items[0].type = RTE_FLOW_ITEM_TYPE_VOID;
+ rte_memcpy(pft->items + 1, pft->pmd_items,
pft->num_pmd_items * sizeof(pattern[0]));
- rte_memcpy(pft->items + pft->num_pmd_items, pattern,
+ rte_memcpy(pft->items + pft->num_pmd_items + 1, pattern,
num_items * sizeof(pattern[0]));
}
--
2.25.1
* Re: [dpdk-dev] [dpdk-stable] [PATCH 2/2] app/testpmd: fix tunnel offload private items location
2021-04-19 13:02 ` [dpdk-dev] [PATCH 2/2] app/testpmd: " Gregory Etelson
@ 2021-04-23 8:50 ` Ferruh Yigit
0 siblings, 0 replies; 27+ messages in thread
From: Ferruh Yigit @ 2021-04-23 8:50 UTC
To: Gregory Etelson, dev
Cc: matan, rasland, stable, Viacheslav Ovsiienko, Xiaoyun Li
On 4/19/2021 2:02 PM, Gregory Etelson wrote:
> Flow rules used in the tunnel offload model require the application to query
> the PMD for private flow elements and explicitly add these elements to the
> flow rule.
Hi Gregory,
What is a "private flow element"?
And can you please detail what is fixed with this patch - what was not working
and is now working?
> The tunnel offload model does not restrict the location of private elements
> in a flow rule.
> This patch places the tunnel offload private PMD flow elements between
> general RTE flow elements in a rule.
>
> Fixes: 1b9f274623b8 ("app/testpmd: add commands for tunnel offload")
>
> Cc: stable@dpdk.org
>
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
> ---
> app/test-pmd/config.c | 14 ++++++++------
> 1 file changed, 8 insertions(+), 6 deletions(-)
>
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index ef0b9784d..da5e843fd 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -1663,7 +1663,7 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
> aptr->type != RTE_FLOW_ACTION_TYPE_END;
> aptr++, num_actions++);
> pft->actions = malloc(
> - (num_actions + pft->num_pmd_actions) *
> + (num_actions + pft->num_pmd_actions + 1) *
> sizeof(actions[0]));
> if (!pft->actions) {
> rte_flow_tunnel_action_decap_release(
> @@ -1671,9 +1671,10 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
> pft->num_pmd_actions, &error);
> return NULL;
> }
> - rte_memcpy(pft->actions, pft->pmd_actions,
> + pft->actions[0].type = RTE_FLOW_ACTION_TYPE_VOID;
> + rte_memcpy(pft->actions + 1, pft->pmd_actions,
> pft->num_pmd_actions * sizeof(actions[0]));
> - rte_memcpy(pft->actions + pft->num_pmd_actions, actions,
> + rte_memcpy(pft->actions + pft->num_pmd_actions + 1, actions,
> num_actions * sizeof(actions[0]));
> }
> if (tunnel_ops->items) {
> @@ -1691,7 +1692,7 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
> for (iptr = pattern, num_items = 1;
> iptr->type != RTE_FLOW_ITEM_TYPE_END;
> iptr++, num_items++);
> - pft->items = malloc((num_items + pft->num_pmd_items) *
> + pft->items = malloc((num_items + pft->num_pmd_items + 1) *
> sizeof(pattern[0]));
> if (!pft->items) {
> rte_flow_tunnel_item_release(
> @@ -1699,9 +1700,10 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
> pft->num_pmd_items, &error);
> return NULL;
> }
> - rte_memcpy(pft->items, pft->pmd_items,
> + pft->items[0].type = RTE_FLOW_ITEM_TYPE_VOID;
> + rte_memcpy(pft->items + 1, pft->pmd_items,
> pft->num_pmd_items * sizeof(pattern[0]));
> - rte_memcpy(pft->items + pft->num_pmd_items, pattern,
> + rte_memcpy(pft->items + pft->num_pmd_items + 1, pattern,
> num_items * sizeof(pattern[0]));
> }
>
>
* [dpdk-dev] [PATCH v2 1/2] net/mlx5: fix tunnel offload private items location
2021-04-19 13:02 [dpdk-dev] [PATCH 1/2] net/mlx5: fix tunnel offload private items location Gregory Etelson
2021-04-19 13:02 ` [dpdk-dev] [PATCH 2/2] app/testpmd: " Gregory Etelson
@ 2021-04-25 15:57 ` Gregory Etelson
2021-04-25 15:57 ` [dpdk-dev] [PATCH v2 2/2] app/testpmd: " Gregory Etelson
2021-04-25 16:28 ` [dpdk-dev] [PATCH v2 1/2] net/mlx5: " Thomas Monjalon
2021-05-02 8:08 ` [dpdk-dev] [PATCH v3] " Gregory Etelson
2021-05-06 9:57 ` [dpdk-dev] [PATCH v4] net/mlx5: fix tunnel offload private items location Gregory Etelson
3 siblings, 2 replies; 27+ messages in thread
From: Gregory Etelson @ 2021-04-25 15:57 UTC
To: dev
Cc: getelson, matan, rasland, ferruh.yigit, stable,
Viacheslav Ovsiienko, Shahaf Shuler
The tunnel offload API requires the application to query the PMD for specific
flow items and actions. The application uses these PMD-specific elements to
build flow rules according to the tunnel offload model.
The model does not restrict the location of private elements in a flow rule,
but the current MLX5 PMD implementation expects that a tunnel offload
rule will begin with the PMD-specific elements.
This patch removes that placement limitation in the MLX5 PMD.
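As an illustration, a hypothetical match rule that the fixed PMD now
recognizes (pmd_items is assumed to be the array obtained from
rte_flow_tunnel_match()); the private tunnel item is no longer required
to be at index 0:

	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_VOID },
		pmd_items[0],	/* PMD private tunnel item */
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};

Before this change the driver classified a rule by inspecting only
items[0] and actions[0], so such a rule was not detected as a tunnel
offload rule.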
Cc: stable@dpdk.org
Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/net/mlx5/mlx5_flow.c | 48 ++++++++++++++++++---------
drivers/net/mlx5/mlx5_flow.h | 44 ++++++++++++++-----------
drivers/net/mlx5/mlx5_flow_dv.c | 58 +++++++++++++++++++--------------
3 files changed, 90 insertions(+), 60 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 84463074a5..fcc82ce9d4 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -51,6 +51,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_action *app_actions,
uint32_t flow_idx,
+ const struct mlx5_flow_tunnel *tunnel,
struct tunnel_default_miss_ctx *ctx,
struct rte_flow_error *error);
static struct mlx5_flow_tunnel *
@@ -5463,22 +5464,14 @@ flow_create_split_outer(struct rte_eth_dev *dev,
return ret;
}
-static struct mlx5_flow_tunnel *
-flow_tunnel_from_rule(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item items[],
- const struct rte_flow_action actions[])
+static inline struct mlx5_flow_tunnel *
+flow_tunnel_from_rule(const struct mlx5_flow *flow)
{
struct mlx5_flow_tunnel *tunnel;
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wcast-qual"
- if (is_flow_tunnel_match_rule(dev, attr, items, actions))
- tunnel = (struct mlx5_flow_tunnel *)items[0].spec;
- else if (is_flow_tunnel_steer_rule(dev, attr, items, actions))
- tunnel = (struct mlx5_flow_tunnel *)actions[0].conf;
- else
- tunnel = NULL;
+ tunnel = (typeof(tunnel))flow->tunnel;
#pragma GCC diagnostic pop
return tunnel;
@@ -5672,12 +5665,11 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
error);
if (ret < 0)
goto error;
- if (is_flow_tunnel_steer_rule(dev, attr,
- buf->entry[i].pattern,
- p_actions_rx)) {
+ if (is_flow_tunnel_steer_rule(wks->flows[0].tof_type)) {
ret = flow_tunnel_add_default_miss(dev, flow, attr,
p_actions_rx,
idx,
+ wks->flows[0].tunnel,
&default_miss_ctx,
error);
if (ret < 0) {
@@ -5741,7 +5733,7 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
}
flow_rxq_flags_set(dev, flow);
rte_free(translated_actions);
- tunnel = flow_tunnel_from_rule(dev, attr, items, actions);
+ tunnel = flow_tunnel_from_rule(wks->flows);
if (tunnel) {
flow->tunnel = 1;
flow->tunnel_id = tunnel->tunnel_id;
@@ -7459,6 +7451,28 @@ int rte_pmd_mlx5_sync_flow(uint16_t port_id, uint32_t domains)
return ret;
}
+const struct mlx5_flow_tunnel *
+mlx5_get_tof(const struct rte_flow_item *item,
+ const struct rte_flow_action *action,
+ enum mlx5_tof_rule_type *rule_type)
+{
+ for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
+ if (item->type == (typeof(item->type))
+ MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL) {
+ *rule_type = MLX5_TUNNEL_OFFLOAD_MATCH_RULE;
+ return flow_items_to_tunnel(item);
+ }
+ }
+ for (; action->type != RTE_FLOW_ACTION_TYPE_END; action++) {
+ if (action->type == (typeof(action->type))
+ MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET) {
+ *rule_type = MLX5_TUNNEL_OFFLOAD_SET_RULE;
+ return flow_actions_to_tunnel(action);
+ }
+ }
+ return NULL;
+}
+
/**
* tunnel offload functionality is defined for DV environment only
*/
@@ -7489,13 +7503,13 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_action *app_actions,
uint32_t flow_idx,
+ const struct mlx5_flow_tunnel *tunnel,
struct tunnel_default_miss_ctx *ctx,
struct rte_flow_error *error)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_flow *dev_flow;
struct rte_flow_attr miss_attr = *attr;
- const struct mlx5_flow_tunnel *tunnel = app_actions[0].conf;
const struct rte_flow_item miss_items[2] = {
{
.type = RTE_FLOW_ITEM_TYPE_ETH,
@@ -7581,6 +7595,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
dev_flow->flow = flow;
dev_flow->external = true;
dev_flow->tunnel = tunnel;
+ dev_flow->tof_type = MLX5_TUNNEL_OFFLOAD_MISS_RULE;
/* Subflow object was created, we must include one in the list. */
SILIST_INSERT(&flow->dev_handles, dev_flow->handle_idx,
dev_flow->handle, next);
@@ -8192,6 +8207,7 @@ flow_tunnel_add_default_miss(__rte_unused struct rte_eth_dev *dev,
__rte_unused const struct rte_flow_attr *attr,
__rte_unused const struct rte_flow_action *actions,
__rte_unused uint32_t flow_idx,
+ __rte_unused const struct mlx5_flow_tunnel *tunnel,
__rte_unused struct tunnel_default_miss_ctx *ctx,
__rte_unused struct rte_flow_error *error)
{
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index ec673c29ab..61f40adc25 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -783,6 +783,16 @@ struct mlx5_flow_verbs_workspace {
/** Maximal number of device sub-flows supported. */
#define MLX5_NUM_MAX_DEV_FLOWS 32
+/**
+ * tunnel offload rules type
+ */
+enum mlx5_tof_rule_type {
+ MLX5_TUNNEL_OFFLOAD_NONE = 0,
+ MLX5_TUNNEL_OFFLOAD_SET_RULE,
+ MLX5_TUNNEL_OFFLOAD_MATCH_RULE,
+ MLX5_TUNNEL_OFFLOAD_MISS_RULE,
+};
+
/** Device flow structure. */
__extension__
struct mlx5_flow {
@@ -818,6 +828,7 @@ struct mlx5_flow {
struct mlx5_flow_handle *handle;
uint32_t handle_idx; /* Index of the mlx5 flow handle memory. */
const struct mlx5_flow_tunnel *tunnel;
+ enum mlx5_tof_rule_type tof_type;
};
/* Flow meter state. */
@@ -1029,10 +1040,10 @@ mlx5_tunnel_hub(struct rte_eth_dev *dev)
}
static inline bool
-is_tunnel_offload_active(struct rte_eth_dev *dev)
+is_tunnel_offload_active(const struct rte_eth_dev *dev)
{
#ifdef HAVE_IBV_FLOW_DV_SUPPORT
- struct mlx5_priv *priv = dev->data->dev_private;
+ const struct mlx5_priv *priv = dev->data->dev_private;
return !!priv->config.dv_miss_info;
#else
RTE_SET_USED(dev);
@@ -1041,23 +1052,15 @@ is_tunnel_offload_active(struct rte_eth_dev *dev)
}
static inline bool
-is_flow_tunnel_match_rule(__rte_unused struct rte_eth_dev *dev,
- __rte_unused const struct rte_flow_attr *attr,
- __rte_unused const struct rte_flow_item items[],
- __rte_unused const struct rte_flow_action actions[])
+is_flow_tunnel_match_rule(enum mlx5_tof_rule_type tof_rule_type)
{
- return (items[0].type == (typeof(items[0].type))
- MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL);
+ return tof_rule_type == MLX5_TUNNEL_OFFLOAD_MATCH_RULE;
}
static inline bool
-is_flow_tunnel_steer_rule(__rte_unused struct rte_eth_dev *dev,
- __rte_unused const struct rte_flow_attr *attr,
- __rte_unused const struct rte_flow_item items[],
- __rte_unused const struct rte_flow_action actions[])
+is_flow_tunnel_steer_rule(enum mlx5_tof_rule_type tof_rule_type)
{
- return (actions[0].type == (typeof(actions[0].type))
- MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET);
+ return tof_rule_type == MLX5_TUNNEL_OFFLOAD_SET_RULE;
}
static inline const struct mlx5_flow_tunnel *
@@ -1299,11 +1302,10 @@ struct flow_grp_info {
static inline bool
tunnel_use_standard_attr_group_translate
- (struct rte_eth_dev *dev,
- const struct mlx5_flow_tunnel *tunnel,
+ (const struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
- const struct rte_flow_item items[],
- const struct rte_flow_action actions[])
+ const struct mlx5_flow_tunnel *tunnel,
+ enum mlx5_tof_rule_type tof_rule_type)
{
bool verdict;
@@ -1319,7 +1321,7 @@ tunnel_use_standard_attr_group_translate
* method
*/
verdict = !attr->group &&
- is_flow_tunnel_steer_rule(dev, attr, items, actions);
+ is_flow_tunnel_steer_rule(tof_rule_type);
} else {
/*
* non-tunnel group translation uses standard method for
@@ -1580,6 +1582,10 @@ int mlx5_flow_os_init_workspace_once(void);
void *mlx5_flow_os_get_specific_workspace(void);
int mlx5_flow_os_set_specific_workspace(struct mlx5_flow_workspace *data);
void mlx5_flow_os_release_workspace(void);
+const struct mlx5_flow_tunnel *
+mlx5_get_tof(const struct rte_flow_item *items,
+ const struct rte_flow_action *actions,
+ enum mlx5_tof_rule_type *rule_type);
#endif /* RTE_PMD_MLX5_FLOW_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index e65cc13bd6..3b16f75743 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -6100,32 +6100,33 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
uint32_t rw_act_num = 0;
uint64_t is_root;
const struct mlx5_flow_tunnel *tunnel;
+ enum mlx5_tof_rule_type tof_rule_type;
struct flow_grp_info grp_info = {
.external = !!external,
.transfer = !!attr->transfer,
.fdb_def_rule = !!priv->fdb_def_rule,
+ .std_tbl_fix = true,
};
const struct rte_eth_hairpin_conf *conf;
if (items == NULL)
return -1;
- if (is_flow_tunnel_match_rule(dev, attr, items, actions)) {
- tunnel = flow_items_to_tunnel(items);
- action_flags |= MLX5_FLOW_ACTION_TUNNEL_MATCH |
- MLX5_FLOW_ACTION_DECAP;
- } else if (is_flow_tunnel_steer_rule(dev, attr, items, actions)) {
- tunnel = flow_actions_to_tunnel(actions);
- action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET;
- } else {
- tunnel = NULL;
+ tunnel = is_tunnel_offload_active(dev) ?
+ mlx5_get_tof(items, actions, &tof_rule_type) : NULL;
+ if (tunnel) {
+ if (priv->representor)
+ return rte_flow_error_set
+ (error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL, "decap not supported for VF representor");
+ if (tof_rule_type == MLX5_TUNNEL_OFFLOAD_SET_RULE)
+ action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET;
+ else if (tof_rule_type == MLX5_TUNNEL_OFFLOAD_MATCH_RULE)
+ action_flags |= MLX5_FLOW_ACTION_TUNNEL_MATCH |
+ MLX5_FLOW_ACTION_DECAP;
+ grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate
+ (dev, attr, tunnel, tof_rule_type);
}
- if (tunnel && priv->representor)
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "decap not supported "
- "for VF representor");
- grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate
- (dev, tunnel, attr, items, actions);
ret = flow_dv_validate_attributes(dev, tunnel, attr, &grp_info, error);
if (ret < 0)
return ret;
@@ -10909,13 +10910,14 @@ flow_dv_translate(struct rte_eth_dev *dev,
int tmp_actions_n = 0;
uint32_t table;
int ret = 0;
- const struct mlx5_flow_tunnel *tunnel;
+ const struct mlx5_flow_tunnel *tunnel = NULL;
struct flow_grp_info grp_info = {
.external = !!dev_flow->external,
.transfer = !!attr->transfer,
.fdb_def_rule = !!priv->fdb_def_rule,
.skip_scale = dev_flow->skip_scale &
(1 << MLX5_SCALE_FLOW_GROUP_BIT),
+ .std_tbl_fix = true,
};
if (!wks)
@@ -10930,15 +10932,21 @@ flow_dv_translate(struct rte_eth_dev *dev,
MLX5DV_FLOW_TABLE_TYPE_NIC_RX;
/* update normal path action resource into last index of array */
sample_act = &mdest_res.sample_act[MLX5_MAX_DEST_NUM - 1];
- tunnel = is_flow_tunnel_match_rule(dev, attr, items, actions) ?
- flow_items_to_tunnel(items) :
- is_flow_tunnel_steer_rule(dev, attr, items, actions) ?
- flow_actions_to_tunnel(actions) :
- dev_flow->tunnel ? dev_flow->tunnel : NULL;
+ if (is_tunnel_offload_active(dev)) {
+ if (dev_flow->tunnel) {
+ RTE_VERIFY(dev_flow->tof_type ==
+ MLX5_TUNNEL_OFFLOAD_MISS_RULE);
+ tunnel = dev_flow->tunnel;
+ } else {
+ tunnel = mlx5_get_tof(items, actions,
+ &dev_flow->tof_type);
+ dev_flow->tunnel = tunnel;
+ }
+ grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate
+ (dev, attr, tunnel, dev_flow->tof_type);
+ }
mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX :
MLX5DV_FLOW_TABLE_TYPE_NIC_RX;
- grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate
- (dev, tunnel, attr, items, actions);
ret = mlx5_flow_group_to_table(dev, tunnel, attr->group, &table,
&grp_info, error);
if (ret)
@@ -10948,7 +10956,7 @@ flow_dv_translate(struct rte_eth_dev *dev,
mhdr_res->ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB;
/* number of actions must be set to 0 in case of dirty stack. */
mhdr_res->actions_num = 0;
- if (is_flow_tunnel_match_rule(dev, attr, items, actions)) {
+ if (is_flow_tunnel_match_rule(dev_flow->tof_type)) {
/*
* do not add decap action if match rule drops packet
* HW rejects rules with decap & drop
--
2.31.1
* [dpdk-dev] [PATCH v2 2/2] app/testpmd: fix tunnel offload private items location
2021-04-25 15:57 ` [dpdk-dev] [PATCH v2 1/2] net/mlx5: " Gregory Etelson
@ 2021-04-25 15:57 ` Gregory Etelson
2021-04-30 13:49 ` Ferruh Yigit
2021-04-25 16:28 ` [dpdk-dev] [PATCH v2 1/2] net/mlx5: " Thomas Monjalon
1 sibling, 1 reply; 27+ messages in thread
From: Gregory Etelson @ 2021-04-25 15:57 UTC
To: dev
Cc: getelson, matan, rasland, ferruh.yigit, stable,
Viacheslav Ovsiienko, Xiaoyun Li
The tunnel offload API requires the application to query the PMD for specific
flow items and actions. The application uses these PMD-specific elements to
build flow rules according to the tunnel offload model.
The model does not restrict the location of private elements in a flow rule,
but the current MLX5 PMD implementation expected that a tunnel offload
rule would begin with the PMD-specific elements.
This patch places the tunnel offload private PMD flow elements between
general RTE flow elements in a rule.
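A worked example of the new sizing (illustrative numbers): an application
rule of JUMP + END is counted as num_actions = 2 (the counting loop in the
patch includes the terminating END), and a single PMD private action gives
num_pmd_actions = 1, so malloc() now reserves 2 + 1 + 1 = 4 entries:
index 0 holds the VOID pad, index 1 the PMD action, and indices 2-3 the
application JUMP and END.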
Cc: stable@dpdk.org
Fixes: 1b9f274623b8 ("app/testpmd: add commands for tunnel offload")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
app/test-pmd/config.c | 14 ++++++++------
1 file changed, 8 insertions(+), 6 deletions(-)
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 40b2b29725..1520b8193f 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1664,7 +1664,7 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
aptr->type != RTE_FLOW_ACTION_TYPE_END;
aptr++, num_actions++);
pft->actions = malloc(
- (num_actions + pft->num_pmd_actions) *
+ (num_actions + pft->num_pmd_actions + 1) *
sizeof(actions[0]));
if (!pft->actions) {
rte_flow_tunnel_action_decap_release(
@@ -1672,9 +1672,10 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
pft->num_pmd_actions, &error);
return NULL;
}
- rte_memcpy(pft->actions, pft->pmd_actions,
+ pft->actions[0].type = RTE_FLOW_ACTION_TYPE_VOID;
+ rte_memcpy(pft->actions + 1, pft->pmd_actions,
pft->num_pmd_actions * sizeof(actions[0]));
- rte_memcpy(pft->actions + pft->num_pmd_actions, actions,
+ rte_memcpy(pft->actions + pft->num_pmd_actions + 1, actions,
num_actions * sizeof(actions[0]));
}
if (tunnel_ops->items) {
@@ -1692,7 +1693,7 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
for (iptr = pattern, num_items = 1;
iptr->type != RTE_FLOW_ITEM_TYPE_END;
iptr++, num_items++);
- pft->items = malloc((num_items + pft->num_pmd_items) *
+ pft->items = malloc((num_items + pft->num_pmd_items + 1) *
sizeof(pattern[0]));
if (!pft->items) {
rte_flow_tunnel_item_release(
@@ -1700,9 +1701,10 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
pft->num_pmd_items, &error);
return NULL;
}
- rte_memcpy(pft->items, pft->pmd_items,
+ pft->items[0].type = RTE_FLOW_ITEM_TYPE_VOID;
+ rte_memcpy(pft->items + 1, pft->pmd_items,
pft->num_pmd_items * sizeof(pattern[0]));
- rte_memcpy(pft->items + pft->num_pmd_items, pattern,
+ rte_memcpy(pft->items + pft->num_pmd_items + 1, pattern,
num_items * sizeof(pattern[0]));
}
--
2.31.1
* Re: [dpdk-dev] [PATCH v2 2/2] app/testpmd: fix tunnel offload private items location
2021-04-25 15:57 ` [dpdk-dev] [PATCH v2 2/2] app/testpmd: " Gregory Etelson
@ 2021-04-30 13:49 ` Ferruh Yigit
0 siblings, 0 replies; 27+ messages in thread
From: Ferruh Yigit @ 2021-04-30 13:49 UTC
To: Gregory Etelson, dev
Cc: matan, rasland, stable, Viacheslav Ovsiienko, Xiaoyun Li
On 4/25/2021 4:57 PM, Gregory Etelson wrote:
> The tunnel offload API requires the application to query the PMD for specific
> flow items and actions. The application uses these PMD-specific elements to
> build flow rules according to the tunnel offload model.
Can you please give some examples of the "PMD specific elements" that are required to be
queried by the application? To understand the issue better.
> The model does not restrict the location of private elements in a flow rule,
> but the current MLX5 PMD implementation expected that a tunnel offload
> rule would begin with the PMD-specific elements.
Why do we need to refer to the mlx5 PMD implementation in the testpmd patch? Is this
patch trying to align testpmd with the mlx5 implementation?
> This patch places the tunnel offload private PMD flow elements between
> general RTE flow elements in a rule.
>
Why?
Overall, what was the problem, what was failing, and what was its impact?
And how does changing the private elements' location in the flow rule solve the issue?
> Cc: stable@dpdk.org
> Fixes: 1b9f274623b8 ("app/testpmd: add commands for tunnel offload")
>
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
> app/test-pmd/config.c | 14 ++++++++------
> 1 file changed, 8 insertions(+), 6 deletions(-)
>
> diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
> index 40b2b29725..1520b8193f 100644
> --- a/app/test-pmd/config.c
> +++ b/app/test-pmd/config.c
> @@ -1664,7 +1664,7 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
> aptr->type != RTE_FLOW_ACTION_TYPE_END;
> aptr++, num_actions++);
> pft->actions = malloc(
> - (num_actions + pft->num_pmd_actions) *
> + (num_actions + pft->num_pmd_actions + 1) *
> sizeof(actions[0]));
> if (!pft->actions) {
> rte_flow_tunnel_action_decap_release(
> @@ -1672,9 +1672,10 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
> pft->num_pmd_actions, &error);
> return NULL;
> }
> - rte_memcpy(pft->actions, pft->pmd_actions,
> + pft->actions[0].type = RTE_FLOW_ACTION_TYPE_VOID;
> + rte_memcpy(pft->actions + 1, pft->pmd_actions,
> pft->num_pmd_actions * sizeof(actions[0]));
> - rte_memcpy(pft->actions + pft->num_pmd_actions, actions,
> + rte_memcpy(pft->actions + pft->num_pmd_actions + 1, actions,
> num_actions * sizeof(actions[0]));
> }
> if (tunnel_ops->items) {
> @@ -1692,7 +1693,7 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
> for (iptr = pattern, num_items = 1;
> iptr->type != RTE_FLOW_ITEM_TYPE_END;
> iptr++, num_items++);
> - pft->items = malloc((num_items + pft->num_pmd_items) *
> + pft->items = malloc((num_items + pft->num_pmd_items + 1) *
> sizeof(pattern[0]));
> if (!pft->items) {
> rte_flow_tunnel_item_release(
> @@ -1700,9 +1701,10 @@ port_flow_tunnel_offload_cmd_prep(portid_t port_id,
> pft->num_pmd_items, &error);
> return NULL;
> }
> - rte_memcpy(pft->items, pft->pmd_items,
> + pft->items[0].type = RTE_FLOW_ITEM_TYPE_VOID;
> + rte_memcpy(pft->items + 1, pft->pmd_items,
> pft->num_pmd_items * sizeof(pattern[0]));
> - rte_memcpy(pft->items + pft->num_pmd_items, pattern,
> + rte_memcpy(pft->items + pft->num_pmd_items + 1, pattern,
> num_items * sizeof(pattern[0]));
> }
>
>
* Re: [dpdk-dev] [PATCH v2 1/2] net/mlx5: fix tunnel offload private items location
2021-04-25 15:57 ` [dpdk-dev] [PATCH v2 1/2] net/mlx5: " Gregory Etelson
2021-04-25 15:57 ` [dpdk-dev] [PATCH v2 2/2] app/testpmd: " Gregory Etelson
@ 2021-04-25 16:28 ` Thomas Monjalon
2021-04-25 17:01 ` Gregory Etelson
1 sibling, 1 reply; 27+ messages in thread
From: Thomas Monjalon @ 2021-04-25 16:28 UTC
To: Gregory Etelson, dev
Cc: matan, rasland, Ferruh Yigit, stable, Viacheslav Ovsiienko,
Shahaf Shuler, Asaf Penso
On Sun, Apr 25, 2021, at 17:57, Gregory Etelson wrote:
> The tunnel offload API requires the application to query the PMD for specific
> flow items and actions. The application uses these PMD-specific elements to
> build flow rules according to the tunnel offload model.
> The model does not restrict the location of private elements in a flow rule,
> but the current MLX5 PMD implementation expects that a tunnel offload
> rule will begin with the PMD-specific elements.
> This patch removes that placement limitation in the MLX5 PMD.
>
> Cc: stable@dpdk.org
>
> Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")
Cc: stable must be just after the Fixes line.
There is a testpmd patch in the same series; is it OK to be merged in the mlx tree?
* Re: [dpdk-dev] [PATCH v2 1/2] net/mlx5: fix tunnel offload private items location
2021-04-25 16:28 ` [dpdk-dev] [PATCH v2 1/2] net/mlx5: " Thomas Monjalon
@ 2021-04-25 17:01 ` Gregory Etelson
2021-04-25 17:07 ` Thomas Monjalon
0 siblings, 1 reply; 27+ messages in thread
From: Gregory Etelson @ 2021-04-25 17:01 UTC
To: NBU-Contact-Thomas Monjalon, dev
Cc: Matan Azrad, Raslan Darawsheh, Ferruh Yigit, stable,
Slava Ovsiienko, Shahaf Shuler, Asaf Penso
Hello Thomas,
> On Sun, Apr 25, 2021, at 17:57, Gregory Etelson wrote:
> > The tunnel offload API requires the application to query the PMD for specific
> > flow items and actions. The application uses these PMD-specific elements to
> > build flow rules according to the tunnel offload model.
> > The model does not restrict the location of private elements in a flow rule,
> > but the current MLX5 PMD implementation expects that a tunnel offload
> > rule will begin with the PMD-specific elements.
> > This patch removes that placement limitation in the MLX5 PMD.
> >
> > Cc: stable@dpdk.org
> >
> > Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")
>
> Cc: stable must be just after the Fixes line.
>
> There is a testpmd patch in the same series; is it OK to be merged in the mlx
> tree?
The tunnel offload model can be complicated.
The testpmd patch that comes with this one emphasizes how an application
can construct a flow rule without placement restrictions.
I think that both patches should be merged.
Regards,
Gregory
* Re: [dpdk-dev] [PATCH v2 1/2] net/mlx5: fix tunnel offload private items location
2021-04-25 17:01 ` Gregory Etelson
@ 2021-04-25 17:07 ` Thomas Monjalon
2021-04-26 7:54 ` Ferruh Yigit
0 siblings, 1 reply; 27+ messages in thread
From: Thomas Monjalon @ 2021-04-25 17:07 UTC
To: Gregory Etelson, dev
Cc: Matan Azrad, Raslan Darawsheh, Ferruh Yigit, stable,
Viacheslav Ovsiienko, Shahaf Shuler, Asaf Penso
On Sun, Apr 25, 2021, at 19:01, Gregory Etelson wrote:
> Hello Thomas,
>
> > On Sun, Apr 25, 2021, at 17:57, Gregory Etelson wrote:
> > > The tunnel offload API requires the application to query the PMD for specific
> > > flow items and actions. The application uses these PMD-specific elements to
> > > build flow rules according to the tunnel offload model.
> > > The model does not restrict the location of private elements in a flow rule,
> > > but the current MLX5 PMD implementation expects that a tunnel offload
> > > rule will begin with the PMD-specific elements.
> > > This patch removes that placement limitation in the MLX5 PMD.
> > >
> > > Cc: stable@dpdk.org
> > >
> > > Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")
> >
> > Cc: stable must be just after the Fixes line.
> >
> > There is a testpmd patch in the same series; is it OK to be merged in the mlx
> > tree?
>
> The tunnel offload model can be complicated.
> The testpmd patch that comes with this one emphasizes how an application
> can construct a flow rule without placement restrictions.
> I think that both patches should be merged.
That's not the question.
One patch should be merged in the mlx tree, while the other one should target next-net.
In such a situation, we quite often split them into different series.
For this case, it's up to Raslan and Ferruh to agree on how to proceed.
* Re: [dpdk-dev] [PATCH v2 1/2] net/mlx5: fix tunnel offload private items location
2021-04-25 17:07 ` Thomas Monjalon
@ 2021-04-26 7:54 ` Ferruh Yigit
2021-04-27 9:27 ` Gregory Etelson
2021-05-04 9:20 ` Gregory Etelson
0 siblings, 2 replies; 27+ messages in thread
From: Ferruh Yigit @ 2021-04-26 7:54 UTC
To: Thomas Monjalon, Gregory Etelson, dev
Cc: Matan Azrad, Raslan Darawsheh, stable, Viacheslav Ovsiienko,
Shahaf Shuler, Asaf Penso
On 4/25/2021 6:07 PM, Thomas Monjalon wrote:
> On Sun, Apr 25, 2021, at 19:01, Gregory Etelson wrote:
>> Hello Thomas,
>>
>>> On Sun, Apr 25, 2021, at 17:57, Gregory Etelson wrote:
>>>> The tunnel offload API requires the application to query the PMD for specific
>>>> flow items and actions. The application uses these PMD-specific elements to
>>>> build flow rules according to the tunnel offload model.
>>>> The model does not restrict the location of private elements in a flow rule,
>>>> but the current MLX5 PMD implementation expects that a tunnel offload
>>>> rule will begin with the PMD-specific elements.
>>>> This patch removes that placement limitation in the MLX5 PMD.
>>>>
>>>> Cc: stable@dpdk.org
>>>>
>>>> Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")
>>>
>>> Cc: stable must be just after the Fixes line.
>>>
>>> There is a testpmd patch in the same series; is it OK to be merged in the mlx
>>> tree?
>>
>> The tunnel offload model can be complicated.
>> The testpmd patch that comes with this one emphasizes how an application
>> can construct a flow rule without placement restrictions.
>> I think that both patches should be merged.
>
> That's not the question.
> One patch should be merged in the mlx tree, while the other one should target next-net.
> In such a situation, we quite often split them into different series.
> For this case, it's up to Raslan and Ferruh to agree on how to proceed.
>
I am OK to get both to next-net, as long as the driver patch is Ack'ed.
It seems there is a relation between the driver and testpmd patches, but I am trying
to figure out how tightly they are coupled, which may be a sign of something wrong.
* Re: [dpdk-dev] [PATCH v2 1/2] net/mlx5: fix tunnel offload private items location
2021-04-26 7:54 ` Ferruh Yigit
@ 2021-04-27 9:27 ` Gregory Etelson
2021-04-29 7:44 ` Gregory Etelson
2021-05-04 9:20 ` Gregory Etelson
1 sibling, 1 reply; 27+ messages in thread
From: Gregory Etelson @ 2021-04-27 9:27 UTC
To: Ferruh Yigit, NBU-Contact-Thomas Monjalon, dev
Cc: Matan Azrad, Raslan Darawsheh, stable, Slava Ovsiienko,
Shahaf Shuler, Asaf Penso
Hello Ferruh,
[snip]
> I am OK to get both to next-net, as long as the driver patch is Ack'ed.
>
[PATCH v2 1/2] net/mlx5: fix tunnel offload private items location
was sent with an Ack from Viacheslav Ovsiienko
> It seems there is a relation between the driver and testpmd patches, but I am
> trying to figure out how tightly they are coupled, which may be a sign of
> something wrong.
The testpmd patch shows how an application can construct flow rules for the
tunnel offload model without restrictions on the items & actions locations.
Regards,
Gregory
* Re: [dpdk-dev] [PATCH v2 1/2] net/mlx5: fix tunnel offload private items location
2021-04-27 9:27 ` Gregory Etelson
@ 2021-04-29 7:44 ` Gregory Etelson
2021-04-30 13:37 ` Ferruh Yigit
0 siblings, 1 reply; 27+ messages in thread
From: Gregory Etelson @ 2021-04-29 7:44 UTC
To: Ferruh Yigit, NBU-Contact-Thomas Monjalon, dev
Cc: Matan Azrad, Raslan Darawsheh, stable, Slava Ovsiienko,
Shahaf Shuler, Asaf Penso
Hello Ferruh,
> [snip]
>
> > I am OK to get both to next-net, as long as the driver patch is Ack'ed.
> >
>
> [PATCH v2 1/2] net/mlx5: fix tunnel offload private items location was sent
> with an Ack from Viacheslav Ovsiienko
>
> > It seems there is a relation between the driver and testpmd patches, but I
> > am trying to figure out how tightly they are coupled, which may be a
> > sign of something wrong.
>
> The testpmd patch shows how an application can construct flow rules for the
> tunnel offload model without restrictions on the items & actions locations.
There was no progress with this patch.
Is it scheduled for 21.05?
Regards,
Gregory
* Re: [dpdk-dev] [PATCH v2 1/2] net/mlx5: fix tunnel offload private items location
2021-04-29 7:44 ` Gregory Etelson
@ 2021-04-30 13:37 ` Ferruh Yigit
0 siblings, 0 replies; 27+ messages in thread
From: Ferruh Yigit @ 2021-04-30 13:37 UTC (permalink / raw)
To: Gregory Etelson, NBU-Contact-Thomas Monjalon, dev
Cc: Matan Azrad, Raslan Darawsheh, stable, Slava Ovsiienko,
Shahaf Shuler, Asaf Penso
On 4/29/2021 8:44 AM, Gregory Etelson wrote:
> Hello Ferruh,
>
>> [snip]
>>
>>> I am OK to get both to next-net, as long as the driver patch is Ack'ed.
>>>
>>
>> [PATCH v2 1/2] net/mlx5: fix tunnel offload private items location was sent
>> with an Ack from Viacheslav Ovsiienko
>>
>>> It seems there is a relation between the driver and testpmd patches, but I
>>> am trying to figure out how tightly they are coupled, which may be a
>>> sign of something wrong.
>>
>> The testpmd patch shows how an application can construct flow rules for the
>> tunnel offload model without restrictions on the items & actions locations.
>
> There was no progress with this patch.
> Is it scheduled for 21.05?
>
There were some questions on v1 of the patch that were not answered; let me put them
again on v2.
* Re: [dpdk-dev] [PATCH v2 1/2] net/mlx5: fix tunnel offload private items location
2021-04-26 7:54 ` Ferruh Yigit
2021-04-27 9:27 ` Gregory Etelson
@ 2021-05-04 9:20 ` Gregory Etelson
1 sibling, 0 replies; 27+ messages in thread
From: Gregory Etelson @ 2021-05-04 9:20 UTC
To: Ferruh Yigit, NBU-Contact-Thomas Monjalon, dev
Cc: Matan Azrad, Raslan Darawsheh, stable, Slava Ovsiienko,
Shahaf Shuler, Asaf Penso
Hello Ferruh,
[:snip:]
> I am OK to get both to next-net, as long as the driver patch is Ack'ed.
>
> It seems there is a relation between the driver and testpmd patches, but I am
> trying to figure out how tightly they are coupled, which may be a sign of
> something wrong.
The mlx5 PMD and testpmd patches are not coupled.
The PMD patch fixes a bug in the tunnel offload implementation and
the testpmd patch emphasizes general usage of the tunnel offload API.
The testpmd patch is not a fix - the current code works fine.
I'll remove the testpmd patch in v3.
Regards,
Gregory
* [dpdk-dev] [PATCH v3] net/mlx5: fix tunnel offload private items location
2021-04-19 13:02 [dpdk-dev] [PATCH 1/2] net/mlx5: fix tunnel offload private items location Gregory Etelson
2021-04-19 13:02 ` [dpdk-dev] [PATCH 2/2] app/testpmd: " Gregory Etelson
2021-04-25 15:57 ` [dpdk-dev] [PATCH v2 1/2] net/mlx5: " Gregory Etelson
@ 2021-05-02 8:08 ` Gregory Etelson
2021-05-04 8:10 ` Raslan Darawsheh
2021-05-04 8:20 ` [dpdk-dev] [dpdk-stable] " Ferruh Yigit
2021-05-06 9:57 ` [dpdk-dev] [PATCH v4] net/mlx5: fix tunnel offload private items location Gregory Etelson
3 siblings, 2 replies; 27+ messages in thread
From: Gregory Etelson @ 2021-05-02 8:08 UTC
To: dev
Cc: getelson, matan, orika, rasland, stable, Viacheslav Ovsiienko,
Shahaf Shuler
The tunnel offload API requires the application to query the PMD for specific
flow items and actions. The application uses these PMD-specific elements to
build flow rules according to the tunnel offload model.
The model does not restrict the location of private elements in a flow rule,
but the current MLX5 PMD implementation expects that a tunnel offload
rule will begin with the PMD-specific elements.
This patch removes that placement limitation.
Cc: stable@dpdk.org
Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/net/mlx5/mlx5_flow.c | 48 ++++++++++++++++++---------
drivers/net/mlx5/mlx5_flow.h | 46 +++++++++++++++-----------
drivers/net/mlx5/mlx5_flow_dv.c | 58 +++++++++++++++++++--------------
3 files changed, 92 insertions(+), 60 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 15ed5ec7a2..a7ceafe221 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -50,6 +50,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_action *app_actions,
uint32_t flow_idx,
+ const struct mlx5_flow_tunnel *tunnel,
struct tunnel_default_miss_ctx *ctx,
struct rte_flow_error *error);
static struct mlx5_flow_tunnel *
@@ -5910,22 +5911,14 @@ flow_create_split_outer(struct rte_eth_dev *dev,
return ret;
}
-static struct mlx5_flow_tunnel *
-flow_tunnel_from_rule(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item items[],
- const struct rte_flow_action actions[])
+static inline struct mlx5_flow_tunnel *
+flow_tunnel_from_rule(const struct mlx5_flow *flow)
{
struct mlx5_flow_tunnel *tunnel;
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wcast-qual"
- if (is_flow_tunnel_match_rule(dev, attr, items, actions))
- tunnel = (struct mlx5_flow_tunnel *)items[0].spec;
- else if (is_flow_tunnel_steer_rule(dev, attr, items, actions))
- tunnel = (struct mlx5_flow_tunnel *)actions[0].conf;
- else
- tunnel = NULL;
+ tunnel = (typeof(tunnel))flow->tunnel;
#pragma GCC diagnostic pop
return tunnel;
@@ -6120,12 +6113,11 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
error);
if (ret < 0)
goto error;
- if (is_flow_tunnel_steer_rule(dev, attr,
- buf->entry[i].pattern,
- p_actions_rx)) {
+ if (is_flow_tunnel_steer_rule(wks->flows[0].tof_type)) {
ret = flow_tunnel_add_default_miss(dev, flow, attr,
p_actions_rx,
idx,
+ wks->flows[0].tunnel,
&default_miss_ctx,
error);
if (ret < 0) {
@@ -6189,7 +6181,7 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
}
flow_rxq_flags_set(dev, flow);
rte_free(translated_actions);
- tunnel = flow_tunnel_from_rule(dev, attr, items, actions);
+ tunnel = flow_tunnel_from_rule(wks->flows);
if (tunnel) {
flow->tunnel = 1;
flow->tunnel_id = tunnel->tunnel_id;
@@ -8104,6 +8096,28 @@ int rte_pmd_mlx5_sync_flow(uint16_t port_id, uint32_t domains)
return ret;
}
+const struct mlx5_flow_tunnel *
+mlx5_get_tof(const struct rte_flow_item *item,
+ const struct rte_flow_action *action,
+ enum mlx5_tof_rule_type *rule_type)
+{
+ for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
+ if (item->type == (typeof(item->type))
+ MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL) {
+ *rule_type = MLX5_TUNNEL_OFFLOAD_MATCH_RULE;
+ return flow_items_to_tunnel(item);
+ }
+ }
+ for (; action->type != RTE_FLOW_ACTION_TYPE_END; action++) {
+ if (action->type == (typeof(action->type))
+ MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET) {
+ *rule_type = MLX5_TUNNEL_OFFLOAD_SET_RULE;
+ return flow_actions_to_tunnel(action);
+ }
+ }
+ return NULL;
+}
+
/**
* tunnel offload functionality is defined for DV environment only
*/
@@ -8134,13 +8148,13 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_action *app_actions,
uint32_t flow_idx,
+ const struct mlx5_flow_tunnel *tunnel,
struct tunnel_default_miss_ctx *ctx,
struct rte_flow_error *error)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_flow *dev_flow;
struct rte_flow_attr miss_attr = *attr;
- const struct mlx5_flow_tunnel *tunnel = app_actions[0].conf;
const struct rte_flow_item miss_items[2] = {
{
.type = RTE_FLOW_ITEM_TYPE_ETH,
@@ -8226,6 +8240,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
dev_flow->flow = flow;
dev_flow->external = true;
dev_flow->tunnel = tunnel;
+ dev_flow->tof_type = MLX5_TUNNEL_OFFLOAD_MISS_RULE;
/* Subflow object was created, we must include one in the list. */
SILIST_INSERT(&flow->dev_handles, dev_flow->handle_idx,
dev_flow->handle, next);
@@ -8839,6 +8854,7 @@ flow_tunnel_add_default_miss(__rte_unused struct rte_eth_dev *dev,
__rte_unused const struct rte_flow_attr *attr,
__rte_unused const struct rte_flow_action *actions,
__rte_unused uint32_t flow_idx,
+ __rte_unused const struct mlx5_flow_tunnel *tunnel,
__rte_unused struct tunnel_default_miss_ctx *ctx,
__rte_unused struct rte_flow_error *error)
{
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 56908ae08b..cc3e79d088 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -783,6 +783,16 @@ struct mlx5_flow_verbs_workspace {
/** Maximal number of device sub-flows supported. */
#define MLX5_NUM_MAX_DEV_FLOWS 32
+/**
+ * tunnel offload rule types
+ */
+enum mlx5_tof_rule_type {
+ MLX5_TUNNEL_OFFLOAD_NONE = 0,
+ MLX5_TUNNEL_OFFLOAD_SET_RULE,
+ MLX5_TUNNEL_OFFLOAD_MATCH_RULE,
+ MLX5_TUNNEL_OFFLOAD_MISS_RULE,
+};
+
/** Device flow structure. */
__extension__
struct mlx5_flow {
@@ -818,6 +828,7 @@ struct mlx5_flow {
struct mlx5_flow_handle *handle;
uint32_t handle_idx; /* Index of the mlx5 flow handle memory. */
const struct mlx5_flow_tunnel *tunnel;
+ enum mlx5_tof_rule_type tof_type;
};
/* Flow meter state. */
@@ -911,10 +922,10 @@ mlx5_tunnel_hub(struct rte_eth_dev *dev)
}
static inline bool
-is_tunnel_offload_active(struct rte_eth_dev *dev)
+is_tunnel_offload_active(const struct rte_eth_dev *dev)
{
#ifdef HAVE_IBV_FLOW_DV_SUPPORT
- struct mlx5_priv *priv = dev->data->dev_private;
+ const struct mlx5_priv *priv = dev->data->dev_private;
return !!priv->config.dv_miss_info;
#else
RTE_SET_USED(dev);
@@ -923,23 +934,15 @@ is_tunnel_offload_active(struct rte_eth_dev *dev)
}
static inline bool
-is_flow_tunnel_match_rule(__rte_unused struct rte_eth_dev *dev,
- __rte_unused const struct rte_flow_attr *attr,
- __rte_unused const struct rte_flow_item items[],
- __rte_unused const struct rte_flow_action actions[])
+is_flow_tunnel_match_rule(enum mlx5_tof_rule_type tof_rule_type)
{
- return (items[0].type == (typeof(items[0].type))
- MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL);
+ return tof_rule_type == MLX5_TUNNEL_OFFLOAD_MATCH_RULE;
}
static inline bool
-is_flow_tunnel_steer_rule(__rte_unused struct rte_eth_dev *dev,
- __rte_unused const struct rte_flow_attr *attr,
- __rte_unused const struct rte_flow_item items[],
- __rte_unused const struct rte_flow_action actions[])
+is_flow_tunnel_steer_rule(enum mlx5_tof_rule_type tof_rule_type)
{
- return (actions[0].type == (typeof(actions[0].type))
- MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET);
+ return tof_rule_type == MLX5_TUNNEL_OFFLOAD_SET_RULE;
}
static inline const struct mlx5_flow_tunnel *
@@ -1223,11 +1226,10 @@ struct flow_grp_info {
static inline bool
tunnel_use_standard_attr_group_translate
- (struct rte_eth_dev *dev,
- const struct mlx5_flow_tunnel *tunnel,
+ (const struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
- const struct rte_flow_item items[],
- const struct rte_flow_action actions[])
+ const struct mlx5_flow_tunnel *tunnel,
+ enum mlx5_tof_rule_type tof_rule_type)
{
bool verdict;
@@ -1243,7 +1245,7 @@ tunnel_use_standard_attr_group_translate
* method
*/
verdict = !attr->group &&
- is_flow_tunnel_steer_rule(dev, attr, items, actions);
+ is_flow_tunnel_steer_rule(tof_rule_type);
} else {
/*
* non-tunnel group translation uses standard method for
@@ -1552,4 +1554,10 @@ int mlx5_flow_create_def_policy(struct rte_eth_dev *dev);
void mlx5_flow_destroy_def_policy(struct rte_eth_dev *dev);
void flow_drv_rxq_flags_set(struct rte_eth_dev *dev,
struct mlx5_flow_handle *dev_handle);
+const struct mlx5_flow_tunnel *
+mlx5_get_tof(const struct rte_flow_item *items,
+ const struct rte_flow_action *actions,
+ enum mlx5_tof_rule_type *rule_type);
+
+
#endif /* RTE_PMD_MLX5_FLOW_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index d810466242..1caca61577 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -6315,33 +6315,34 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
uint32_t rw_act_num = 0;
uint64_t is_root;
const struct mlx5_flow_tunnel *tunnel;
+ enum mlx5_tof_rule_type tof_rule_type;
struct flow_grp_info grp_info = {
.external = !!external,
.transfer = !!attr->transfer,
.fdb_def_rule = !!priv->fdb_def_rule,
+ .std_tbl_fix = true,
};
const struct rte_eth_hairpin_conf *conf;
bool def_policy = false;
if (items == NULL)
return -1;
- if (is_flow_tunnel_match_rule(dev, attr, items, actions)) {
- tunnel = flow_items_to_tunnel(items);
- action_flags |= MLX5_FLOW_ACTION_TUNNEL_MATCH |
- MLX5_FLOW_ACTION_DECAP;
- } else if (is_flow_tunnel_steer_rule(dev, attr, items, actions)) {
- tunnel = flow_actions_to_tunnel(actions);
- action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET;
- } else {
- tunnel = NULL;
+ tunnel = is_tunnel_offload_active(dev) ?
+ mlx5_get_tof(items, actions, &tof_rule_type) : NULL;
+ if (tunnel) {
+ if (priv->representor)
+ return rte_flow_error_set
+ (error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL, "decap not supported for VF representor");
+ if (tof_rule_type == MLX5_TUNNEL_OFFLOAD_SET_RULE)
+ action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET;
+ else if (tof_rule_type == MLX5_TUNNEL_OFFLOAD_MATCH_RULE)
+ action_flags |= MLX5_FLOW_ACTION_TUNNEL_MATCH |
+ MLX5_FLOW_ACTION_DECAP;
+ grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate
+ (dev, attr, tunnel, tof_rule_type);
}
- if (tunnel && priv->representor)
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "decap not supported "
- "for VF representor");
- grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate
- (dev, tunnel, attr, items, actions);
ret = flow_dv_validate_attributes(dev, tunnel, attr, &grp_info, error);
if (ret < 0)
return ret;
@@ -11191,13 +11192,14 @@ flow_dv_translate(struct rte_eth_dev *dev,
int tmp_actions_n = 0;
uint32_t table;
int ret = 0;
- const struct mlx5_flow_tunnel *tunnel;
+ const struct mlx5_flow_tunnel *tunnel = NULL;
struct flow_grp_info grp_info = {
.external = !!dev_flow->external,
.transfer = !!attr->transfer,
.fdb_def_rule = !!priv->fdb_def_rule,
.skip_scale = dev_flow->skip_scale &
(1 << MLX5_SCALE_FLOW_GROUP_BIT),
+ .std_tbl_fix = true,
};
if (!wks)
@@ -11212,15 +11214,21 @@ flow_dv_translate(struct rte_eth_dev *dev,
MLX5DV_FLOW_TABLE_TYPE_NIC_RX;
/* update normal path action resource into last index of array */
sample_act = &mdest_res.sample_act[MLX5_MAX_DEST_NUM - 1];
- tunnel = is_flow_tunnel_match_rule(dev, attr, items, actions) ?
- flow_items_to_tunnel(items) :
- is_flow_tunnel_steer_rule(dev, attr, items, actions) ?
- flow_actions_to_tunnel(actions) :
- dev_flow->tunnel ? dev_flow->tunnel : NULL;
+ if (is_tunnel_offload_active(dev)) {
+ if (dev_flow->tunnel) {
+ RTE_VERIFY(dev_flow->tof_type ==
+ MLX5_TUNNEL_OFFLOAD_MISS_RULE);
+ tunnel = dev_flow->tunnel;
+ } else {
+ tunnel = mlx5_get_tof(items, actions,
+ &dev_flow->tof_type);
+ dev_flow->tunnel = tunnel;
+ }
+ grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate
+ (dev, attr, tunnel, dev_flow->tof_type);
+ }
mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX :
MLX5DV_FLOW_TABLE_TYPE_NIC_RX;
- grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate
- (dev, tunnel, attr, items, actions);
ret = mlx5_flow_group_to_table(dev, tunnel, attr->group, &table,
&grp_info, error);
if (ret)
@@ -11230,7 +11238,7 @@ flow_dv_translate(struct rte_eth_dev *dev,
mhdr_res->ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB;
/* number of actions must be set to 0 in case of dirty stack. */
mhdr_res->actions_num = 0;
- if (is_flow_tunnel_match_rule(dev, attr, items, actions)) {
+ if (is_flow_tunnel_match_rule(dev_flow->tof_type)) {
/*
* do not add decap action if match rule drops packet
* HW rejects rules with decap & drop
--
2.31.1
* Re: [dpdk-dev] [PATCH v3] net/mlx5: fix tunnel offload private items location
2021-05-02 8:08 ` [dpdk-dev] [PATCH v3] " Gregory Etelson
@ 2021-05-04 8:10 ` Raslan Darawsheh
2021-05-04 8:20 ` [dpdk-dev] [dpdk-stable] " Ferruh Yigit
1 sibling, 0 replies; 27+ messages in thread
From: Raslan Darawsheh @ 2021-05-04 8:10 UTC
To: Gregory Etelson, dev
Cc: Matan Azrad, Ori Kam, stable, Slava Ovsiienko, Shahaf Shuler
Hi,
> -----Original Message-----
> From: Gregory Etelson <getelson@nvidia.com>
> Sent: Sunday, May 2, 2021 11:08 AM
> To: dev@dpdk.org
> Cc: Gregory Etelson <getelson@nvidia.com>; Matan Azrad
> <matan@nvidia.com>; Ori Kam <orika@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>; stable@dpdk.org; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>
> Subject: [PATCH v3] net/mlx5: fix tunnel offload private items location
>
> The tunnel offload API requires the application to query the PMD for specific flow
> items and actions. The application uses these PMD-specific elements to build flow
> rules according to the tunnel offload model.
> The model does not restrict the location of private elements in a flow rule, but the
> current MLX5 PMD implementation expects that a tunnel offload rule will
> begin with the PMD-specific elements.
> This patch removes that placement limitation.
>
> Cc: stable@dpdk.org
>
> Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")
>
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Patch applied to next-net-mlx,
Kindest regards
Raslan Darawsheh
* Re: [dpdk-dev] [dpdk-stable] [PATCH v3] net/mlx5: fix tunnel offload private items location
2021-05-02 8:08 ` [dpdk-dev] [PATCH v3] " Gregory Etelson
2021-05-04 8:10 ` Raslan Darawsheh
@ 2021-05-04 8:20 ` Ferruh Yigit
2021-05-04 9:12 ` Gregory Etelson
1 sibling, 1 reply; 27+ messages in thread
From: Ferruh Yigit @ 2021-05-04 8:20 UTC
To: Gregory Etelson, dev
Cc: matan, orika, rasland, stable, Viacheslav Ovsiienko, Shahaf Shuler
On 5/2/2021 9:08 AM, Gregory Etelson wrote:
> The tunnel offload API requires the application to query the PMD for specific
> flow items and actions. The application uses these PMD-specific elements to
> build flow rules according to the tunnel offload model.
> The model does not restrict the location of private elements in a flow rule,
> but the current MLX5 PMD implementation expects that a tunnel offload
> rule will begin with the PMD-specific elements.
> This patch removes that placement limitation.
>
> Cc: stable@dpdk.org
>
> Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")
>
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Hi Gregory,
There were some questions around the testpmd part of this patch in the previous version;
they were not answered, and this version drops the testpmd part.
Is it safe to remove the testpmd part? If so, why were the changes made in the first place?
And can you please reply to the questions on the testpmd patch for the record?
Thanks,
ferruh
* Re: [dpdk-dev] [dpdk-stable] [PATCH v3] net/mlx5: fix tunnel offload private items location
2021-05-04 8:20 ` [dpdk-dev] [dpdk-stable] " Ferruh Yigit
@ 2021-05-04 9:12 ` Gregory Etelson
2021-05-05 17:45 ` Ferruh Yigit
0 siblings, 1 reply; 27+ messages in thread
From: Gregory Etelson @ 2021-05-04 9:12 UTC
To: Ferruh Yigit, dev
Cc: Matan Azrad, Ori Kam, Raslan Darawsheh, stable, Slava Ovsiienko,
Shahaf Shuler
Hello Ferruh,
[:snip:]
> Hi Gregory,
>
> There were some questions around the testpmd part of this patch in the previous
> version; they were not answered, and this version drops the testpmd
> part.
>
The tunnel offload API allows the application to place tunnel offload elements at any valid location in a flow rule.
The current testpmd code places tunnel offload items
at the beginning of the pattern array and tunnel offload actions at the beginning of the actions array.
The testpmd patch in v1 & v2 changed the location of tunnel offload elements in a flow rule.
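For example (illustrative layouts, not actual code):

	current testpmd:  items = { <PMD items>, <application items>, END }
	v1 & v2 patch:    items = { VOID, <PMD items>, <application items>, END }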
> Is it safe to remove the testpmd part? If so, why were the changes made in the first
> place?
>
That patch was not a fix - it emphasized general usage of the tunnel offload API.
The current testpmd code works fine.
> And can you please reply to the questions on the testpmd patch for the
> record?
>
I'll add my replies.
> Thanks,
> Ferruh
Regards,
Gregory
* Re: [dpdk-dev] [dpdk-stable] [PATCH v3] net/mlx5: fix tunnel offload private items location
2021-05-04 9:12 ` Gregory Etelson
@ 2021-05-05 17:45 ` Ferruh Yigit
2021-05-06 6:08 ` Gregory Etelson
0 siblings, 1 reply; 27+ messages in thread
From: Ferruh Yigit @ 2021-05-05 17:45 UTC
To: Gregory Etelson, dev
Cc: Matan Azrad, Ori Kam, Raslan Darawsheh, stable, Slava Ovsiienko,
Shahaf Shuler
On 5/4/2021 10:12 AM, Gregory Etelson wrote:
> Hello Ferruh,
>
> [:snip:]
>
>> Hi Gregory,
>>
>> There were some questions around the testpmd part of this patch in the previous
>> version; they were not answered, and this version drops the testpmd
>> part.
>>
>
> The tunnel offload API allows the application to place tunnel offload elements at any valid location in a flow rule.
How should the PMD detect the tunnel offload elements?
And which API are we talking about? Is there enough documentation in the API
about the location of the tunnel offload elements, or should we clarify it more?
> The current testpmd code places tunnel offload items
> at the beginning of the pattern array and tunnel offload actions at the beginning of the actions array.
Got it, so this patch is removing a false expectation (about the location of the
tunnel offload elements in a flow rule) in the PMD, right?
As far as I understand, testpmd already satisfies this false expectation, so the
problem should not be visible with testpmd.
Was there a visible user impact, like an application failing because of this
false expectation, or is this a theoretical fix to be correct with the API contract?
btw, I can see 'MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL' is still checked for being in the first
location of the items (items[0].type), in 'flow_dv_validate()', 'mlx5_flow_dv.c':
https://git.dpdk.org/dpdk/tree/drivers/net/mlx5/mlx5_flow_dv.c?h=v21.05-rc1#n6298
Can you please confirm that this is expected?
> The testpmd patch in v1 & v2 changed the location of tunnel offload elements in a flow rule.
>
>> Is it safe to remove the testpmd part? If so, why were the changes made in the first
>> place?
>>
>
> That patch was not a fix - it emphasized general usage of the tunnel offload API.
> The current testpmd code works fine.
>
>> And can you please reply to the questions on the testpmd patch for the
>> record?
>>
>
> I'll add my replies.
>
>> Thanks,
>> Ferruh
>
> Regards,
> Gregory
>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [dpdk-dev] [dpdk-stable] [PATCH v3] net/mlx5: fix tunnel offload private items location
2021-05-05 17:45 ` Ferruh Yigit
@ 2021-05-06 6:08 ` Gregory Etelson
2021-05-06 7:40 ` Gregory Etelson
0 siblings, 1 reply; 27+ messages in thread
From: Gregory Etelson @ 2021-05-06 6:08 UTC (permalink / raw)
To: Ferruh Yigit, dev
Cc: Matan Azrad, Ori Kam, Raslan Darawsheh, stable, Slava Ovsiienko,
Shahaf Shuler
Hello Ferruh,
> >> There were some questions around the testpmd part of this patch in the
> >> previous version; they were not answered, and this version drops the
> >> testpmd part.
> >>
> >
> > The tunnel offload API allows an application to place tunnel offload elements
> > at any valid location in a flow rule.
>
> How should the PMD detect the tunnel offload elements?
>
Flow elements in a tunnel offload rule can be logically divided into two parts:
- application elements that reflect the application logic
- tunnel offload related elements. These flow elements are selected by the PMD
to implement tunnel offload with respect to the underlying hardware.
The application obtains these actions and items from the PMD with
rte_flow_tunnel_decap_set() and rte_flow_tunnel_match().
The application combines both parts into a flow rule and sends it to the PMD.
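For reference, a minimal sketch of that query step (assuming a VXLAN tunnel; return-code checks and the release calls, rte_flow_tunnel_action_decap_release() and rte_flow_tunnel_item_release(), are omitted for brevity):

#include <stdint.h>
#include <rte_flow.h>

/* Sketch only: obtain the PMD-private tunnel offload elements. */
static void
query_pmd_tunnel_elements(uint16_t port_id)
{
        struct rte_flow_tunnel tunnel = {
                .type = RTE_FLOW_ITEM_TYPE_VXLAN, /* example tunnel type */
        };
        struct rte_flow_action *pmd_actions;
        struct rte_flow_item *pmd_items;
        uint32_t n_actions, n_items;
        struct rte_flow_error error;

        /* PMD-specific actions for the decap/set (steer) rule. */
        rte_flow_tunnel_decap_set(port_id, &tunnel, &pmd_actions,
                                  &n_actions, &error);
        /* PMD-specific items for the match rule. */
        rte_flow_tunnel_match(port_id, &tunnel, &pmd_items,
                              &n_items, &error);
        /* The application merges these with its own items/actions,
         * at any valid position, and calls rte_flow_create() as usual. */
}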
> And which API are we talking about? Is there enough documentation in the
> API about the location of the tunnel offload elements, or should we clarify it
> more?
>
The tunnel offload API was introduced here:
commit 9ec0f97e02e1 ("ethdev: add tunnel offload model").
There is a detailed explanation with examples of how the API works.
> > The current testpmd code places tunnel offload items at the beginning of
> > the pattern array and tunnel offload actions at the beginning of the actions array.
>
> Got it, so this patch is removing a false expectation (about the location of the
> tunnel offload elements in a flow rule) in the PMD, right?
>
Correct. The location of flow elements supplied by the PMD in a rule is not important.
> As far as I understand, testpmd already satisfies this false expectation, so
> the problem should not be visible with testpmd.
> Was there a visible user impact, like an application failing because of this
> false expectation, or is this a theoretical fix to be correct with the API contract?
>
There is no issue with the testpmd code.
The bug was discovered by OVS.
> btw, I can see 'MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL' is still checked to be in
> the first location of the items (items[0].type), in 'flow_dv_validate()',
> 'mlx5_flow_dv.c':
> https://git.dpdk.org/dpdk/tree/drivers/net/mlx5/mlx5_flow_dv.c?h=v21.0
> 5-rc1#n6298
> Can you please confirm that this is expected?
>
The `items` variable in that function iterates over the pattern array:
for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
Since `items` advances on every iteration, by the time the switch reaches
case MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL:
the check
if (items[0].type !=
(typeof(items[0].type))MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL)
compares the current item with itself and can never fail.
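A trimmed illustration (not the exact driver code) of why the comparison is vacuous:

#include <rte_flow.h>

/* `items` is advanced each pass, so items[0] always denotes the element
 * currently being validated, never the head of the original array. */
static void
walk_pattern(const struct rte_flow_item *items)
{
        for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
                /* here items[0].type == items->type, unconditionally */
        }
}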
> > The testpmd patch in v1 & v2 changed the location of tunnel offload elements
> > in a flow rule.
> >
> >> Is it safe to remove the testpmd part? If so, why were the changes made
> >> in the first place?
> >>
> >
> > That patch was not a fix - it emphasized general usage of the tunnel
> > offload API.
> > The current testpmd code works fine.
> >
> >> And can you please reply to the questions on the testpmd patch for
> >> the record?
> >>
> >
> > I'll add my replies.
> >
> >> Thanks,
> >> Ferruh
> >
> > Regards,
> > Gregory
> >
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [dpdk-dev] [dpdk-stable] [PATCH v3] net/mlx5: fix tunnel offload private items location
2021-05-06 6:08 ` Gregory Etelson
@ 2021-05-06 7:40 ` Gregory Etelson
2021-05-06 7:59 ` [dpdk-dev] net/mlx5: mellanox cx5/cx6-dx increment "rx_phy_discard_packets" with DPDK rte_flow actions COUNT & DROP Arthas
0 siblings, 1 reply; 27+ messages in thread
From: Gregory Etelson @ 2021-05-06 7:40 UTC (permalink / raw)
To: Ferruh Yigit, dev
Cc: Matan Azrad, Ori Kam, Raslan Darawsheh, stable, Slava Ovsiienko,
Shahaf Shuler
Hello Ferruh,
[:snip:]
> > btw, I can see 'MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL' is still checked to be in
> > the first location of the items (items[0].type), in
> > 'flow_dv_validate()', 'mlx5_flow_dv.c':
> >
> https://git.dpdk.org/dpdk/tree/drivers/net/mlx5/mlx5_flow_dv.c?h=v21.0
> > 5-rc1#n6298
> > Can you please confirm that this is expected?
> >
>
> The `items` variable in that function iterates over the pattern array:
>
> for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
>
> Since `items` advances on every iteration, by the time the switch reaches
>
> case MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL:
> if (items[0].type !=
> (typeof(items[0].type))MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL)
>
> compares the current item with itself and can never fail.
>
I'll post an updated patch.
Regards,
Gregory
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] net/mlx5: mellanox cx5/cx6-dx increment "rx_phy_discard_packets" with DPDK rte_flow actions COUNT & DROP
2021-05-06 7:40 ` Gregory Etelson
@ 2021-05-06 7:59 ` Arthas
2021-05-06 10:55 ` Slava Ovsiienko
0 siblings, 1 reply; 27+ messages in thread
From: Arthas @ 2021-05-06 7:59 UTC (permalink / raw)
To: Gregory Etelson, dev
Cc: Matan Azrad, Ori Kam, Raslan Darawsheh, stable, Slava Ovsiienko,
Shahaf Shuler
Hardware: CX5/CX6 DX + Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz
DPDK version: 19.11.8/20.11/21.05-rc1&2
testpmd test case:
testpmd> flow create 0 ingress pattern eth / ipv4 / udp dst is 53 / end actions count / drop / end
testpmd> flow create 0 ingress pattern eth / ipv4 / udp src is 53 / end actions count / drop / end
testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end actions count / drop / end
testpmd> flow list 0
ID Group Prio Attr Rule
0 0 0 i-- ETH IPV4 UDP => COUNT DROP
1 0 0 i-- ETH IPV4 UDP => COUNT DROP
2 0 0 i-- ETH IPV4 UDP => COUNT DROP
or
testpmd> flow create 0 ingress pattern eth / ipv4 / udp dst is 53 / end actions count / rss / end
testpmd> flow create 0 ingress pattern eth / ipv4 / udp src is 53 / end actions count / rss / end
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / end actions count / rss / end
testpmd> flow list 0
ID Group Prio Attr Rule
0 0 0 i-- ETH IPV4 UDP => COUNT RSS
1 0 0 i-- ETH IPV4 UDP => COUNT RSS
2 0 0 i-- ETH IPV4 UDP => COUNT RSS
testpmd>
As soon as the NIC has more than one flow created, the CX5/CX6-DX NIC increments 'rx_phy_discard_packets'.
With only one flow there is no problem!
Is this a CX5/CX6-DX hardware issue,
or is it a DPDK mlx5 PMD bug?
Best Regards!
KANG
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [dpdk-dev] net/mlx5: mellanox cx5/cx6-dx increment "rx_phy_discard_packets" with DPDK rte_flow actions COUNT & DROP
2021-05-06 7:59 ` [dpdk-dev] net/mlx5: mellanox cx5/cx6-dx increment "rx_phy_discard_packets" with DPDK rte_flow actions COUNT & DROP Arthas
@ 2021-05-06 10:55 ` Slava Ovsiienko
0 siblings, 0 replies; 27+ messages in thread
From: Slava Ovsiienko @ 2021-05-06 10:55 UTC (permalink / raw)
To: Arthas, Gregory Etelson, dev
Cc: Matan Azrad, Ori Kam, Raslan Darawsheh, Shahaf Shuler
Hi, Kang
There are some questions to clarify:
- What is the packet rate (in packets per second)?
- What is the packet size?
- Do you use the switchdev configuration (E-Switch)?
- Could you try creating all flows in group 1 (and have the first flow in group 0 forward all the traffic to group 1)? See the example commands below.
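For example (hypothetical commands, assuming port 0 and the UDP drop rules from your report):
testpmd> flow create 0 ingress group 0 pattern eth / end actions jump group 1 / end
testpmd> flow create 0 ingress group 1 pattern eth / ipv4 / udp dst is 53 / end actions count / drop / end
testpmd> flow create 0 ingress group 1 pattern eth / ipv4 / udp src is 53 / end actions count / drop / end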
With best regards,
Slava
From: Arthas <kangzy1982@qq.com>
Sent: Thursday, May 6, 2021 11:00
To: Gregory Etelson <getelson@nvidia.com>; dev@dpdk.org
Cc: Matan Azrad <matan@nvidia.com>; Ori Kam <orika@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>; stable@dpdk.org; Slava Ovsiienko <viacheslavo@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>
Subject: [dpdk-dev] net/mlx5: mellanox cx5/cx6-dx increment "rx_phy_discard_packets" with DPDK rte_flow actions COUNT & DROP
Hardware: CX5/CX6 DX + Intel(R) Xeon(R) Platinum 9242 CPU @ 2.30GHz
DPDK version: 19.11.8/20.11/21.05-rc1&2
testpmd test case:
testpmd> flow create 0 ingress pattern eth / ipv4 / udp dst is 53 / end actions count / drop / end
testpmd> flow create 0 ingress pattern eth / ipv4 / udp src is 53 / end actions count / drop / end
testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end actions count / drop / end
testpmd> flow list 0
ID Group Prio Attr Rule
0 0 0 i-- ETH IPV4 UDP => COUNT DROP
1 0 0 i-- ETH IPV4 UDP => COUNT DROP
2 0 0 i-- ETH IPV4 UDP => COUNT DROP
or
testpmd> flow create 0 ingress pattern eth / ipv4 / udp dst is 53 / end actions count / rss / end
testpmd> flow create 0 ingress pattern eth / ipv4 / udp src is 53 / end actions count / rss / end
testpmd> flow create 0 ingress pattern eth / ipv4 / udp / end actions count / rss / end
testpmd> flow list 0
ID Group Prio Attr Rule
0 0 0 i-- ETH IPV4 UDP => COUNT RSS
1 0 0 i-- ETH IPV4 UDP => COUNT RSS
2 0 0 i-- ETH IPV4 UDP => COUNT RSS
testpmd>
As soon as the NIC has more than one flow created, the CX5/CX6-DX NIC increments 'rx_phy_discard_packets'.
With only one flow there is no problem!
Is this a CX5/CX6-DX hardware issue,
or is it a DPDK mlx5 PMD bug?
Best Regards!
KANG
^ permalink raw reply [flat|nested] 27+ messages in thread
* [dpdk-dev] [PATCH v4] net/mlx5: fix tunnel offload private items location
2021-04-19 13:02 [dpdk-dev] [PATCH 1/2] net/mlx5: fix tunnel offload private items location Gregory Etelson
` (2 preceding siblings ...)
2021-05-02 8:08 ` [dpdk-dev] [PATCH v3] " Gregory Etelson
@ 2021-05-06 9:57 ` Gregory Etelson
2021-05-10 12:29 ` Gregory Etelson
2021-05-11 22:16 ` Ferruh Yigit
3 siblings, 2 replies; 27+ messages in thread
From: Gregory Etelson @ 2021-05-06 9:57 UTC (permalink / raw)
To: dev
Cc: getelson, matan, orika, rasland, ferruh.yigit, stable,
Viacheslav Ovsiienko, Shahaf Shuler
Tunnel offload API requires the application to query the PMD for specific flow
items and actions. The application uses these PMD-specific elements to
build flow rules according to the tunnel offload model.
The model does not restrict the location of private elements in a flow rule,
but the current MLX5 PMD implementation expects that a tunnel offload
rule will begin with the PMD-specific elements.
The patch removes that placement limitation.
Cc: stable@dpdk.org
Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
v2: Update the patch comment.
v3: Remove testpmd patch from the series.
v4: Remove redundant verifications in flow_dv_validate.
---
drivers/net/mlx5/mlx5_flow.c | 48 ++++++++++++------
drivers/net/mlx5/mlx5_flow.h | 46 ++++++++++-------
drivers/net/mlx5/mlx5_flow_dv.c | 88 ++++++++++++++++-----------------
3 files changed, 102 insertions(+), 80 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 3194cd5633..f464271d42 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -50,6 +50,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_action *app_actions,
uint32_t flow_idx,
+ const struct mlx5_flow_tunnel *tunnel,
struct tunnel_default_miss_ctx *ctx,
struct rte_flow_error *error);
static struct mlx5_flow_tunnel *
@@ -5958,22 +5959,14 @@ flow_create_split_outer(struct rte_eth_dev *dev,
return ret;
}
-static struct mlx5_flow_tunnel *
-flow_tunnel_from_rule(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item items[],
- const struct rte_flow_action actions[])
+static inline struct mlx5_flow_tunnel *
+flow_tunnel_from_rule(const struct mlx5_flow *flow)
{
struct mlx5_flow_tunnel *tunnel;
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wcast-qual"
- if (is_flow_tunnel_match_rule(dev, attr, items, actions))
- tunnel = (struct mlx5_flow_tunnel *)items[0].spec;
- else if (is_flow_tunnel_steer_rule(dev, attr, items, actions))
- tunnel = (struct mlx5_flow_tunnel *)actions[0].conf;
- else
- tunnel = NULL;
+ tunnel = (typeof(tunnel))flow->tunnel;
#pragma GCC diagnostic pop
return tunnel;
@@ -6168,12 +6161,11 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
error);
if (ret < 0)
goto error;
- if (is_flow_tunnel_steer_rule(dev, attr,
- buf->entry[i].pattern,
- p_actions_rx)) {
+ if (is_flow_tunnel_steer_rule(wks->flows[0].tof_type)) {
ret = flow_tunnel_add_default_miss(dev, flow, attr,
p_actions_rx,
idx,
+ wks->flows[0].tunnel,
&default_miss_ctx,
error);
if (ret < 0) {
@@ -6237,7 +6229,7 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
}
flow_rxq_flags_set(dev, flow);
rte_free(translated_actions);
- tunnel = flow_tunnel_from_rule(dev, attr, items, actions);
+ tunnel = flow_tunnel_from_rule(wks->flows);
if (tunnel) {
flow->tunnel = 1;
flow->tunnel_id = tunnel->tunnel_id;
@@ -8152,6 +8144,28 @@ int rte_pmd_mlx5_sync_flow(uint16_t port_id, uint32_t domains)
return ret;
}
+const struct mlx5_flow_tunnel *
+mlx5_get_tof(const struct rte_flow_item *item,
+ const struct rte_flow_action *action,
+ enum mlx5_tof_rule_type *rule_type)
+{
+ for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
+ if (item->type == (typeof(item->type))
+ MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL) {
+ *rule_type = MLX5_TUNNEL_OFFLOAD_MATCH_RULE;
+ return flow_items_to_tunnel(item);
+ }
+ }
+ for (; action->type != RTE_FLOW_ACTION_TYPE_END; action++) {
+ if (action->type == (typeof(action->type))
+ MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET) {
+ *rule_type = MLX5_TUNNEL_OFFLOAD_SET_RULE;
+ return flow_actions_to_tunnel(action);
+ }
+ }
+ return NULL;
+}
+
/**
* tunnel offload functionality is defined for DV environment only
*/
@@ -8182,13 +8196,13 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_action *app_actions,
uint32_t flow_idx,
+ const struct mlx5_flow_tunnel *tunnel,
struct tunnel_default_miss_ctx *ctx,
struct rte_flow_error *error)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_flow *dev_flow;
struct rte_flow_attr miss_attr = *attr;
- const struct mlx5_flow_tunnel *tunnel = app_actions[0].conf;
const struct rte_flow_item miss_items[2] = {
{
.type = RTE_FLOW_ITEM_TYPE_ETH,
@@ -8274,6 +8288,7 @@ flow_tunnel_add_default_miss(struct rte_eth_dev *dev,
dev_flow->flow = flow;
dev_flow->external = true;
dev_flow->tunnel = tunnel;
+ dev_flow->tof_type = MLX5_TUNNEL_OFFLOAD_MISS_RULE;
/* Subflow object was created, we must include one in the list. */
SILIST_INSERT(&flow->dev_handles, dev_flow->handle_idx,
dev_flow->handle, next);
@@ -8887,6 +8902,7 @@ flow_tunnel_add_default_miss(__rte_unused struct rte_eth_dev *dev,
__rte_unused const struct rte_flow_attr *attr,
__rte_unused const struct rte_flow_action *actions,
__rte_unused uint32_t flow_idx,
+ __rte_unused const struct mlx5_flow_tunnel *tunnel,
__rte_unused struct tunnel_default_miss_ctx *ctx,
__rte_unused struct rte_flow_error *error)
{
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 5365699426..04c8806bf6 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -819,6 +819,16 @@ struct mlx5_flow_verbs_workspace {
/** Maximal number of device sub-flows supported. */
#define MLX5_NUM_MAX_DEV_FLOWS 32
+/**
+ * tunnel offload rules type
+ */
+enum mlx5_tof_rule_type {
+ MLX5_TUNNEL_OFFLOAD_NONE = 0,
+ MLX5_TUNNEL_OFFLOAD_SET_RULE,
+ MLX5_TUNNEL_OFFLOAD_MATCH_RULE,
+ MLX5_TUNNEL_OFFLOAD_MISS_RULE,
+};
+
/** Device flow structure. */
__extension__
struct mlx5_flow {
@@ -854,6 +864,7 @@ struct mlx5_flow {
struct mlx5_flow_handle *handle;
uint32_t handle_idx; /* Index of the mlx5 flow handle memory. */
const struct mlx5_flow_tunnel *tunnel;
+ enum mlx5_tof_rule_type tof_type;
};
/* Flow meter state. */
@@ -949,10 +960,10 @@ mlx5_tunnel_hub(struct rte_eth_dev *dev)
}
static inline bool
-is_tunnel_offload_active(struct rte_eth_dev *dev)
+is_tunnel_offload_active(const struct rte_eth_dev *dev)
{
#ifdef HAVE_IBV_FLOW_DV_SUPPORT
- struct mlx5_priv *priv = dev->data->dev_private;
+ const struct mlx5_priv *priv = dev->data->dev_private;
return !!priv->config.dv_miss_info;
#else
RTE_SET_USED(dev);
@@ -961,23 +972,15 @@ is_tunnel_offload_active(struct rte_eth_dev *dev)
}
static inline bool
-is_flow_tunnel_match_rule(__rte_unused struct rte_eth_dev *dev,
- __rte_unused const struct rte_flow_attr *attr,
- __rte_unused const struct rte_flow_item items[],
- __rte_unused const struct rte_flow_action actions[])
+is_flow_tunnel_match_rule(enum mlx5_tof_rule_type tof_rule_type)
{
- return (items[0].type == (typeof(items[0].type))
- MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL);
+ return tof_rule_type == MLX5_TUNNEL_OFFLOAD_MATCH_RULE;
}
static inline bool
-is_flow_tunnel_steer_rule(__rte_unused struct rte_eth_dev *dev,
- __rte_unused const struct rte_flow_attr *attr,
- __rte_unused const struct rte_flow_item items[],
- __rte_unused const struct rte_flow_action actions[])
+is_flow_tunnel_steer_rule(enum mlx5_tof_rule_type tof_rule_type)
{
- return (actions[0].type == (typeof(actions[0].type))
- MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET);
+ return tof_rule_type == MLX5_TUNNEL_OFFLOAD_SET_RULE;
}
static inline const struct mlx5_flow_tunnel *
@@ -1273,11 +1276,10 @@ struct flow_grp_info {
static inline bool
tunnel_use_standard_attr_group_translate
- (struct rte_eth_dev *dev,
- const struct mlx5_flow_tunnel *tunnel,
+ (const struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
- const struct rte_flow_item items[],
- const struct rte_flow_action actions[])
+ const struct mlx5_flow_tunnel *tunnel,
+ enum mlx5_tof_rule_type tof_rule_type)
{
bool verdict;
@@ -1293,7 +1295,7 @@ tunnel_use_standard_attr_group_translate
* method
*/
verdict = !attr->group &&
- is_flow_tunnel_steer_rule(dev, attr, items, actions);
+ is_flow_tunnel_steer_rule(tof_rule_type);
} else {
/*
* non-tunnel group translation uses standard method for
@@ -1681,4 +1683,10 @@ int mlx5_flow_create_def_policy(struct rte_eth_dev *dev);
void mlx5_flow_destroy_def_policy(struct rte_eth_dev *dev);
void flow_drv_rxq_flags_set(struct rte_eth_dev *dev,
struct mlx5_flow_handle *dev_handle);
+const struct mlx5_flow_tunnel *
+mlx5_get_tof(const struct rte_flow_item *items,
+ const struct rte_flow_action *actions,
+ enum mlx5_tof_rule_type *rule_type);
+
+
#endif /* RTE_PMD_MLX5_FLOW_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 70e8d0b113..10ca342edc 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -6627,10 +6627,12 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
uint32_t rw_act_num = 0;
uint64_t is_root;
const struct mlx5_flow_tunnel *tunnel;
+ enum mlx5_tof_rule_type tof_rule_type;
struct flow_grp_info grp_info = {
.external = !!external,
.transfer = !!attr->transfer,
.fdb_def_rule = !!priv->fdb_def_rule,
+ .std_tbl_fix = true,
};
const struct rte_eth_hairpin_conf *conf;
const struct rte_flow_item *rule_items = items;
@@ -6638,23 +6640,22 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
if (items == NULL)
return -1;
- if (is_flow_tunnel_match_rule(dev, attr, items, actions)) {
- tunnel = flow_items_to_tunnel(items);
- action_flags |= MLX5_FLOW_ACTION_TUNNEL_MATCH |
- MLX5_FLOW_ACTION_DECAP;
- } else if (is_flow_tunnel_steer_rule(dev, attr, items, actions)) {
- tunnel = flow_actions_to_tunnel(actions);
- action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET;
- } else {
- tunnel = NULL;
+ tunnel = is_tunnel_offload_active(dev) ?
+ mlx5_get_tof(items, actions, &tof_rule_type) : NULL;
+ if (tunnel) {
+ if (priv->representor)
+ return rte_flow_error_set
+ (error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL, "decap not supported for VF representor");
+ if (tof_rule_type == MLX5_TUNNEL_OFFLOAD_SET_RULE)
+ action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET;
+ else if (tof_rule_type == MLX5_TUNNEL_OFFLOAD_MATCH_RULE)
+ action_flags |= MLX5_FLOW_ACTION_TUNNEL_MATCH |
+ MLX5_FLOW_ACTION_DECAP;
+ grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate
+ (dev, attr, tunnel, tof_rule_type);
}
- if (tunnel && priv->representor)
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "decap not supported "
- "for VF representor");
- grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate
- (dev, tunnel, attr, items, actions);
ret = flow_dv_validate_attributes(dev, tunnel, attr, &grp_info, error);
if (ret < 0)
return ret;
@@ -6668,15 +6669,6 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
RTE_FLOW_ERROR_TYPE_ITEM,
NULL, "item not supported");
switch (type) {
- case MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL:
- if (items[0].type != (typeof(items[0].type))
- MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL)
- return rte_flow_error_set
- (error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM,
- NULL, "MLX5 private items "
- "must be the first");
- break;
case RTE_FLOW_ITEM_TYPE_VOID:
break;
case RTE_FLOW_ITEM_TYPE_PORT_ID:
@@ -6975,6 +6967,11 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
if (ret < 0)
return ret;
break;
+ case MLX5_RTE_FLOW_ITEM_TYPE_TUNNEL:
+ /* tunnel offload item was processed before;
+ * list it here as a supported type.
+ */
+ break;
default:
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ITEM,
@@ -7516,17 +7513,6 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
action_flags |= MLX5_FLOW_ACTION_SAMPLE;
++actions_n;
break;
- case MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET:
- if (actions[0].type != (typeof(actions[0].type))
- MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET)
- return rte_flow_error_set
- (error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- NULL, "MLX5 private action "
- "must be the first");
-
- action_flags |= MLX5_FLOW_ACTION_TUNNEL_SET;
- break;
case RTE_FLOW_ACTION_TYPE_MODIFY_FIELD:
ret = flow_dv_validate_action_modify_field(dev,
action_flags,
@@ -7551,6 +7537,11 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
return ret;
action_flags |= MLX5_FLOW_ACTION_CT;
break;
+ case MLX5_RTE_FLOW_ACTION_TYPE_TUNNEL_SET:
+ /* tunnel offload action was processed before;
+ * list it here as a supported type.
+ */
+ break;
default:
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ACTION,
@@ -12035,13 +12026,14 @@ flow_dv_translate(struct rte_eth_dev *dev,
int tmp_actions_n = 0;
uint32_t table;
int ret = 0;
- const struct mlx5_flow_tunnel *tunnel;
+ const struct mlx5_flow_tunnel *tunnel = NULL;
struct flow_grp_info grp_info = {
.external = !!dev_flow->external,
.transfer = !!attr->transfer,
.fdb_def_rule = !!priv->fdb_def_rule,
.skip_scale = dev_flow->skip_scale &
(1 << MLX5_SCALE_FLOW_GROUP_BIT),
+ .std_tbl_fix = true,
};
const struct rte_flow_item *head_item = items;
@@ -12057,15 +12049,21 @@ flow_dv_translate(struct rte_eth_dev *dev,
MLX5DV_FLOW_TABLE_TYPE_NIC_RX;
/* update normal path action resource into last index of array */
sample_act = &mdest_res.sample_act[MLX5_MAX_DEST_NUM - 1];
- tunnel = is_flow_tunnel_match_rule(dev, attr, items, actions) ?
- flow_items_to_tunnel(items) :
- is_flow_tunnel_steer_rule(dev, attr, items, actions) ?
- flow_actions_to_tunnel(actions) :
- dev_flow->tunnel ? dev_flow->tunnel : NULL;
+ if (is_tunnel_offload_active(dev)) {
+ if (dev_flow->tunnel) {
+ RTE_VERIFY(dev_flow->tof_type ==
+ MLX5_TUNNEL_OFFLOAD_MISS_RULE);
+ tunnel = dev_flow->tunnel;
+ } else {
+ tunnel = mlx5_get_tof(items, actions,
+ &dev_flow->tof_type);
+ dev_flow->tunnel = tunnel;
+ }
+ grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate
+ (dev, attr, tunnel, dev_flow->tof_type);
+ }
mhdr_res->ft_type = attr->egress ? MLX5DV_FLOW_TABLE_TYPE_NIC_TX :
MLX5DV_FLOW_TABLE_TYPE_NIC_RX;
- grp_info.std_tbl_fix = tunnel_use_standard_attr_group_translate
- (dev, tunnel, attr, items, actions);
ret = mlx5_flow_group_to_table(dev, tunnel, attr->group, &table,
&grp_info, error);
if (ret)
@@ -12075,7 +12073,7 @@ flow_dv_translate(struct rte_eth_dev *dev,
mhdr_res->ft_type = MLX5DV_FLOW_TABLE_TYPE_FDB;
/* number of actions must be set to 0 in case of dirty stack. */
mhdr_res->actions_num = 0;
- if (is_flow_tunnel_match_rule(dev, attr, items, actions)) {
+ if (is_flow_tunnel_match_rule(dev_flow->tof_type)) {
/*
* do not add decap action if match rule drops packet
* HW rejects rules with decap & drop
--
2.31.1
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [dpdk-dev] [PATCH v4] net/mlx5: fix tunnel offload private items location
2021-05-06 9:57 ` [dpdk-dev] [PATCH v4] net/mlx5: fix tunnel offload private items location Gregory Etelson
@ 2021-05-10 12:29 ` Gregory Etelson
2021-05-10 16:10 ` Ferruh Yigit
2021-05-11 22:16 ` Ferruh Yigit
1 sibling, 1 reply; 27+ messages in thread
From: Gregory Etelson @ 2021-05-10 12:29 UTC (permalink / raw)
To: ferruh.yigit; +Cc: Raslan Darawsheh, Slava Ovsiienko, dev
Hello Ferruh,
The patch failed the ci/Intel-compilation test
(http://mails.dpdk.org/archives/test-report/2021-May/193327.html).
I've checked that with Thomas. He confirmed it was a known fault and
should not affect the current patch.
Can we proceed with the patch integration?
Thank you.
Regards,
Gregory
> -----Original Message-----
> From: Gregory Etelson <getelson@nvidia.com>
> Sent: Thursday, May 6, 2021 12:58
> To: dev@dpdk.org
> Cc: Gregory Etelson <getelson@nvidia.com>; Matan Azrad
> <matan@nvidia.com>; Ori Kam <orika@nvidia.com>; Raslan Darawsheh
> <rasland@nvidia.com>; ferruh.yigit@intel.com; stable@dpdk.org; Slava
> Ovsiienko <viacheslavo@nvidia.com>; Shahaf Shuler
> <shahafs@nvidia.com>
> Subject: [PATCH v4] net/mlx5: fix tunnel offload private items location
>
> Tunnel offload API requires the application to query the PMD for specific flow items
> and actions. The application uses these PMD-specific elements to build flow
> rules according to the tunnel offload model.
> The model does not restrict the location of private elements in a flow rule, but the
> current MLX5 PMD implementation expects that a tunnel offload rule will
> begin with the PMD-specific elements.
> The patch removes that placement limitation.
>
> Cc: stable@dpdk.org
>
> Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")
>
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---
> v2: Update the patch comment.
> v3: Remove testpmd patch from the series.
> v4: Remove redundant verifications in flow_dv_validate.
> ---
> drivers/net/mlx5/mlx5_flow.c | 48 ++++++++++++------
> drivers/net/mlx5/mlx5_flow.h | 46 ++++++++++-------
> drivers/net/mlx5/mlx5_flow_dv.c | 88 ++++++++++++++++-----------------
> 3 files changed, 102 insertions(+), 80 deletions(-)
>
<...>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [dpdk-dev] [PATCH v4] net/mlx5: fix tunnel offload private items location
2021-05-10 12:29 ` Gregory Etelson
@ 2021-05-10 16:10 ` Ferruh Yigit
0 siblings, 0 replies; 27+ messages in thread
From: Ferruh Yigit @ 2021-05-10 16:10 UTC (permalink / raw)
To: Gregory Etelson; +Cc: Raslan Darawsheh, Slava Ovsiienko, dev
On 5/10/2021 1:29 PM, Gregory Etelson wrote:
> Hello Ferruh,
>
> The patch failed the ci/Intel-compilation test
> (http://mails.dpdk.org/archives/test-report/2021-May/193327.html).
> I've checked that with Thomas. He confirmed it was a known fault and
> should not affect the current patch.
> Can we proceed with the patch integration?
>
Hi Gregory,
Thanks for the new version and the additional effort on clarification.
I agree that the CI issue is something else; the patch is already assigned to Raslan and
we can proceed with his tree.
Thanks,
ferruh
> Thank you.
>
> Regards,
> Gregory
>
>> -----Original Message-----
>> From: Gregory Etelson <getelson@nvidia.com>
>> Sent: Thursday, May 6, 2021 12:58
>> To: dev@dpdk.org
>> Cc: Gregory Etelson <getelson@nvidia.com>; Matan Azrad
>> <matan@nvidia.com>; Ori Kam <orika@nvidia.com>; Raslan Darawsheh
>> <rasland@nvidia.com>; ferruh.yigit@intel.com; stable@dpdk.org; Slava
>> Ovsiienko <viacheslavo@nvidia.com>; Shahaf Shuler
>> <shahafs@nvidia.com>
>> Subject: [PATCH v4] net/mlx5: fix tunnel offload private items location
>>
>> Tunnel offload API requires application to query PMD for specific flow items
>> and actions. Application uses these PMD specific elements to build flow
>> rules according to the tunnel offload model.
>> The model does not restrict private elements location in a flow rule, but the
>> current MLX5 PMD implementation expects that tunnel offload rule will
>> begin with PMD specific elements.
>> The patch removes that placement limitation.
>>
>> Cc: stable@dpdk.org
>>
>> Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")
>>
>> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
>> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
>> ---
>> v2: Update the patch comment.
>> v3: Remove testpmd patch from the series.
>> v4: Remove redundant verifications in flow_dv_validate.
<...>
^ permalink raw reply [flat|nested] 27+ messages in thread
* Re: [dpdk-dev] [PATCH v4] net/mlx5: fix tunnel offload private items location
2021-05-06 9:57 ` [dpdk-dev] [PATCH v4] net/mlx5: fix tunnel offload private items location Gregory Etelson
2021-05-10 12:29 ` Gregory Etelson
@ 2021-05-11 22:16 ` Ferruh Yigit
1 sibling, 0 replies; 27+ messages in thread
From: Ferruh Yigit @ 2021-05-11 22:16 UTC (permalink / raw)
To: Gregory Etelson, dev
Cc: matan, orika, rasland, stable, Viacheslav Ovsiienko, Shahaf Shuler
On 5/6/2021 10:57 AM, Gregory Etelson wrote:
> Tunnel offload API requires the application to query the PMD for specific flow
> items and actions. The application uses these PMD-specific elements to
> build flow rules according to the tunnel offload model.
> The model does not restrict the location of private elements in a flow rule,
> but the current MLX5 PMD implementation expects that a tunnel offload
> rule will begin with the PMD-specific elements.
> The patch removes that placement limitation.
>
> Cc: stable@dpdk.org
>
> Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")
>
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Applied to dpdk-next-net/main, thanks.
^ permalink raw reply [flat|nested] 27+ messages in thread