* [dpdk-dev] [PATCH 1/3] net/mlx5: fix Verbs flow tunnel
@ 2018-11-02 21:08 Yongseok Koh
From: Yongseok Koh @ 2018-11-02 21:08 UTC (permalink / raw)
To: Shahaf Shuler; +Cc: dev, Yongseok Koh, Ori Kam
1) Fix layer parsing
In translation of tunneled flows, dev_flow->layers must not be used to
check for a tunneled layer, as it contains all the layers parsed by
flow_drv_prepare(). Checking the tunnel layer is needed to set
IBV_FLOW_SPEC_INNER, and it must be based on dynamic parsing. With
dev_flow->layers on a tunneled flow, items would always be interpreted as
inner because dev_flow->layers already has all the items.
2) Refactor code
This is partly needed because flow_verbs_translate_item_*() sets the layer
flags. The same code repeats in multiple locations, which is error-prone.
- Introduce VERBS_SPEC_INNER() to unify setting IBV_FLOW_SPEC_INNER.
- flow_verbs_translate_item_*() no longer sets the parsing result -
MLX5_FLOW_LAYER_*.
- flow_verbs_translate_item_*() no longer sets priority or adjusts hash
fields; it performs only item translation. Both must be done by the caller.
- Make the code more consistent between Verbs and DV.
3) Remove flow_verbs_mark_update()
This code can never be reached because validation prohibits specifying the
mark and flag actions together, so there is no need to convert flag to mark.
Fixes: 84c406e74524 ("net/mlx5: add flow translate function")
Cc: orika@mellanox.com
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
---
drivers/net/mlx5/mlx5_flow_verbs.c | 568 +++++++++++++++++--------------------
1 file changed, 258 insertions(+), 310 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 2e506b91ad..ab58c04db5 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -33,6 +33,9 @@
#include "mlx5_glue.h"
#include "mlx5_flow.h"
+#define VERBS_SPEC_INNER(item_flags) \
+ (!!((item_flags) & MLX5_FLOW_LAYER_TUNNEL) ? IBV_FLOW_SPEC_INNER : 0)
+
/**
* Create Verbs flow counter with Verbs library.
*
@@ -231,27 +234,26 @@ flow_verbs_counter_query(struct rte_eth_dev *dev __rte_unused,
}
/**
- * Add a verbs item specification into @p flow.
+ * Add a verbs item specification into @p verbs.
*
- * @param[in, out] flow
- * Pointer to flow structure.
+ * @param[out] verbs
+ * Pointer to verbs structure.
* @param[in] src
* Create specification.
* @param[in] size
* Size in bytes of the specification to copy.
*/
static void
-flow_verbs_spec_add(struct mlx5_flow *flow, void *src, unsigned int size)
+flow_verbs_spec_add(struct mlx5_flow_verbs *verbs, void *src, unsigned int size)
{
- struct mlx5_flow_verbs *verbs = &flow->verbs;
+ void *dst;
- if (verbs->specs) {
- void *dst;
-
- dst = (void *)(verbs->specs + verbs->size);
- memcpy(dst, src, size);
- ++verbs->attr->num_of_specs;
- }
+ if (!verbs)
+ return;
+ assert(verbs->specs);
+ dst = (void *)(verbs->specs + verbs->size);
+ memcpy(dst, src, size);
+ ++verbs->attr->num_of_specs;
verbs->size += size;
}
@@ -260,24 +262,23 @@ flow_verbs_spec_add(struct mlx5_flow *flow, void *src, unsigned int size)
* the input is valid and that there is space to insert the requested item
* into the flow.
*
+ * @param[in, out] dev_flow
+ * Pointer to dev_flow structure.
* @param[in] item
* Item specification.
* @param[in] item_flags
- * Bit field with all detected items.
- * @param[in, out] dev_flow
- * Pointer to dev_flow structure.
+ * Parsed item flags.
*/
static void
-flow_verbs_translate_item_eth(const struct rte_flow_item *item,
- uint64_t *item_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_item_eth(struct mlx5_flow *dev_flow,
+ const struct rte_flow_item *item,
+ uint64_t item_flags)
{
const struct rte_flow_item_eth *spec = item->spec;
const struct rte_flow_item_eth *mask = item->mask;
- const int tunnel = !!(*item_flags & MLX5_FLOW_LAYER_TUNNEL);
const unsigned int size = sizeof(struct ibv_flow_spec_eth);
struct ibv_flow_spec_eth eth = {
- .type = IBV_FLOW_SPEC_ETH | (tunnel ? IBV_FLOW_SPEC_INNER : 0),
+ .type = IBV_FLOW_SPEC_ETH | VERBS_SPEC_INNER(item_flags),
.size = size,
};
@@ -298,11 +299,8 @@ flow_verbs_translate_item_eth(const struct rte_flow_item *item,
eth.val.src_mac[i] &= eth.mask.src_mac[i];
}
eth.val.ether_type &= eth.mask.ether_type;
- dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L2;
}
- flow_verbs_spec_add(dev_flow, ð, size);
- *item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L2 :
- MLX5_FLOW_LAYER_OUTER_L2;
+ flow_verbs_spec_add(&dev_flow->verbs, ð, size);
}
/**
@@ -344,24 +342,24 @@ flow_verbs_item_vlan_update(struct ibv_flow_attr *attr,
* the input is valid and that there is space to insert the requested item
* into the flow.
*
- * @param[in] item
- * Item specification.
- * @param[in, out] item_flags
- * Bit mask that holds all detected items.
* @param[in, out] dev_flow
* Pointer to dev_flow structure.
+ * @param[in] item
+ * Item specification.
+ * @param[in] item_flags
+ * Parsed item flags.
*/
static void
-flow_verbs_translate_item_vlan(const struct rte_flow_item *item,
- uint64_t *item_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_item_vlan(struct mlx5_flow *dev_flow,
+ const struct rte_flow_item *item,
+ uint64_t item_flags)
{
const struct rte_flow_item_vlan *spec = item->spec;
const struct rte_flow_item_vlan *mask = item->mask;
unsigned int size = sizeof(struct ibv_flow_spec_eth);
- const int tunnel = !!(*item_flags & MLX5_FLOW_LAYER_TUNNEL);
+ const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
struct ibv_flow_spec_eth eth = {
- .type = IBV_FLOW_SPEC_ETH | (tunnel ? IBV_FLOW_SPEC_INNER : 0),
+ .type = IBV_FLOW_SPEC_ETH | VERBS_SPEC_INNER(item_flags),
.size = size,
};
const uint32_t l2m = tunnel ? MLX5_FLOW_LAYER_INNER_L2 :
@@ -377,16 +375,10 @@ flow_verbs_translate_item_vlan(const struct rte_flow_item *item,
eth.mask.ether_type = mask->inner_type;
eth.val.ether_type &= eth.mask.ether_type;
}
- if (!(*item_flags & l2m)) {
- dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L2;
- flow_verbs_spec_add(dev_flow, ð, size);
- } else {
+ if (!(item_flags & l2m))
+ flow_verbs_spec_add(&dev_flow->verbs, ð, size);
+ else
flow_verbs_item_vlan_update(dev_flow->verbs.attr, ð);
- size = 0; /* Only an update is done in eth specification. */
- }
- *item_flags |= tunnel ?
- (MLX5_FLOW_LAYER_INNER_L2 | MLX5_FLOW_LAYER_INNER_VLAN) :
- (MLX5_FLOW_LAYER_OUTER_L2 | MLX5_FLOW_LAYER_OUTER_VLAN);
}
/**
@@ -394,32 +386,28 @@ flow_verbs_translate_item_vlan(const struct rte_flow_item *item,
* the input is valid and that there is space to insert the requested item
* into the flow.
*
+ * @param[in, out] dev_flow
+ * Pointer to dev_flow structure.
* @param[in] item
* Item specification.
- * @param[in, out] item_flags
- * Bit mask that marks all detected items.
- * @param[in, out] dev_flow
- * Pointer to sepacific flow structure.
+ * @param[in] item_flags
+ * Parsed item flags.
*/
static void
-flow_verbs_translate_item_ipv4(const struct rte_flow_item *item,
- uint64_t *item_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_item_ipv4(struct mlx5_flow *dev_flow,
+ const struct rte_flow_item *item,
+ uint64_t item_flags)
{
const struct rte_flow_item_ipv4 *spec = item->spec;
const struct rte_flow_item_ipv4 *mask = item->mask;
- const int tunnel = !!(*item_flags & MLX5_FLOW_LAYER_TUNNEL);
unsigned int size = sizeof(struct ibv_flow_spec_ipv4_ext);
struct ibv_flow_spec_ipv4_ext ipv4 = {
- .type = IBV_FLOW_SPEC_IPV4_EXT |
- (tunnel ? IBV_FLOW_SPEC_INNER : 0),
+ .type = IBV_FLOW_SPEC_IPV4_EXT | VERBS_SPEC_INNER(item_flags),
.size = size,
};
if (!mask)
mask = &rte_flow_item_ipv4_mask;
- *item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
- MLX5_FLOW_LAYER_OUTER_L3_IPV4;
if (spec) {
ipv4.val = (struct ibv_flow_ipv4_ext_filter){
.src_ip = spec->hdr.src_addr,
@@ -439,12 +427,7 @@ flow_verbs_translate_item_ipv4(const struct rte_flow_item *item,
ipv4.val.proto &= ipv4.mask.proto;
ipv4.val.tos &= ipv4.mask.tos;
}
- dev_flow->verbs.hash_fields |=
- mlx5_flow_hashfields_adjust(dev_flow, tunnel,
- MLX5_IPV4_LAYER_TYPES,
- MLX5_IPV4_IBV_RX_HASH);
- dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L3;
- flow_verbs_spec_add(dev_flow, &ipv4, size);
+ flow_verbs_spec_add(&dev_flow->verbs, &ipv4, size);
}
/**
@@ -452,31 +435,28 @@ flow_verbs_translate_item_ipv4(const struct rte_flow_item *item,
* the input is valid and that there is space to insert the requested item
* into the flow.
*
+ * @param[in, out] dev_flow
+ * Pointer to dev_flow structure.
* @param[in] item
* Item specification.
- * @param[in, out] item_flags
- * Bit mask that marks all detected items.
- * @param[in, out] dev_flow
- * Pointer to sepacific flow structure.
+ * @param[in] item_flags
+ * Parsed item flags.
*/
static void
-flow_verbs_translate_item_ipv6(const struct rte_flow_item *item,
- uint64_t *item_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_item_ipv6(struct mlx5_flow *dev_flow,
+ const struct rte_flow_item *item,
+ uint64_t item_flags)
{
const struct rte_flow_item_ipv6 *spec = item->spec;
const struct rte_flow_item_ipv6 *mask = item->mask;
- const int tunnel = !!(dev_flow->layers & MLX5_FLOW_LAYER_TUNNEL);
unsigned int size = sizeof(struct ibv_flow_spec_ipv6);
struct ibv_flow_spec_ipv6 ipv6 = {
- .type = IBV_FLOW_SPEC_IPV6 | (tunnel ? IBV_FLOW_SPEC_INNER : 0),
+ .type = IBV_FLOW_SPEC_IPV6 | VERBS_SPEC_INNER(item_flags),
.size = size,
};
if (!mask)
mask = &rte_flow_item_ipv6_mask;
- *item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 :
- MLX5_FLOW_LAYER_OUTER_L3_IPV6;
if (spec) {
unsigned int i;
uint32_t vtc_flow_val;
@@ -516,12 +496,7 @@ flow_verbs_translate_item_ipv6(const struct rte_flow_item *item,
ipv6.val.next_hdr &= ipv6.mask.next_hdr;
ipv6.val.hop_limit &= ipv6.mask.hop_limit;
}
- dev_flow->verbs.hash_fields |=
- mlx5_flow_hashfields_adjust(dev_flow, tunnel,
- MLX5_IPV6_LAYER_TYPES,
- MLX5_IPV6_IBV_RX_HASH);
- dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L3;
- flow_verbs_spec_add(dev_flow, &ipv6, size);
+ flow_verbs_spec_add(&dev_flow->verbs, &ipv6, size);
}
/**
@@ -529,46 +504,38 @@ flow_verbs_translate_item_ipv6(const struct rte_flow_item *item,
* the input is valid and that there is space to insert the requested item
* into the flow.
*
+ * @param[in, out] dev_flow
+ * Pointer to dev_flow structure.
* @param[in] item
* Item specification.
- * @param[in, out] item_flags
- * Bit mask that marks all detected items.
- * @param[in, out] dev_flow
- * Pointer to sepacific flow structure.
+ * @param[in] item_flags
+ * Parsed item flags.
*/
static void
-flow_verbs_translate_item_udp(const struct rte_flow_item *item,
- uint64_t *item_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_item_tcp(struct mlx5_flow *dev_flow,
+ const struct rte_flow_item *item,
+ uint64_t item_flags __rte_unused)
{
- const struct rte_flow_item_udp *spec = item->spec;
- const struct rte_flow_item_udp *mask = item->mask;
- const int tunnel = !!(*item_flags & MLX5_FLOW_LAYER_TUNNEL);
+ const struct rte_flow_item_tcp *spec = item->spec;
+ const struct rte_flow_item_tcp *mask = item->mask;
unsigned int size = sizeof(struct ibv_flow_spec_tcp_udp);
- struct ibv_flow_spec_tcp_udp udp = {
- .type = IBV_FLOW_SPEC_UDP | (tunnel ? IBV_FLOW_SPEC_INNER : 0),
+ struct ibv_flow_spec_tcp_udp tcp = {
+ .type = IBV_FLOW_SPEC_TCP | VERBS_SPEC_INNER(item_flags),
.size = size,
};
if (!mask)
- mask = &rte_flow_item_udp_mask;
- *item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
- MLX5_FLOW_LAYER_OUTER_L4_UDP;
+ mask = &rte_flow_item_tcp_mask;
if (spec) {
- udp.val.dst_port = spec->hdr.dst_port;
- udp.val.src_port = spec->hdr.src_port;
- udp.mask.dst_port = mask->hdr.dst_port;
- udp.mask.src_port = mask->hdr.src_port;
+ tcp.val.dst_port = spec->hdr.dst_port;
+ tcp.val.src_port = spec->hdr.src_port;
+ tcp.mask.dst_port = mask->hdr.dst_port;
+ tcp.mask.src_port = mask->hdr.src_port;
/* Remove unwanted bits from values. */
- udp.val.src_port &= udp.mask.src_port;
- udp.val.dst_port &= udp.mask.dst_port;
+ tcp.val.src_port &= tcp.mask.src_port;
+ tcp.val.dst_port &= tcp.mask.dst_port;
}
- dev_flow->verbs.hash_fields |=
- mlx5_flow_hashfields_adjust(dev_flow, tunnel, ETH_RSS_UDP,
- (IBV_RX_HASH_SRC_PORT_UDP |
- IBV_RX_HASH_DST_PORT_UDP));
- dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L4;
- flow_verbs_spec_add(dev_flow, &udp, size);
+ flow_verbs_spec_add(&dev_flow->verbs, &tcp, size);
}
/**
@@ -576,46 +543,38 @@ flow_verbs_translate_item_udp(const struct rte_flow_item *item,
* the input is valid and that there is space to insert the requested item
* into the flow.
*
+ * @param[in, out] dev_flow
+ * Pointer to dev_flow structure.
* @param[in] item
* Item specification.
- * @param[in, out] item_flags
- * Bit mask that marks all detected items.
- * @param[in, out] dev_flow
- * Pointer to sepacific flow structure.
+ * @param[in] item_flags
+ * Parsed item flags.
*/
static void
-flow_verbs_translate_item_tcp(const struct rte_flow_item *item,
- uint64_t *item_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_item_udp(struct mlx5_flow *dev_flow,
+ const struct rte_flow_item *item,
+ uint64_t item_flags __rte_unused)
{
- const struct rte_flow_item_tcp *spec = item->spec;
- const struct rte_flow_item_tcp *mask = item->mask;
- const int tunnel = !!(dev_flow->layers & MLX5_FLOW_LAYER_TUNNEL);
+ const struct rte_flow_item_udp *spec = item->spec;
+ const struct rte_flow_item_udp *mask = item->mask;
unsigned int size = sizeof(struct ibv_flow_spec_tcp_udp);
- struct ibv_flow_spec_tcp_udp tcp = {
- .type = IBV_FLOW_SPEC_TCP | (tunnel ? IBV_FLOW_SPEC_INNER : 0),
+ struct ibv_flow_spec_tcp_udp udp = {
+ .type = IBV_FLOW_SPEC_UDP | VERBS_SPEC_INNER(item_flags),
.size = size,
};
if (!mask)
- mask = &rte_flow_item_tcp_mask;
- *item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP :
- MLX5_FLOW_LAYER_OUTER_L4_TCP;
+ mask = &rte_flow_item_udp_mask;
if (spec) {
- tcp.val.dst_port = spec->hdr.dst_port;
- tcp.val.src_port = spec->hdr.src_port;
- tcp.mask.dst_port = mask->hdr.dst_port;
- tcp.mask.src_port = mask->hdr.src_port;
+ udp.val.dst_port = spec->hdr.dst_port;
+ udp.val.src_port = spec->hdr.src_port;
+ udp.mask.dst_port = mask->hdr.dst_port;
+ udp.mask.src_port = mask->hdr.src_port;
/* Remove unwanted bits from values. */
- tcp.val.src_port &= tcp.mask.src_port;
- tcp.val.dst_port &= tcp.mask.dst_port;
+ udp.val.src_port &= udp.mask.src_port;
+ udp.val.dst_port &= udp.mask.dst_port;
}
- dev_flow->verbs.hash_fields |=
- mlx5_flow_hashfields_adjust(dev_flow, tunnel, ETH_RSS_TCP,
- (IBV_RX_HASH_SRC_PORT_TCP |
- IBV_RX_HASH_DST_PORT_TCP));
- dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L4;
- flow_verbs_spec_add(dev_flow, &tcp, size);
+ flow_verbs_spec_add(&dev_flow->verbs, &udp, size);
}
/**
@@ -623,17 +582,17 @@ flow_verbs_translate_item_tcp(const struct rte_flow_item *item,
* the input is valid and that there is space to insert the requested item
* into the flow.
*
+ * @param[in, out] dev_flow
+ * Pointer to dev_flow structure.
* @param[in] item
* Item specification.
- * @param[in, out] item_flags
- * Bit mask that marks all detected items.
- * @param[in, out] dev_flow
- * Pointer to sepacific flow structure.
+ * @param[in] item_flags
+ * Parsed item flags.
*/
static void
-flow_verbs_translate_item_vxlan(const struct rte_flow_item *item,
- uint64_t *item_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_item_vxlan(struct mlx5_flow *dev_flow,
+ const struct rte_flow_item *item,
+ uint64_t item_flags __rte_unused)
{
const struct rte_flow_item_vxlan *spec = item->spec;
const struct rte_flow_item_vxlan *mask = item->mask;
@@ -657,9 +616,7 @@ flow_verbs_translate_item_vxlan(const struct rte_flow_item *item,
/* Remove unwanted bits from values. */
vxlan.val.tunnel_id &= vxlan.mask.tunnel_id;
}
- flow_verbs_spec_add(dev_flow, &vxlan, size);
- dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L2;
- *item_flags |= MLX5_FLOW_LAYER_VXLAN;
+ flow_verbs_spec_add(&dev_flow->verbs, &vxlan, size);
}
/**
@@ -667,17 +624,17 @@ flow_verbs_translate_item_vxlan(const struct rte_flow_item *item,
* the input is valid and that there is space to insert the requested item
* into the flow.
*
+ * @param[in, out] dev_flow
+ * Pointer to dev_flow structure.
* @param[in] item
* Item specification.
- * @param[in, out] item_flags
- * Bit mask that marks all detected items.
- * @param[in, out] dev_flow
- * Pointer to sepacific flow structure.
+ * @param[in] item_flags
+ * Parsed item flags.
*/
static void
-flow_verbs_translate_item_vxlan_gpe(const struct rte_flow_item *item,
- uint64_t *item_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_item_vxlan_gpe(struct mlx5_flow *dev_flow,
+ const struct rte_flow_item *item,
+ uint64_t item_flags __rte_unused)
{
const struct rte_flow_item_vxlan_gpe *spec = item->spec;
const struct rte_flow_item_vxlan_gpe *mask = item->mask;
@@ -701,9 +658,7 @@ flow_verbs_translate_item_vxlan_gpe(const struct rte_flow_item *item,
/* Remove unwanted bits from values. */
vxlan_gpe.val.tunnel_id &= vxlan_gpe.mask.tunnel_id;
}
- flow_verbs_spec_add(dev_flow, &vxlan_gpe, size);
- dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L2;
- *item_flags |= MLX5_FLOW_LAYER_VXLAN_GPE;
+ flow_verbs_spec_add(&dev_flow->verbs, &vxlan_gpe, size);
}
/**
@@ -763,17 +718,17 @@ flow_verbs_item_gre_ip_protocol_update(struct ibv_flow_attr *attr,
* the input is valid and that there is space to insert the requested item
* into the flow.
*
+ * @param[in, out] dev_flow
+ * Pointer to dev_flow structure.
* @param[in] item
* Item specification.
- * @param[in, out] item_flags
- * Bit mask that marks all detected items.
- * @param[in, out] dev_flow
- * Pointer to sepacific flow structure.
+ * @param[in] item_flags
+ * Parsed item flags.
*/
static void
-flow_verbs_translate_item_gre(const struct rte_flow_item *item __rte_unused,
- uint64_t *item_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_item_gre(struct mlx5_flow *dev_flow,
+ const struct rte_flow_item *item __rte_unused,
+ uint64_t item_flags)
{
struct mlx5_flow_verbs *verbs = &dev_flow->verbs;
#ifndef HAVE_IBV_DEVICE_MPLS_SUPPORT
@@ -804,7 +759,7 @@ flow_verbs_translate_item_gre(const struct rte_flow_item *item __rte_unused,
tunnel.val.key &= tunnel.mask.key;
}
#endif
- if (*item_flags & MLX5_FLOW_LAYER_OUTER_L3_IPV4)
+ if (item_flags & MLX5_FLOW_LAYER_OUTER_L3_IPV4)
flow_verbs_item_gre_ip_protocol_update(verbs->attr,
IBV_FLOW_SPEC_IPV4_EXT,
IPPROTO_GRE);
@@ -812,9 +767,7 @@ flow_verbs_translate_item_gre(const struct rte_flow_item *item __rte_unused,
flow_verbs_item_gre_ip_protocol_update(verbs->attr,
IBV_FLOW_SPEC_IPV6,
IPPROTO_GRE);
- flow_verbs_spec_add(dev_flow, &tunnel, size);
- verbs->attr->priority = MLX5_PRIORITY_MAP_L2;
- *item_flags |= MLX5_FLOW_LAYER_GRE;
+ flow_verbs_spec_add(verbs, &tunnel, size);
}
/**
@@ -822,17 +775,17 @@ flow_verbs_translate_item_gre(const struct rte_flow_item *item __rte_unused,
* the input is valid and that there is space to insert the requested action
* into the flow. This function also return the action that was added.
*
+ * @param[in, out] dev_flow
+ * Pointer to dev_flow structure.
* @param[in] item
* Item specification.
- * @param[in, out] item_flags
- * Bit mask that marks all detected items.
- * @param[in, out] dev_flow
- * Pointer to sepacific flow structure.
+ * @param[in] item_flags
+ * Parsed item flags.
*/
static void
-flow_verbs_translate_item_mpls(const struct rte_flow_item *item __rte_unused,
- uint64_t *action_flags __rte_unused,
- struct mlx5_flow *dev_flow __rte_unused)
+flow_verbs_translate_item_mpls(struct mlx5_flow *dev_flow __rte_unused,
+ const struct rte_flow_item *item __rte_unused,
+ uint64_t item_flags __rte_unused)
{
#ifdef HAVE_IBV_DEVICE_MPLS_SUPPORT
const struct rte_flow_item_mpls *spec = item->spec;
@@ -851,25 +804,24 @@ flow_verbs_translate_item_mpls(const struct rte_flow_item *item __rte_unused,
/* Remove unwanted bits from values. */
mpls.val.label &= mpls.mask.label;
}
- flow_verbs_spec_add(dev_flow, &mpls, size);
- dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L2;
- *action_flags |= MLX5_FLOW_LAYER_MPLS;
+ flow_verbs_spec_add(&dev_flow->verbs, &mpls, size);
#endif
}
/**
* Convert the @p action into a Verbs specification. This function assumes that
* the input is valid and that there is space to insert the requested action
- * into the flow. This function also return the action that was added.
+ * into the flow.
*
- * @param[in, out] action_flags
- * Pointer to the detected actions.
* @param[in] dev_flow
* Pointer to mlx5_flow.
+ * @param[in] action
+ * Action configuration.
*/
static void
-flow_verbs_translate_action_drop(uint64_t *action_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_action_drop
+ (struct mlx5_flow *dev_flow,
+ const struct rte_flow_action *action __rte_unused)
{
unsigned int size = sizeof(struct ibv_flow_spec_action_drop);
struct ibv_flow_spec_action_drop drop = {
@@ -877,26 +829,22 @@ flow_verbs_translate_action_drop(uint64_t *action_flags,
.size = size,
};
- flow_verbs_spec_add(dev_flow, &drop, size);
- *action_flags |= MLX5_FLOW_ACTION_DROP;
+ flow_verbs_spec_add(&dev_flow->verbs, &drop, size);
}
/**
* Convert the @p action into a Verbs specification. This function assumes that
* the input is valid and that there is space to insert the requested action
- * into the flow. This function also return the action that was added.
+ * into the flow.
*
- * @param[in] action
- * Action configuration.
- * @param[in, out] action_flags
- * Pointer to the detected actions.
* @param[in] dev_flow
* Pointer to mlx5_flow.
+ * @param[in] action
+ * Action configuration.
*/
static void
-flow_verbs_translate_action_queue(const struct rte_flow_action *action,
- uint64_t *action_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_action_queue(struct mlx5_flow *dev_flow,
+ const struct rte_flow_action *action)
{
const struct rte_flow_action_queue *queue = action->conf;
struct rte_flow *flow = dev_flow->flow;
@@ -904,13 +852,12 @@ flow_verbs_translate_action_queue(const struct rte_flow_action *action,
if (flow->queue)
(*flow->queue)[0] = queue->index;
flow->rss.queue_num = 1;
- *action_flags |= MLX5_FLOW_ACTION_QUEUE;
}
/**
* Convert the @p action into a Verbs specification. This function assumes that
* the input is valid and that there is space to insert the requested action
- * into the flow. This function also return the action that was added.
+ * into the flow.
*
* @param[in] action
* Action configuration.
@@ -920,9 +867,8 @@ flow_verbs_translate_action_queue(const struct rte_flow_action *action,
* Pointer to mlx5_flow.
*/
static void
-flow_verbs_translate_action_rss(const struct rte_flow_action *action,
- uint64_t *action_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_action_rss(struct mlx5_flow *dev_flow,
+ const struct rte_flow_action *action)
{
const struct rte_flow_action_rss *rss = action->conf;
struct rte_flow *flow = dev_flow->flow;
@@ -934,26 +880,22 @@ flow_verbs_translate_action_rss(const struct rte_flow_action *action,
memcpy(flow->key, rss->key, MLX5_RSS_HASH_KEY_LEN);
flow->rss.types = rss->types;
flow->rss.level = rss->level;
- *action_flags |= MLX5_FLOW_ACTION_RSS;
}
/**
* Convert the @p action into a Verbs specification. This function assumes that
* the input is valid and that there is space to insert the requested action
- * into the flow. This function also return the action that was added.
+ * into the flow.
*
- * @param[in] action
- * Action configuration.
- * @param[in, out] action_flags
- * Pointer to the detected actions.
* @param[in] dev_flow
* Pointer to mlx5_flow.
+ * @param[in] action
+ * Action configuration.
*/
static void
flow_verbs_translate_action_flag
- (const struct rte_flow_action *action __rte_unused,
- uint64_t *action_flags,
- struct mlx5_flow *dev_flow)
+ (struct mlx5_flow *dev_flow,
+ const struct rte_flow_action *action __rte_unused)
{
unsigned int size = sizeof(struct ibv_flow_spec_action_tag);
struct ibv_flow_spec_action_tag tag = {
@@ -961,87 +903,44 @@ flow_verbs_translate_action_flag
.size = size,
.tag_id = mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT),
};
- *action_flags |= MLX5_FLOW_ACTION_MARK;
- flow_verbs_spec_add(dev_flow, &tag, size);
-}
-/**
- * Update verbs specification to modify the flag to mark.
- *
- * @param[in, out] verbs
- * Pointer to the mlx5_flow_verbs structure.
- * @param[in] mark_id
- * Mark identifier to replace the flag.
- */
-static void
-flow_verbs_mark_update(struct mlx5_flow_verbs *verbs, uint32_t mark_id)
-{
- struct ibv_spec_header *hdr;
- int i;
-
- if (!verbs)
- return;
- /* Update Verbs specification. */
- hdr = (struct ibv_spec_header *)verbs->specs;
- if (!hdr)
- return;
- for (i = 0; i != verbs->attr->num_of_specs; ++i) {
- if (hdr->type == IBV_FLOW_SPEC_ACTION_TAG) {
- struct ibv_flow_spec_action_tag *t =
- (struct ibv_flow_spec_action_tag *)hdr;
-
- t->tag_id = mlx5_flow_mark_set(mark_id);
- }
- hdr = (struct ibv_spec_header *)((uintptr_t)hdr + hdr->size);
- }
+ flow_verbs_spec_add(&dev_flow->verbs, &tag, size);
}
/**
* Convert the @p action into a Verbs specification. This function assumes that
* the input is valid and that there is space to insert the requested action
- * into the flow. This function also return the action that was added.
+ * into the flow.
*
- * @param[in] action
- * Action configuration.
- * @param[in, out] action_flags
- * Pointer to the detected actions.
* @param[in] dev_flow
* Pointer to mlx5_flow.
+ * @param[in] action
+ * Action configuration.
*/
static void
-flow_verbs_translate_action_mark(const struct rte_flow_action *action,
- uint64_t *action_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_action_mark(struct mlx5_flow *dev_flow,
+ const struct rte_flow_action *action)
{
const struct rte_flow_action_mark *mark = action->conf;
unsigned int size = sizeof(struct ibv_flow_spec_action_tag);
struct ibv_flow_spec_action_tag tag = {
.type = IBV_FLOW_SPEC_ACTION_TAG,
.size = size,
+ .tag_id = mlx5_flow_mark_set(mark->id),
};
- struct mlx5_flow_verbs *verbs = &dev_flow->verbs;
- if (*action_flags & MLX5_FLOW_ACTION_FLAG) {
- flow_verbs_mark_update(verbs, mark->id);
- size = 0;
- } else {
- tag.tag_id = mlx5_flow_mark_set(mark->id);
- flow_verbs_spec_add(dev_flow, &tag, size);
- }
- *action_flags |= MLX5_FLOW_ACTION_MARK;
+ flow_verbs_spec_add(&dev_flow->verbs, &tag, size);
}
/**
* Convert the @p action into a Verbs specification. This function assumes that
* the input is valid and that there is space to insert the requested action
- * into the flow. This function also return the action that was added.
+ * into the flow.
*
* @param[in] dev
* Pointer to the Ethernet device structure.
* @param[in] action
* Action configuration.
- * @param[in, out] action_flags
- * Pointer to the detected actions.
* @param[in] dev_flow
* Pointer to mlx5_flow.
* @param[out] error
@@ -1051,10 +950,9 @@ flow_verbs_translate_action_mark(const struct rte_flow_action *action,
* 0 On success else a negative errno value is returned and rte_errno is set.
*/
static int
-flow_verbs_translate_action_count(struct rte_eth_dev *dev,
+flow_verbs_translate_action_count(struct mlx5_flow *dev_flow,
const struct rte_flow_action *action,
- uint64_t *action_flags,
- struct mlx5_flow *dev_flow,
+ struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
const struct rte_flow_action_count *count = action->conf;
@@ -1078,13 +976,12 @@ flow_verbs_translate_action_count(struct rte_eth_dev *dev,
"cannot get counter"
" context.");
}
- *action_flags |= MLX5_FLOW_ACTION_COUNT;
#if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42)
counter.counter_set_handle = flow->counter->cs->handle;
- flow_verbs_spec_add(dev_flow, &counter, size);
+ flow_verbs_spec_add(&dev_flow->verbs, &counter, size);
#elif defined(HAVE_IBV_DEVICE_COUNTERS_SET_V45)
counter.counters = flow->counter->cs;
- flow_verbs_spec_add(dev_flow, &counter, size);
+ flow_verbs_spec_add(&dev_flow->verbs, &counter, size);
#endif
return 0;
}
@@ -1116,7 +1013,6 @@ flow_verbs_validate(struct rte_eth_dev *dev,
int ret;
uint64_t action_flags = 0;
uint64_t item_flags = 0;
- int tunnel = 0;
uint8_t next_protocol = 0xff;
if (items == NULL)
@@ -1125,9 +1021,9 @@ flow_verbs_validate(struct rte_eth_dev *dev,
if (ret < 0)
return ret;
for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
+ int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
int ret = 0;
- tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
switch (items->type) {
case RTE_FLOW_ITEM_TYPE_VOID:
break;
@@ -1144,8 +1040,10 @@ flow_verbs_validate(struct rte_eth_dev *dev,
error);
if (ret < 0)
return ret;
- item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_VLAN :
- MLX5_FLOW_LAYER_OUTER_VLAN;
+ item_flags |= tunnel ? (MLX5_FLOW_LAYER_INNER_L2 |
+ MLX5_FLOW_LAYER_INNER_VLAN) :
+ (MLX5_FLOW_LAYER_OUTER_L2 |
+ MLX5_FLOW_LAYER_OUTER_VLAN);
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
ret = mlx5_flow_validate_item_ipv4(items, item_flags,
@@ -1395,8 +1293,11 @@ flow_verbs_get_items_and_size(const struct rte_flow_item items[],
break;
case RTE_FLOW_ITEM_TYPE_VLAN:
size += sizeof(struct ibv_flow_spec_eth);
- detected_items |= tunnel ? MLX5_FLOW_LAYER_INNER_VLAN :
- MLX5_FLOW_LAYER_OUTER_VLAN;
+ detected_items |=
+ tunnel ? (MLX5_FLOW_LAYER_INNER_L2 |
+ MLX5_FLOW_LAYER_INNER_VLAN) :
+ (MLX5_FLOW_LAYER_OUTER_L2 |
+ MLX5_FLOW_LAYER_OUTER_VLAN);
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
size += sizeof(struct ibv_flow_spec_ipv4_ext);
@@ -1528,50 +1429,48 @@ flow_verbs_translate(struct rte_eth_dev *dev,
const struct rte_flow_action actions[],
struct rte_flow_error *error)
{
- uint64_t action_flags = 0;
+ struct rte_flow *flow = dev_flow->flow;
uint64_t item_flags = 0;
+ uint64_t action_flags = 0;
uint64_t priority = attr->priority;
+ uint32_t subpriority = 0;
struct priv *priv = dev->data->dev_private;
if (priority == MLX5_FLOW_PRIO_RSVD)
priority = priv->config.flow_prio - 1;
for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
int ret;
+
switch (actions->type) {
case RTE_FLOW_ACTION_TYPE_VOID:
break;
case RTE_FLOW_ACTION_TYPE_FLAG:
- flow_verbs_translate_action_flag(actions,
- &action_flags,
- dev_flow);
+ flow_verbs_translate_action_flag(dev_flow, actions);
+ action_flags |= MLX5_FLOW_ACTION_FLAG;
break;
case RTE_FLOW_ACTION_TYPE_MARK:
- flow_verbs_translate_action_mark(actions,
- &action_flags,
- dev_flow);
+ flow_verbs_translate_action_mark(dev_flow, actions);
+ action_flags |= MLX5_FLOW_ACTION_MARK;
break;
case RTE_FLOW_ACTION_TYPE_DROP:
- flow_verbs_translate_action_drop(&action_flags,
- dev_flow);
+ flow_verbs_translate_action_drop(dev_flow, actions);
+ action_flags |= MLX5_FLOW_ACTION_DROP;
break;
case RTE_FLOW_ACTION_TYPE_QUEUE:
- flow_verbs_translate_action_queue(actions,
- &action_flags,
- dev_flow);
+ flow_verbs_translate_action_queue(dev_flow, actions);
+ action_flags |= MLX5_FLOW_ACTION_QUEUE;
break;
case RTE_FLOW_ACTION_TYPE_RSS:
- flow_verbs_translate_action_rss(actions,
- &action_flags,
- dev_flow);
+ flow_verbs_translate_action_rss(dev_flow, actions);
+ action_flags |= MLX5_FLOW_ACTION_RSS;
break;
case RTE_FLOW_ACTION_TYPE_COUNT:
- ret = flow_verbs_translate_action_count(dev,
+ ret = flow_verbs_translate_action_count(dev_flow,
actions,
- &action_flags,
- dev_flow,
- error);
+ dev, error);
if (ret < 0)
return ret;
+ action_flags |= MLX5_FLOW_ACTION_COUNT;
break;
default:
return rte_flow_error_set(error, ENOTSUP,
@@ -1580,51 +1479,100 @@ flow_verbs_translate(struct rte_eth_dev *dev,
"action not supported");
}
}
- /* Device flow should have action flags by flow_drv_prepare(). */
- assert(dev_flow->flow->actions == action_flags);
+ flow->actions = action_flags;
for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
+ int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
+
switch (items->type) {
case RTE_FLOW_ITEM_TYPE_VOID:
break;
case RTE_FLOW_ITEM_TYPE_ETH:
- flow_verbs_translate_item_eth(items, &item_flags,
- dev_flow);
+ flow_verbs_translate_item_eth(dev_flow, items,
+ item_flags);
+ subpriority = MLX5_PRIORITY_MAP_L2;
+ item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L2 :
+ MLX5_FLOW_LAYER_OUTER_L2;
break;
case RTE_FLOW_ITEM_TYPE_VLAN:
- flow_verbs_translate_item_vlan(items, &item_flags,
- dev_flow);
+ flow_verbs_translate_item_vlan(dev_flow, items,
+ item_flags);
+ subpriority = MLX5_PRIORITY_MAP_L2;
+ item_flags |= tunnel ? (MLX5_FLOW_LAYER_INNER_L2 |
+ MLX5_FLOW_LAYER_INNER_VLAN) :
+ (MLX5_FLOW_LAYER_OUTER_L2 |
+ MLX5_FLOW_LAYER_OUTER_VLAN);
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
- flow_verbs_translate_item_ipv4(items, &item_flags,
- dev_flow);
+ flow_verbs_translate_item_ipv4(dev_flow, items,
+ item_flags);
+ subpriority = MLX5_PRIORITY_MAP_L3;
+ dev_flow->verbs.hash_fields |=
+ mlx5_flow_hashfields_adjust
+ (dev_flow, tunnel,
+ MLX5_IPV4_LAYER_TYPES,
+ MLX5_IPV4_IBV_RX_HASH);
+ item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
+ MLX5_FLOW_LAYER_OUTER_L3_IPV4;
break;
case RTE_FLOW_ITEM_TYPE_IPV6:
- flow_verbs_translate_item_ipv6(items, &item_flags,
- dev_flow);
- break;
- case RTE_FLOW_ITEM_TYPE_UDP:
- flow_verbs_translate_item_udp(items, &item_flags,
- dev_flow);
+ flow_verbs_translate_item_ipv6(dev_flow, items,
+ item_flags);
+ subpriority = MLX5_PRIORITY_MAP_L3;
+ dev_flow->verbs.hash_fields |=
+ mlx5_flow_hashfields_adjust
+ (dev_flow, tunnel,
+ MLX5_IPV6_LAYER_TYPES,
+ MLX5_IPV6_IBV_RX_HASH);
+ item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 :
+ MLX5_FLOW_LAYER_OUTER_L3_IPV6;
break;
case RTE_FLOW_ITEM_TYPE_TCP:
- flow_verbs_translate_item_tcp(items, &item_flags,
- dev_flow);
+ flow_verbs_translate_item_tcp(dev_flow, items,
+ item_flags);
+ subpriority = MLX5_PRIORITY_MAP_L4;
+ dev_flow->verbs.hash_fields |=
+ mlx5_flow_hashfields_adjust
+ (dev_flow, tunnel, ETH_RSS_TCP,
+ (IBV_RX_HASH_SRC_PORT_TCP |
+ IBV_RX_HASH_DST_PORT_TCP));
+ item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP :
+ MLX5_FLOW_LAYER_OUTER_L4_TCP;
+ break;
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ flow_verbs_translate_item_udp(dev_flow, items,
+ item_flags);
+ subpriority = MLX5_PRIORITY_MAP_L4;
+ dev_flow->verbs.hash_fields |=
+ mlx5_flow_hashfields_adjust
+ (dev_flow, tunnel, ETH_RSS_UDP,
+ (IBV_RX_HASH_SRC_PORT_UDP |
+ IBV_RX_HASH_DST_PORT_UDP));
+ item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
+ MLX5_FLOW_LAYER_OUTER_L4_UDP;
break;
case RTE_FLOW_ITEM_TYPE_VXLAN:
- flow_verbs_translate_item_vxlan(items, &item_flags,
- dev_flow);
+ flow_verbs_translate_item_vxlan(dev_flow, items,
+ item_flags);
+ subpriority = MLX5_PRIORITY_MAP_L2;
+ item_flags |= MLX5_FLOW_LAYER_VXLAN;
break;
case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
- flow_verbs_translate_item_vxlan_gpe(items, &item_flags,
- dev_flow);
+ flow_verbs_translate_item_vxlan_gpe(dev_flow, items,
+ item_flags);
+ subpriority = MLX5_PRIORITY_MAP_L2;
+ item_flags |= MLX5_FLOW_LAYER_VXLAN_GPE;
break;
case RTE_FLOW_ITEM_TYPE_GRE:
- flow_verbs_translate_item_gre(items, &item_flags,
- dev_flow);
+ flow_verbs_translate_item_gre(dev_flow, items,
+ item_flags);
+ subpriority = MLX5_PRIORITY_MAP_L2;
+ item_flags |= MLX5_FLOW_LAYER_GRE;
break;
case RTE_FLOW_ITEM_TYPE_MPLS:
- flow_verbs_translate_item_mpls(items, &item_flags,
- dev_flow);
+ flow_verbs_translate_item_mpls(dev_flow, items,
+ item_flags);
+ subpriority = MLX5_PRIORITY_MAP_L2;
+ item_flags |= MLX5_FLOW_LAYER_MPLS;
break;
default:
return rte_flow_error_set(error, ENOTSUP,
@@ -1633,9 +1581,9 @@ flow_verbs_translate(struct rte_eth_dev *dev,
"item not supported");
}
}
+ dev_flow->layers = item_flags;
dev_flow->verbs.attr->priority =
- mlx5_flow_adjust_priority(dev, priority,
- dev_flow->verbs.attr->priority);
+ mlx5_flow_adjust_priority(dev, priority, subpriority);
return 0;
}
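The refactored translation above relies on the VERBS_SPEC_INNER() helper introduced at the top of this patch to unify setting IBV_FLOW_SPEC_INNER. A minimal standalone sketch of its behavior follows; the constant values here are illustrative stand-ins, not the real mlx5/verbs definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for the real MLX5_FLOW_LAYER_TUNNEL mask and
 * the IBV_FLOW_SPEC_INNER flag from the verbs headers. */
#define MLX5_FLOW_LAYER_TUNNEL (1u << 5)
#define IBV_FLOW_SPEC_INNER    0x100

/* Mirrors the VERBS_SPEC_INNER() macro added by this patch: it yields
 * IBV_FLOW_SPEC_INNER only when a tunnel layer has already been parsed
 * into item_flags, and 0 otherwise. */
#define VERBS_SPEC_INNER(item_flags) \
	(!!((item_flags) & MLX5_FLOW_LAYER_TUNNEL) ? IBV_FLOW_SPEC_INNER : 0)
```

Each flow_verbs_translate_item_*() can then OR this into the spec type instead of repeating the tunnel conditional at every call site.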
--
2.11.0
* [dpdk-dev] [PATCH 2/3] net/mlx5: fix Direct Verbs flow tunnel
From: Yongseok Koh @ 2018-11-02 21:08 UTC (permalink / raw)
To: Shahaf Shuler; +Cc: dev, Yongseok Koh, Ori Kam
1) Fix layer parsing
In translation of tunneled flows, dev_flow->layers must not be used to
check for a tunnel layer because it already contains all the layers parsed
by flow_drv_prepare(). The tunnel check is needed to distinguish outer
items from inner ones, so it must be based on dynamic parsing. With
dev_flow->layers on a tunneled flow, every item would be interpreted as
inner since dev_flow->layers already has all the items. Dynamic parsing
(item_flags) is added as no such code existed.
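The dynamic-parsing pattern this fix introduces can be sketched in isolation: item_flags starts empty and accumulates as each item is translated, so the tunnel bit only influences items that appear after the tunnel header. This is a minimal illustration, not the driver code; the flag names and values are hypothetical stand-ins for MLX5_FLOW_LAYER_*:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical layer bits mirroring the MLX5_FLOW_LAYER_* masks. */
#define LAYER_OUTER_L2 (1u << 0)
#define LAYER_INNER_L2 (1u << 1)
#define LAYER_VXLAN    (1u << 2)
#define LAYER_TUNNEL   LAYER_VXLAN

enum item_type { ITEM_ETH, ITEM_VXLAN };

static uint32_t
parse_items(const enum item_type *items, int n)
{
	uint32_t item_flags = 0;
	int i;

	for (i = 0; i < n; i++) {
		/* Decided from what has been parsed so far, not from a
		 * precomputed dev_flow->layers. */
		int tunnel = !!(item_flags & LAYER_TUNNEL);

		switch (items[i]) {
		case ITEM_ETH:
			item_flags |= tunnel ? LAYER_INNER_L2 :
					       LAYER_OUTER_L2;
			break;
		case ITEM_VXLAN:
			item_flags |= LAYER_VXLAN;
			break;
		}
	}
	return item_flags;
}
```

For a pattern like eth / vxlan / eth, the first ETH is classified as outer and the second as inner, which is exactly what dev_flow->layers cannot express when it is pre-filled with all layers.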
2) Refactoring code
- flow_dv_create_item() and flow_dv_create_action() are merged into
flow_dv_translate() for consistency with Verbs and *_validate().
Fixes: 246636411536 ("net/mlx5: fix flow tunnel handling")
Fixes: d02cb0691299 ("net/mlx5: add Direct Verbs translate actions")
Fixes: fc2c498ccb94 ("net/mlx5: add Direct Verbs translate items")
Cc: orika@mellanox.com
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 485 +++++++++++++++++++---------------------
1 file changed, 232 insertions(+), 253 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index c11ecd4c1f..44e2a920eb 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1602,248 +1602,6 @@ flow_dv_translate_item_meta(void *matcher, void *key,
}
}
-/**
- * Update the matcher and the value based the selected item.
- *
- * @param[in, out] matcher
- * Flow matcher.
- * @param[in, out] key
- * Flow matcher value.
- * @param[in] item
- * Flow pattern to translate.
- * @param[in, out] dev_flow
- * Pointer to the mlx5_flow.
- * @param[in] inner
- * Item is inner pattern.
- */
-static void
-flow_dv_create_item(void *matcher, void *key,
- const struct rte_flow_item *item,
- struct mlx5_flow *dev_flow,
- int inner)
-{
- struct mlx5_flow_dv_matcher *tmatcher = matcher;
-
- switch (item->type) {
- case RTE_FLOW_ITEM_TYPE_ETH:
- flow_dv_translate_item_eth(tmatcher->mask.buf, key, item,
- inner);
- tmatcher->priority = MLX5_PRIORITY_MAP_L2;
- break;
- case RTE_FLOW_ITEM_TYPE_VLAN:
- flow_dv_translate_item_vlan(tmatcher->mask.buf, key, item,
- inner);
- break;
- case RTE_FLOW_ITEM_TYPE_IPV4:
- flow_dv_translate_item_ipv4(tmatcher->mask.buf, key, item,
- inner);
- tmatcher->priority = MLX5_PRIORITY_MAP_L3;
- dev_flow->dv.hash_fields |=
- mlx5_flow_hashfields_adjust(dev_flow, inner,
- MLX5_IPV4_LAYER_TYPES,
- MLX5_IPV4_IBV_RX_HASH);
- break;
- case RTE_FLOW_ITEM_TYPE_IPV6:
- flow_dv_translate_item_ipv6(tmatcher->mask.buf, key, item,
- inner);
- tmatcher->priority = MLX5_PRIORITY_MAP_L3;
- dev_flow->dv.hash_fields |=
- mlx5_flow_hashfields_adjust(dev_flow, inner,
- MLX5_IPV6_LAYER_TYPES,
- MLX5_IPV6_IBV_RX_HASH);
- break;
- case RTE_FLOW_ITEM_TYPE_TCP:
- flow_dv_translate_item_tcp(tmatcher->mask.buf, key, item,
- inner);
- tmatcher->priority = MLX5_PRIORITY_MAP_L4;
- dev_flow->dv.hash_fields |=
- mlx5_flow_hashfields_adjust(dev_flow, inner,
- ETH_RSS_TCP,
- (IBV_RX_HASH_SRC_PORT_TCP |
- IBV_RX_HASH_DST_PORT_TCP));
- break;
- case RTE_FLOW_ITEM_TYPE_UDP:
- flow_dv_translate_item_udp(tmatcher->mask.buf, key, item,
- inner);
- tmatcher->priority = MLX5_PRIORITY_MAP_L4;
- dev_flow->verbs.hash_fields |=
- mlx5_flow_hashfields_adjust(dev_flow, inner,
- ETH_RSS_UDP,
- (IBV_RX_HASH_SRC_PORT_UDP |
- IBV_RX_HASH_DST_PORT_UDP));
- break;
- case RTE_FLOW_ITEM_TYPE_GRE:
- flow_dv_translate_item_gre(tmatcher->mask.buf, key, item,
- inner);
- break;
- case RTE_FLOW_ITEM_TYPE_NVGRE:
- flow_dv_translate_item_nvgre(tmatcher->mask.buf, key, item,
- inner);
- break;
- case RTE_FLOW_ITEM_TYPE_VXLAN:
- case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
- flow_dv_translate_item_vxlan(tmatcher->mask.buf, key, item,
- inner);
- break;
- case RTE_FLOW_ITEM_TYPE_META:
- flow_dv_translate_item_meta(tmatcher->mask.buf, key, item);
- break;
- default:
- break;
- }
-}
-
-/**
- * Store the requested actions in an array.
- *
- * @param[in] dev
- * Pointer to rte_eth_dev structure.
- * @param[in] action
- * Flow action to translate.
- * @param[in, out] dev_flow
- * Pointer to the mlx5_flow.
- * @param[in] attr
- * Pointer to the flow attributes.
- * @param[out] error
- * Pointer to the error structure.
- *
- * @return
- * 0 on success, a negative errno value otherwise and rte_errno is set.
- */
-static int
-flow_dv_create_action(struct rte_eth_dev *dev,
- const struct rte_flow_action *action,
- struct mlx5_flow *dev_flow,
- const struct rte_flow_attr *attr,
- struct rte_flow_error *error)
-{
- const struct rte_flow_action_queue *queue;
- const struct rte_flow_action_rss *rss;
- int actions_n = dev_flow->dv.actions_n;
- struct rte_flow *flow = dev_flow->flow;
- const struct rte_flow_action *action_ptr = action;
-
- switch (action->type) {
- case RTE_FLOW_ACTION_TYPE_VOID:
- break;
- case RTE_FLOW_ACTION_TYPE_FLAG:
- dev_flow->dv.actions[actions_n].type = MLX5DV_FLOW_ACTION_TAG;
- dev_flow->dv.actions[actions_n].tag_value =
- mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT);
- actions_n++;
- flow->actions |= MLX5_FLOW_ACTION_FLAG;
- break;
- case RTE_FLOW_ACTION_TYPE_MARK:
- dev_flow->dv.actions[actions_n].type = MLX5DV_FLOW_ACTION_TAG;
- dev_flow->dv.actions[actions_n].tag_value =
- mlx5_flow_mark_set
- (((const struct rte_flow_action_mark *)
- (action->conf))->id);
- flow->actions |= MLX5_FLOW_ACTION_MARK;
- actions_n++;
- break;
- case RTE_FLOW_ACTION_TYPE_DROP:
- dev_flow->dv.actions[actions_n].type = MLX5DV_FLOW_ACTION_DROP;
- flow->actions |= MLX5_FLOW_ACTION_DROP;
- break;
- case RTE_FLOW_ACTION_TYPE_QUEUE:
- queue = action->conf;
- flow->rss.queue_num = 1;
- (*flow->queue)[0] = queue->index;
- flow->actions |= MLX5_FLOW_ACTION_QUEUE;
- break;
- case RTE_FLOW_ACTION_TYPE_RSS:
- rss = action->conf;
- if (flow->queue)
- memcpy((*flow->queue), rss->queue,
- rss->queue_num * sizeof(uint16_t));
- flow->rss.queue_num = rss->queue_num;
- memcpy(flow->key, rss->key, MLX5_RSS_HASH_KEY_LEN);
- flow->rss.types = rss->types;
- flow->rss.level = rss->level;
- /* Added to array only in apply since we need the QP */
- flow->actions |= MLX5_FLOW_ACTION_RSS;
- break;
- case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
- case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
- if (flow_dv_create_action_l2_encap(dev, action,
- dev_flow, error))
- return -rte_errno;
- dev_flow->dv.actions[actions_n].type =
- MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
- dev_flow->dv.actions[actions_n].action =
- dev_flow->dv.encap_decap->verbs_action;
- flow->actions |= action->type ==
- RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP ?
- MLX5_FLOW_ACTION_VXLAN_ENCAP :
- MLX5_FLOW_ACTION_NVGRE_ENCAP;
- actions_n++;
- break;
- case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
- case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP:
- if (flow_dv_create_action_l2_decap(dev, dev_flow, error))
- return -rte_errno;
- dev_flow->dv.actions[actions_n].type =
- MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
- dev_flow->dv.actions[actions_n].action =
- dev_flow->dv.encap_decap->verbs_action;
- flow->actions |= action->type ==
- RTE_FLOW_ACTION_TYPE_VXLAN_DECAP ?
- MLX5_FLOW_ACTION_VXLAN_DECAP :
- MLX5_FLOW_ACTION_NVGRE_DECAP;
- actions_n++;
- break;
- case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
- /* Handle encap action with preceding decap */
- if (flow->actions & MLX5_FLOW_ACTION_RAW_DECAP) {
- if (flow_dv_create_action_raw_encap(dev, action,
- dev_flow,
- attr, error))
- return -rte_errno;
- dev_flow->dv.actions[actions_n].type =
- MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
- dev_flow->dv.actions[actions_n].action =
- dev_flow->dv.encap_decap->verbs_action;
- } else {
- /* Handle encap action without preceding decap */
- if (flow_dv_create_action_l2_encap(dev, action,
- dev_flow, error))
- return -rte_errno;
- dev_flow->dv.actions[actions_n].type =
- MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
- dev_flow->dv.actions[actions_n].action =
- dev_flow->dv.encap_decap->verbs_action;
- }
- flow->actions |= MLX5_FLOW_ACTION_RAW_ENCAP;
- actions_n++;
- break;
- case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
- /* Check if this decap action is followed by encap. */
- for (; action_ptr->type != RTE_FLOW_ACTION_TYPE_END &&
- action_ptr->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP;
- action_ptr++) {
- }
- /* Handle decap action only if it isn't followed by encap */
- if (action_ptr->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP) {
- if (flow_dv_create_action_l2_decap(dev, dev_flow,
- error))
- return -rte_errno;
- dev_flow->dv.actions[actions_n].type =
- MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
- dev_flow->dv.actions[actions_n].action =
- dev_flow->dv.encap_decap->verbs_action;
- actions_n++;
- }
- /* If decap is followed by encap, handle it at encap case. */
- flow->actions |= MLX5_FLOW_ACTION_RAW_DECAP;
- break;
- default:
- break;
- }
- dev_flow->dv.actions_n = actions_n;
- return 0;
-}
-
static uint32_t matcher_zero[MLX5_ST_SZ_DW(fte_match_param)] = { 0 };
#define HEADER_IS_ZERO(match_criteria, headers) \
@@ -1985,34 +1743,255 @@ flow_dv_translate(struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
struct priv *priv = dev->data->dev_private;
+ struct rte_flow *flow = dev_flow->flow;
+ uint64_t item_flags = 0;
+ uint64_t action_flags = 0;
uint64_t priority = attr->priority;
struct mlx5_flow_dv_matcher matcher = {
.mask = {
.size = sizeof(matcher.mask.buf),
},
};
- void *match_value = dev_flow->dv.value.buf;
- int tunnel = 0;
+ int actions_n = 0;
if (priority == MLX5_FLOW_PRIO_RSVD)
priority = priv->config.flow_prio - 1;
for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
- tunnel = !!(dev_flow->layers & MLX5_FLOW_LAYER_TUNNEL);
- flow_dv_create_item(&matcher, match_value, items, dev_flow,
- tunnel);
+ int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
+ void *match_mask = matcher.mask.buf;
+ void *match_value = dev_flow->dv.value.buf;
+
+ switch (items->type) {
+ case RTE_FLOW_ITEM_TYPE_ETH:
+ flow_dv_translate_item_eth(match_mask, match_value,
+ items, tunnel);
+ matcher.priority = MLX5_PRIORITY_MAP_L2;
+ item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L2 :
+ MLX5_FLOW_LAYER_OUTER_L2;
+ break;
+ case RTE_FLOW_ITEM_TYPE_VLAN:
+ flow_dv_translate_item_vlan(match_mask, match_value,
+ items, tunnel);
+ item_flags |= tunnel ? (MLX5_FLOW_LAYER_INNER_L2 |
+ MLX5_FLOW_LAYER_INNER_VLAN) :
+ (MLX5_FLOW_LAYER_OUTER_L2 |
+ MLX5_FLOW_LAYER_OUTER_VLAN);
+ break;
+ case RTE_FLOW_ITEM_TYPE_IPV4:
+ flow_dv_translate_item_ipv4(match_mask, match_value,
+ items, tunnel);
+ matcher.priority = MLX5_PRIORITY_MAP_L3;
+ dev_flow->dv.hash_fields |=
+ mlx5_flow_hashfields_adjust
+ (dev_flow, tunnel,
+ MLX5_IPV4_LAYER_TYPES,
+ MLX5_IPV4_IBV_RX_HASH);
+ item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
+ MLX5_FLOW_LAYER_OUTER_L3_IPV4;
+ break;
+ case RTE_FLOW_ITEM_TYPE_IPV6:
+ flow_dv_translate_item_ipv6(match_mask, match_value,
+ items, tunnel);
+ matcher.priority = MLX5_PRIORITY_MAP_L3;
+ dev_flow->dv.hash_fields |=
+ mlx5_flow_hashfields_adjust
+ (dev_flow, tunnel,
+ MLX5_IPV6_LAYER_TYPES,
+ MLX5_IPV6_IBV_RX_HASH);
+ item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 :
+ MLX5_FLOW_LAYER_OUTER_L3_IPV6;
+ break;
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ flow_dv_translate_item_tcp(match_mask, match_value,
+ items, tunnel);
+ matcher.priority = MLX5_PRIORITY_MAP_L4;
+ dev_flow->dv.hash_fields |=
+ mlx5_flow_hashfields_adjust
+ (dev_flow, tunnel, ETH_RSS_TCP,
+ IBV_RX_HASH_SRC_PORT_TCP |
+ IBV_RX_HASH_DST_PORT_TCP);
+ item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP :
+ MLX5_FLOW_LAYER_OUTER_L4_TCP;
+ break;
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ flow_dv_translate_item_udp(match_mask, match_value,
+ items, tunnel);
+ matcher.priority = MLX5_PRIORITY_MAP_L4;
+ dev_flow->dv.hash_fields |=
+ mlx5_flow_hashfields_adjust
+ (dev_flow, tunnel, ETH_RSS_UDP,
+ IBV_RX_HASH_SRC_PORT_UDP |
+ IBV_RX_HASH_DST_PORT_UDP);
+ item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
+ MLX5_FLOW_LAYER_OUTER_L4_UDP;
+ break;
+ case RTE_FLOW_ITEM_TYPE_GRE:
+ flow_dv_translate_item_gre(match_mask, match_value,
+ items, tunnel);
+ item_flags |= MLX5_FLOW_LAYER_GRE;
+ break;
+ case RTE_FLOW_ITEM_TYPE_NVGRE:
+ flow_dv_translate_item_nvgre(match_mask, match_value,
+ items, tunnel);
+ item_flags |= MLX5_FLOW_LAYER_GRE;
+ break;
+ case RTE_FLOW_ITEM_TYPE_VXLAN:
+ flow_dv_translate_item_vxlan(match_mask, match_value,
+ items, tunnel);
+ item_flags |= MLX5_FLOW_LAYER_VXLAN;
+ break;
+ case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
+ flow_dv_translate_item_vxlan(match_mask, match_value,
+ items, tunnel);
+ item_flags |= MLX5_FLOW_LAYER_VXLAN_GPE;
+ break;
+ case RTE_FLOW_ITEM_TYPE_META:
+ flow_dv_translate_item_meta(match_mask, match_value,
+ items);
+ item_flags |= MLX5_FLOW_ITEM_METADATA;
+ break;
+ default:
+ break;
+ }
}
+ dev_flow->layers = item_flags;
+ /* Register matcher. */
matcher.crc = rte_raw_cksum((const void *)matcher.mask.buf,
- matcher.mask.size);
- if (priority == MLX5_FLOW_PRIO_RSVD)
- priority = priv->config.flow_prio - 1;
+ matcher.mask.size);
matcher.priority = mlx5_flow_adjust_priority(dev, priority,
matcher.priority);
matcher.egress = attr->egress;
if (flow_dv_matcher_register(dev, &matcher, dev_flow, error))
return -rte_errno;
- for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++)
- if (flow_dv_create_action(dev, actions, dev_flow, attr, error))
- return -rte_errno;
+ for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
+ const struct rte_flow_action_queue *queue;
+ const struct rte_flow_action_rss *rss;
+ const struct rte_flow_action *action = actions;
+
+ switch (actions->type) {
+ case RTE_FLOW_ACTION_TYPE_VOID:
+ break;
+ case RTE_FLOW_ACTION_TYPE_FLAG:
+ dev_flow->dv.actions[actions_n].type =
+ MLX5DV_FLOW_ACTION_TAG;
+ dev_flow->dv.actions[actions_n].tag_value =
+ mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT);
+ actions_n++;
+ action_flags |= MLX5_FLOW_ACTION_FLAG;
+ break;
+ case RTE_FLOW_ACTION_TYPE_MARK:
+ dev_flow->dv.actions[actions_n].type =
+ MLX5DV_FLOW_ACTION_TAG;
+ dev_flow->dv.actions[actions_n].tag_value =
+ mlx5_flow_mark_set
+ (((const struct rte_flow_action_mark *)
+ (actions->conf))->id);
+ actions_n++;
+ action_flags |= MLX5_FLOW_ACTION_MARK;
+ break;
+ case RTE_FLOW_ACTION_TYPE_DROP:
+ dev_flow->dv.actions[actions_n].type =
+ MLX5DV_FLOW_ACTION_DROP;
+ action_flags |= MLX5_FLOW_ACTION_DROP;
+ break;
+ case RTE_FLOW_ACTION_TYPE_QUEUE:
+ queue = actions->conf;
+ flow->rss.queue_num = 1;
+ (*flow->queue)[0] = queue->index;
+ action_flags |= MLX5_FLOW_ACTION_QUEUE;
+ break;
+ case RTE_FLOW_ACTION_TYPE_RSS:
+ rss = actions->conf;
+ if (flow->queue)
+ memcpy((*flow->queue), rss->queue,
+ rss->queue_num * sizeof(uint16_t));
+ flow->rss.queue_num = rss->queue_num;
+ memcpy(flow->key, rss->key, MLX5_RSS_HASH_KEY_LEN);
+ flow->rss.types = rss->types;
+ flow->rss.level = rss->level;
+ action_flags |= MLX5_FLOW_ACTION_RSS;
+ break;
+ case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
+ case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
+ if (flow_dv_create_action_l2_encap(dev, actions,
+ dev_flow, error))
+ return -rte_errno;
+ dev_flow->dv.actions[actions_n].type =
+ MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
+ dev_flow->dv.actions[actions_n].action =
+ dev_flow->dv.encap_decap->verbs_action;
+ actions_n++;
+ action_flags |= actions->type ==
+ RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP ?
+ MLX5_FLOW_ACTION_VXLAN_ENCAP :
+ MLX5_FLOW_ACTION_NVGRE_ENCAP;
+ break;
+ case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
+ case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP:
+ if (flow_dv_create_action_l2_decap(dev, dev_flow,
+ error))
+ return -rte_errno;
+ dev_flow->dv.actions[actions_n].type =
+ MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
+ dev_flow->dv.actions[actions_n].action =
+ dev_flow->dv.encap_decap->verbs_action;
+ actions_n++;
+ action_flags |= actions->type ==
+ RTE_FLOW_ACTION_TYPE_VXLAN_DECAP ?
+ MLX5_FLOW_ACTION_VXLAN_DECAP :
+ MLX5_FLOW_ACTION_NVGRE_DECAP;
+ break;
+ case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+ /* Handle encap with preceding decap. */
+ if (action_flags & MLX5_FLOW_ACTION_RAW_DECAP) {
+ if (flow_dv_create_action_raw_encap
+ (dev, actions, dev_flow, attr, error))
+ return -rte_errno;
+ dev_flow->dv.actions[actions_n].type =
+ MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
+ dev_flow->dv.actions[actions_n].action =
+ dev_flow->dv.encap_decap->verbs_action;
+ } else {
+ /* Handle encap without preceding decap. */
+ if (flow_dv_create_action_l2_encap(dev, actions,
+ dev_flow,
+ error))
+ return -rte_errno;
+ dev_flow->dv.actions[actions_n].type =
+ MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
+ dev_flow->dv.actions[actions_n].action =
+ dev_flow->dv.encap_decap->verbs_action;
+ }
+ actions_n++;
+ action_flags |= MLX5_FLOW_ACTION_RAW_ENCAP;
+ break;
+ case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
+ /* Check if this decap is followed by encap. */
+ for (; action->type != RTE_FLOW_ACTION_TYPE_END &&
+ action->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP;
+ action++) {
+ }
+ /* Handle decap only if it isn't followed by encap. */
+ if (action->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP) {
+ if (flow_dv_create_action_l2_decap(dev,
+ dev_flow,
+ error))
+ return -rte_errno;
+ dev_flow->dv.actions[actions_n].type =
+ MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
+ dev_flow->dv.actions[actions_n].action =
+ dev_flow->dv.encap_decap->verbs_action;
+ actions_n++;
+ }
+ /* If decap is followed by encap, handle it at encap. */
+ action_flags |= MLX5_FLOW_ACTION_RAW_DECAP;
+ break;
+ default:
+ break;
+ }
+ }
+ dev_flow->dv.actions_n = actions_n;
+ flow->actions = action_flags;
return 0;
}
--
2.11.0
* [dpdk-dev] [PATCH 3/3] net/mlx5: remove flags setting from flow preparation
From: Yongseok Koh @ 2018-11-02 21:08 UTC (permalink / raw)
To: Shahaf Shuler; +Cc: dev, Yongseok Koh
Even though flow_drv_prepare() takes item_flags and action_flags to be
filled in, these are unused and get overwritten by the parsing done in
flow_drv_translate(). There is no reason to keep and fill the flags.
Appropriate notes are added to the documentation of flow_drv_prepare() and
flow_drv_translate().
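The shape of the change can be sketched apart from the driver: prepare() only sizes and allocates the device flow, while the parsing result is stored once, in translate(), the only place that computes it. All names below are illustrative stand-ins for the mlx5 driver API, not the real signatures:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

struct dev_flow {
	uint64_t layers;	/* filled by translate(), never by prepare() */
};

/* Before this patch: prepare(attr, items, actions, &item_flags,
 * &action_flags, error). After: the unused out-parameters are gone. */
static struct dev_flow *
prepare(void)
{
	return calloc(1, sizeof(struct dev_flow));
}

static void
translate(struct dev_flow *df, uint64_t parsed_layers)
{
	/* Sole writer of the parsing result. */
	df->layers = parsed_layers;
}
```

Dropping the out-parameters removes the duplicated parsing in prepare() and the assert that the two passes agreed.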
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.c | 38 ++++++++++++--------------
drivers/net/mlx5/mlx5_flow.h | 3 +--
drivers/net/mlx5/mlx5_flow_dv.c | 6 -----
drivers/net/mlx5/mlx5_flow_tcf.c | 55 +++++---------------------------------
drivers/net/mlx5/mlx5_flow_verbs.c | 52 +++--------------------------------
5 files changed, 29 insertions(+), 125 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 107a4f02f8..fae3bc92dd 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1657,8 +1657,6 @@ static struct mlx5_flow *
flow_null_prepare(const struct rte_flow_attr *attr __rte_unused,
const struct rte_flow_item items[] __rte_unused,
const struct rte_flow_action actions[] __rte_unused,
- uint64_t *item_flags __rte_unused,
- uint64_t *action_flags __rte_unused,
struct rte_flow_error *error __rte_unused)
{
rte_errno = ENOTSUP;
@@ -1786,16 +1784,19 @@ flow_drv_validate(struct rte_eth_dev *dev,
* calculates the size of memory required for device flow, allocates the memory,
* initializes the device flow and returns the pointer.
*
+ * @note
+ * This function initializes the device flow structure such as dv, tcf or
+ * verbs in struct mlx5_flow. However, it is the caller's responsibility to
+ * initialize the rest. For example, adding the returned device flow to the
+ * flow->dev_flows list and setting a back reference to the flow should be
+ * done outside of this function. The layers field is not filled either.
+ *
* @param[in] attr
* Pointer to the flow attributes.
* @param[in] items
* Pointer to the list of items.
* @param[in] actions
* Pointer to the list of actions.
- * @param[out] item_flags
- * Pointer to bit mask of all items detected.
- * @param[out] action_flags
- * Pointer to bit mask of all actions detected.
* @param[out] error
* Pointer to the error structure.
*
@@ -1803,12 +1804,10 @@ flow_drv_validate(struct rte_eth_dev *dev,
* Pointer to device flow on success, otherwise NULL and rte_errno is set.
*/
static inline struct mlx5_flow *
-flow_drv_prepare(struct rte_flow *flow,
+flow_drv_prepare(const struct rte_flow *flow,
const struct rte_flow_attr *attr,
const struct rte_flow_item items[],
const struct rte_flow_action actions[],
- uint64_t *item_flags,
- uint64_t *action_flags,
struct rte_flow_error *error)
{
const struct mlx5_flow_driver_ops *fops;
@@ -1816,8 +1815,7 @@ flow_drv_prepare(struct rte_flow *flow,
assert(type > MLX5_FLOW_TYPE_MIN && type < MLX5_FLOW_TYPE_MAX);
fops = flow_get_drv_ops(type);
- return fops->prepare(attr, items, actions, item_flags, action_flags,
- error);
+ return fops->prepare(attr, items, actions, error);
}
/**
@@ -1826,6 +1824,12 @@ flow_drv_prepare(struct rte_flow *flow,
* translates a generic flow into a driver flow. flow_drv_prepare() must
* precede.
*
+ * @note
+ * dev_flow->layers may be filled as a result of parsing during translation
+ * if needed by flow_drv_apply(). dev_flow->flow->actions can also be filled
+ * if necessary. As a flow can have multiple dev_flows due to RSS flow
+ * expansion, flow->actions may be overwritten even though all the expanded
+ * dev_flows have the same actions.
*
* @param[in] dev
* Pointer to the rte dev structure.
@@ -1889,7 +1893,7 @@ flow_drv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
* Flow driver remove API. This abstracts calling driver specific functions.
* Parent flow (rte_flow) should have driver type (drv_type). It removes a flow
* on device. All the resources of the flow should be freed by calling
- * flow_dv_destroy().
+ * flow_drv_destroy().
*
* @param[in] dev
* Pointer to Ethernet device.
@@ -2020,8 +2024,6 @@ flow_list_create(struct rte_eth_dev *dev, struct mlx5_flows *list,
{
struct rte_flow *flow = NULL;
struct mlx5_flow *dev_flow;
- uint64_t action_flags = 0;
- uint64_t item_flags = 0;
const struct rte_flow_action_rss *rss;
union {
struct rte_flow_expand_rss buf;
@@ -2064,16 +2066,10 @@ flow_list_create(struct rte_eth_dev *dev, struct mlx5_flows *list,
}
for (i = 0; i < buf->entries; ++i) {
dev_flow = flow_drv_prepare(flow, attr, buf->entry[i].pattern,
- actions, &item_flags, &action_flags,
- error);
+ actions, error);
if (!dev_flow)
goto error;
dev_flow->flow = flow;
- dev_flow->layers = item_flags;
- /* Store actions once as expanded flows have same actions. */
- if (i == 0)
- flow->actions = action_flags;
- assert(flow->actions == action_flags);
LIST_INSERT_HEAD(&flow->dev_flows, dev_flow, next);
ret = flow_drv_translate(dev, dev_flow, attr,
buf->entry[i].pattern,
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index fadde552c2..f976bff427 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -293,8 +293,7 @@ typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev,
struct rte_flow_error *error);
typedef struct mlx5_flow *(*mlx5_flow_prepare_t)
(const struct rte_flow_attr *attr, const struct rte_flow_item items[],
- const struct rte_flow_action actions[], uint64_t *item_flags,
- uint64_t *action_flags, struct rte_flow_error *error);
+ const struct rte_flow_action actions[], struct rte_flow_error *error);
typedef int (*mlx5_flow_translate_t)(struct rte_eth_dev *dev,
struct mlx5_flow *dev_flow,
const struct rte_flow_attr *attr,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 44e2a920eb..0fb791eafa 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1014,10 +1014,6 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
* Pointer to the list of items.
* @param[in] actions
* Pointer to the list of actions.
- * @param[out] item_flags
- * Pointer to bit mask of all items detected.
- * @param[out] action_flags
- * Pointer to bit mask of all actions detected.
* @param[out] error
* Pointer to the error structure.
*
@@ -1029,8 +1025,6 @@ static struct mlx5_flow *
flow_dv_prepare(const struct rte_flow_attr *attr __rte_unused,
const struct rte_flow_item items[] __rte_unused,
const struct rte_flow_action actions[] __rte_unused,
- uint64_t *item_flags __rte_unused,
- uint64_t *action_flags __rte_unused,
struct rte_flow_error *error)
{
uint32_t size = sizeof(struct mlx5_flow);
diff --git a/drivers/net/mlx5/mlx5_flow_tcf.c b/drivers/net/mlx5/mlx5_flow_tcf.c
index 719fb10632..483e490843 100644
--- a/drivers/net/mlx5/mlx5_flow_tcf.c
+++ b/drivers/net/mlx5/mlx5_flow_tcf.c
@@ -664,20 +664,15 @@ flow_tcf_create_pedit_mnl_msg(struct nlmsghdr *nl,
*
* @param[in,out] actions
* actions specification.
- * @param[in,out] action_flags
- * actions flags
- * @param[in,out] size
- * accumulated size
+ *
* @return
* Max memory size of one TC-pedit action
*/
static int
-flow_tcf_get_pedit_actions_size(const struct rte_flow_action **actions,
- uint64_t *action_flags)
+flow_tcf_get_pedit_actions_size(const struct rte_flow_action **actions)
{
int pedit_size = 0;
int keys = 0;
- uint64_t flags = 0;
pedit_size += SZ_NLATTR_NEST + /* na_act_index. */
SZ_NLATTR_STRZ_OF("pedit") +
@@ -686,45 +681,35 @@ flow_tcf_get_pedit_actions_size(const struct rte_flow_action **actions,
switch ((*actions)->type) {
case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC:
keys += NUM_OF_PEDIT_KEYS(IPV4_ADDR_LEN);
- flags |= MLX5_FLOW_ACTION_SET_IPV4_SRC;
break;
case RTE_FLOW_ACTION_TYPE_SET_IPV4_DST:
keys += NUM_OF_PEDIT_KEYS(IPV4_ADDR_LEN);
- flags |= MLX5_FLOW_ACTION_SET_IPV4_DST;
break;
case RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC:
keys += NUM_OF_PEDIT_KEYS(IPV6_ADDR_LEN);
- flags |= MLX5_FLOW_ACTION_SET_IPV6_SRC;
break;
case RTE_FLOW_ACTION_TYPE_SET_IPV6_DST:
keys += NUM_OF_PEDIT_KEYS(IPV6_ADDR_LEN);
- flags |= MLX5_FLOW_ACTION_SET_IPV6_DST;
break;
case RTE_FLOW_ACTION_TYPE_SET_TP_SRC:
/* TCP is as same as UDP */
keys += NUM_OF_PEDIT_KEYS(TP_PORT_LEN);
- flags |= MLX5_FLOW_ACTION_SET_TP_SRC;
break;
case RTE_FLOW_ACTION_TYPE_SET_TP_DST:
/* TCP is as same as UDP */
keys += NUM_OF_PEDIT_KEYS(TP_PORT_LEN);
- flags |= MLX5_FLOW_ACTION_SET_TP_DST;
break;
case RTE_FLOW_ACTION_TYPE_SET_TTL:
keys += NUM_OF_PEDIT_KEYS(TTL_LEN);
- flags |= MLX5_FLOW_ACTION_SET_TTL;
break;
case RTE_FLOW_ACTION_TYPE_DEC_TTL:
keys += NUM_OF_PEDIT_KEYS(TTL_LEN);
- flags |= MLX5_FLOW_ACTION_DEC_TTL;
break;
case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC:
keys += NUM_OF_PEDIT_KEYS(ETHER_ADDR_LEN);
- flags |= MLX5_FLOW_ACTION_SET_MAC_SRC;
break;
case RTE_FLOW_ACTION_TYPE_SET_MAC_DST:
keys += NUM_OF_PEDIT_KEYS(ETHER_ADDR_LEN);
- flags |= MLX5_FLOW_ACTION_SET_MAC_DST;
break;
default:
goto get_pedit_action_size_done;
@@ -740,7 +725,6 @@ flow_tcf_get_pedit_actions_size(const struct rte_flow_action **actions,
/* TCA_PEDIT_KEY_EX + HTYPE + CMD */
(SZ_NLATTR_NEST + SZ_NLATTR_DATA_OF(2) +
SZ_NLATTR_DATA_OF(2));
- (*action_flags) |= flags;
(*actions)--;
return pedit_size;
}
@@ -1415,11 +1399,9 @@ flow_tcf_validate(struct rte_eth_dev *dev,
*/
static int
flow_tcf_get_items_and_size(const struct rte_flow_attr *attr,
- const struct rte_flow_item items[],
- uint64_t *item_flags)
+ const struct rte_flow_item items[])
{
int size = 0;
- uint64_t flags = 0;
size += SZ_NLATTR_STRZ_OF("flower") +
SZ_NLATTR_NEST + /* TCA_OPTIONS. */
@@ -1436,7 +1418,6 @@ flow_tcf_get_items_and_size(const struct rte_flow_attr *attr,
size += SZ_NLATTR_TYPE_OF(uint16_t) + /* Ether type. */
SZ_NLATTR_DATA_OF(ETHER_ADDR_LEN) * 4;
/* dst/src MAC addr and mask. */
- flags |= MLX5_FLOW_LAYER_OUTER_L2;
break;
case RTE_FLOW_ITEM_TYPE_VLAN:
size += SZ_NLATTR_TYPE_OF(uint16_t) + /* Ether type. */
@@ -1444,33 +1425,28 @@ flow_tcf_get_items_and_size(const struct rte_flow_attr *attr,
/* VLAN Ether type. */
SZ_NLATTR_TYPE_OF(uint8_t) + /* VLAN prio. */
SZ_NLATTR_TYPE_OF(uint16_t); /* VLAN ID. */
- flags |= MLX5_FLOW_LAYER_OUTER_VLAN;
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
size += SZ_NLATTR_TYPE_OF(uint16_t) + /* Ether type. */
SZ_NLATTR_TYPE_OF(uint8_t) + /* IP proto. */
SZ_NLATTR_TYPE_OF(uint32_t) * 4;
/* dst/src IP addr and mask. */
- flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV4;
break;
case RTE_FLOW_ITEM_TYPE_IPV6:
size += SZ_NLATTR_TYPE_OF(uint16_t) + /* Ether type. */
SZ_NLATTR_TYPE_OF(uint8_t) + /* IP proto. */
SZ_NLATTR_TYPE_OF(IPV6_ADDR_LEN) * 4;
/* dst/src IP addr and mask. */
- flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV6;
break;
case RTE_FLOW_ITEM_TYPE_UDP:
size += SZ_NLATTR_TYPE_OF(uint8_t) + /* IP proto. */
SZ_NLATTR_TYPE_OF(uint16_t) * 4;
/* dst/src port and mask. */
- flags |= MLX5_FLOW_LAYER_OUTER_L4_UDP;
break;
case RTE_FLOW_ITEM_TYPE_TCP:
size += SZ_NLATTR_TYPE_OF(uint8_t) + /* IP proto. */
SZ_NLATTR_TYPE_OF(uint16_t) * 4;
/* dst/src port and mask. */
- flags |= MLX5_FLOW_LAYER_OUTER_L4_TCP;
break;
default:
DRV_LOG(WARNING,
@@ -1480,7 +1456,6 @@ flow_tcf_get_items_and_size(const struct rte_flow_attr *attr,
break;
}
}
- *item_flags = flags;
return size;
}
@@ -1497,11 +1472,9 @@ flow_tcf_get_items_and_size(const struct rte_flow_attr *attr,
* Maximum size of memory for actions.
*/
static int
-flow_tcf_get_actions_and_size(const struct rte_flow_action actions[],
- uint64_t *action_flags)
+flow_tcf_get_actions_and_size(const struct rte_flow_action actions[])
{
int size = 0;
- uint64_t flags = 0;
size += SZ_NLATTR_NEST; /* TCA_FLOWER_ACT. */
for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
@@ -1513,35 +1486,28 @@ flow_tcf_get_actions_and_size(const struct rte_flow_action actions[],
SZ_NLATTR_STRZ_OF("mirred") +
SZ_NLATTR_NEST + /* TCA_ACT_OPTIONS. */
SZ_NLATTR_TYPE_OF(struct tc_mirred);
- flags |= MLX5_FLOW_ACTION_PORT_ID;
break;
case RTE_FLOW_ACTION_TYPE_JUMP:
size += SZ_NLATTR_NEST + /* na_act_index. */
SZ_NLATTR_STRZ_OF("gact") +
SZ_NLATTR_NEST + /* TCA_ACT_OPTIONS. */
SZ_NLATTR_TYPE_OF(struct tc_gact);
- flags |= MLX5_FLOW_ACTION_JUMP;
break;
case RTE_FLOW_ACTION_TYPE_DROP:
size += SZ_NLATTR_NEST + /* na_act_index. */
SZ_NLATTR_STRZ_OF("gact") +
SZ_NLATTR_NEST + /* TCA_ACT_OPTIONS. */
SZ_NLATTR_TYPE_OF(struct tc_gact);
- flags |= MLX5_FLOW_ACTION_DROP;
break;
case RTE_FLOW_ACTION_TYPE_COUNT:
break;
case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN:
- flags |= MLX5_FLOW_ACTION_OF_POP_VLAN;
goto action_of_vlan;
case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN:
- flags |= MLX5_FLOW_ACTION_OF_PUSH_VLAN;
goto action_of_vlan;
case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID:
- flags |= MLX5_FLOW_ACTION_OF_SET_VLAN_VID;
goto action_of_vlan;
case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP:
- flags |= MLX5_FLOW_ACTION_OF_SET_VLAN_PCP;
goto action_of_vlan;
action_of_vlan:
size += SZ_NLATTR_NEST + /* na_act_index. */
@@ -1563,8 +1529,7 @@ flow_tcf_get_actions_and_size(const struct rte_flow_action actions[],
case RTE_FLOW_ACTION_TYPE_DEC_TTL:
case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC:
case RTE_FLOW_ACTION_TYPE_SET_MAC_DST:
- size += flow_tcf_get_pedit_actions_size(&actions,
- &flags);
+ size += flow_tcf_get_pedit_actions_size(&actions);
break;
default:
DRV_LOG(WARNING,
@@ -1574,7 +1539,6 @@ flow_tcf_get_actions_and_size(const struct rte_flow_action actions[],
break;
}
}
- *action_flags = flags;
return size;
}
@@ -1610,10 +1574,6 @@ flow_tcf_nl_brand(struct nlmsghdr *nlh, uint32_t handle)
* Pointer to the list of items.
* @param[in] actions
* Pointer to the list of actions.
- * @param[out] item_flags
- * Pointer to bit mask of all items detected.
- * @param[out] action_flags
- * Pointer to bit mask of all actions detected.
* @param[out] error
* Pointer to the error structure.
*
@@ -1625,7 +1585,6 @@ static struct mlx5_flow *
flow_tcf_prepare(const struct rte_flow_attr *attr,
const struct rte_flow_item items[],
const struct rte_flow_action actions[],
- uint64_t *item_flags, uint64_t *action_flags,
struct rte_flow_error *error)
{
size_t size = sizeof(struct mlx5_flow) +
@@ -1635,8 +1594,8 @@ flow_tcf_prepare(const struct rte_flow_attr *attr,
struct nlmsghdr *nlh;
struct tcmsg *tcm;
- size += flow_tcf_get_items_and_size(attr, items, item_flags);
- size += flow_tcf_get_actions_and_size(actions, action_flags);
+ size += flow_tcf_get_items_and_size(attr, items);
+ size += flow_tcf_get_actions_and_size(actions);
dev_flow = rte_zmalloc(__func__, size, MNL_ALIGNTO);
if (!dev_flow) {
rte_flow_error_set(error, ENOMEM,
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index ab58c04db5..453c89e347 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1217,11 +1217,9 @@ flow_verbs_validate(struct rte_eth_dev *dev,
* The size of the memory needed for all actions.
*/
static int
-flow_verbs_get_actions_and_size(const struct rte_flow_action actions[],
- uint64_t *action_flags)
+flow_verbs_get_actions_and_size(const struct rte_flow_action actions[])
{
int size = 0;
- uint64_t detected_actions = 0;
for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
switch (actions->type) {
@@ -1229,34 +1227,27 @@ flow_verbs_get_actions_and_size(const struct rte_flow_action actions[],
break;
case RTE_FLOW_ACTION_TYPE_FLAG:
size += sizeof(struct ibv_flow_spec_action_tag);
- detected_actions |= MLX5_FLOW_ACTION_FLAG;
break;
case RTE_FLOW_ACTION_TYPE_MARK:
size += sizeof(struct ibv_flow_spec_action_tag);
- detected_actions |= MLX5_FLOW_ACTION_MARK;
break;
case RTE_FLOW_ACTION_TYPE_DROP:
size += sizeof(struct ibv_flow_spec_action_drop);
- detected_actions |= MLX5_FLOW_ACTION_DROP;
break;
case RTE_FLOW_ACTION_TYPE_QUEUE:
- detected_actions |= MLX5_FLOW_ACTION_QUEUE;
break;
case RTE_FLOW_ACTION_TYPE_RSS:
- detected_actions |= MLX5_FLOW_ACTION_RSS;
break;
case RTE_FLOW_ACTION_TYPE_COUNT:
#if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42) || \
defined(HAVE_IBV_DEVICE_COUNTERS_SET_V45)
size += sizeof(struct ibv_flow_spec_counter_action);
#endif
- detected_actions |= MLX5_FLOW_ACTION_COUNT;
break;
default:
break;
}
}
- *action_flags = detected_actions;
return size;
}
@@ -1274,83 +1265,54 @@ flow_verbs_get_actions_and_size(const struct rte_flow_action actions[],
* The size of the memory needed for all items.
*/
static int
-flow_verbs_get_items_and_size(const struct rte_flow_item items[],
- uint64_t *item_flags)
+flow_verbs_get_items_and_size(const struct rte_flow_item items[])
{
int size = 0;
- uint64_t detected_items = 0;
for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
- int tunnel = !!(detected_items & MLX5_FLOW_LAYER_TUNNEL);
-
switch (items->type) {
case RTE_FLOW_ITEM_TYPE_VOID:
break;
case RTE_FLOW_ITEM_TYPE_ETH:
size += sizeof(struct ibv_flow_spec_eth);
- detected_items |= tunnel ? MLX5_FLOW_LAYER_INNER_L2 :
- MLX5_FLOW_LAYER_OUTER_L2;
break;
case RTE_FLOW_ITEM_TYPE_VLAN:
size += sizeof(struct ibv_flow_spec_eth);
- detected_items |=
- tunnel ? (MLX5_FLOW_LAYER_INNER_L2 |
- MLX5_FLOW_LAYER_INNER_VLAN) :
- (MLX5_FLOW_LAYER_OUTER_L2 |
- MLX5_FLOW_LAYER_OUTER_VLAN);
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
size += sizeof(struct ibv_flow_spec_ipv4_ext);
- detected_items |= tunnel ?
- MLX5_FLOW_LAYER_INNER_L3_IPV4 :
- MLX5_FLOW_LAYER_OUTER_L3_IPV4;
break;
case RTE_FLOW_ITEM_TYPE_IPV6:
size += sizeof(struct ibv_flow_spec_ipv6);
- detected_items |= tunnel ?
- MLX5_FLOW_LAYER_INNER_L3_IPV6 :
- MLX5_FLOW_LAYER_OUTER_L3_IPV6;
break;
case RTE_FLOW_ITEM_TYPE_UDP:
size += sizeof(struct ibv_flow_spec_tcp_udp);
- detected_items |= tunnel ?
- MLX5_FLOW_LAYER_INNER_L4_UDP :
- MLX5_FLOW_LAYER_OUTER_L4_UDP;
break;
case RTE_FLOW_ITEM_TYPE_TCP:
size += sizeof(struct ibv_flow_spec_tcp_udp);
- detected_items |= tunnel ?
- MLX5_FLOW_LAYER_INNER_L4_TCP :
- MLX5_FLOW_LAYER_OUTER_L4_TCP;
break;
case RTE_FLOW_ITEM_TYPE_VXLAN:
size += sizeof(struct ibv_flow_spec_tunnel);
- detected_items |= MLX5_FLOW_LAYER_VXLAN;
break;
case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
size += sizeof(struct ibv_flow_spec_tunnel);
- detected_items |= MLX5_FLOW_LAYER_VXLAN_GPE;
break;
#ifdef HAVE_IBV_DEVICE_MPLS_SUPPORT
case RTE_FLOW_ITEM_TYPE_GRE:
size += sizeof(struct ibv_flow_spec_gre);
- detected_items |= MLX5_FLOW_LAYER_GRE;
break;
case RTE_FLOW_ITEM_TYPE_MPLS:
size += sizeof(struct ibv_flow_spec_mpls);
- detected_items |= MLX5_FLOW_LAYER_MPLS;
break;
#else
case RTE_FLOW_ITEM_TYPE_GRE:
size += sizeof(struct ibv_flow_spec_tunnel);
- detected_items |= MLX5_FLOW_LAYER_TUNNEL;
break;
#endif
default:
break;
}
}
- *item_flags = detected_items;
return size;
}
@@ -1365,10 +1327,6 @@ flow_verbs_get_items_and_size(const struct rte_flow_item items[],
* Pointer to the list of items.
* @param[in] actions
* Pointer to the list of actions.
- * @param[out] item_flags
- * Pointer to bit mask of all items detected.
- * @param[out] action_flags
- * Pointer to bit mask of all actions detected.
* @param[out] error
* Pointer to the error structure.
*
@@ -1380,15 +1338,13 @@ static struct mlx5_flow *
flow_verbs_prepare(const struct rte_flow_attr *attr __rte_unused,
const struct rte_flow_item items[],
const struct rte_flow_action actions[],
- uint64_t *item_flags,
- uint64_t *action_flags,
struct rte_flow_error *error)
{
uint32_t size = sizeof(struct mlx5_flow) + sizeof(struct ibv_flow_attr);
struct mlx5_flow *flow;
- size += flow_verbs_get_actions_and_size(actions, action_flags);
- size += flow_verbs_get_items_and_size(items, item_flags);
+ size += flow_verbs_get_actions_and_size(actions);
+ size += flow_verbs_get_items_and_size(items);
flow = rte_calloc(__func__, 1, size, 0);
if (!flow) {
rte_flow_error_set(error, ENOMEM,
--
2.11.0
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [dpdk-dev] [PATCH 1/3] net/mlx5: fix Verbs flow tunnel
2018-11-02 21:08 [dpdk-dev] [PATCH 1/3] net/mlx5: fix Verbs flow tunnel Yongseok Koh
2018-11-02 21:08 ` [dpdk-dev] [PATCH 2/3] net/mlx5: fix Direct " Yongseok Koh
2018-11-02 21:08 ` [dpdk-dev] [PATCH 3/3] net/mlx5: remove flags setting from flow preparation Yongseok Koh
@ 2018-11-04 8:17 ` Ori Kam
2018-11-05 7:20 ` [dpdk-dev] [PATCH v2 0/3] net/mlx5: fix tunnel flow Yongseok Koh
3 siblings, 0 replies; 17+ messages in thread
From: Ori Kam @ 2018-11-04 8:17 UTC (permalink / raw)
To: Yongseok Koh, Shahaf Shuler; +Cc: dev
> -----Original Message-----
> From: Yongseok Koh
> Sent: Friday, November 2, 2018 11:08 PM
> To: Shahaf Shuler <shahafs@mellanox.com>
> Cc: dev@dpdk.org; Yongseok Koh <yskoh@mellanox.com>; Ori Kam
> <orika@mellanox.com>
> Subject: [PATCH 1/3] net/mlx5: fix Verbs flow tunnel
>
> 1) Fix layer parsing
> In translation of tunneled flows, dev_flow->layers must not be used to
> check tunneled layer as it contains all the layers parsed from
> flow_drv_prepare(). Checking tunneled layer is needed to set
> IBV_FLOW_SPEC_INNER and it should be based on dynamic parsing. With
> dev_flow->layers on a tunneled flow, items will always be interpreted as
> inner as dev_flow->layer already has all the items.
>
> 2) Refactoring code
> It is partly because flow_verbs_translate_item_*() sets layer flag. Same
> code is repeating in multiple locations and that could be error-prone.
>
> - Introduce VERBS_SPEC_INNER() to unify setting IBV_FLOW_SPEC_INNER.
> - flow_verbs_translate_item_*() doesn't set parsing result -
> MLX5_FLOW_LAYER_*.
> - flow_verbs_translate_item_*() doesn't set priority or adjust hashfields
> but does only item translation. Both have to be done outside.
> - Make more consistent between Verbs and DV.
>
> 3) Remove flow_verbs_mark_update()
> This code can never be reached as validation prohibits specifying mark and
> flag actions together. No need to convert flag to mark.
>
> Fixes: 84c406e74524 ("net/mlx5: add flow translate function")
> Cc: orika@mellanox.com
>
> Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
> ---
> drivers/net/mlx5/mlx5_flow_verbs.c | 568 +++++++++++++++++-------------------
> -
> 1 file changed, 258 insertions(+), 310 deletions(-)
>
> diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c
> b/drivers/net/mlx5/mlx5_flow_verbs.c
> index 2e506b91ad..ab58c04db5 100644
> --- a/drivers/net/mlx5/mlx5_flow_verbs.c
> +++ b/drivers/net/mlx5/mlx5_flow_verbs.c
> @@ -33,6 +33,9 @@
> #include "mlx5_glue.h"
> #include "mlx5_flow.h"
>
> +#define VERBS_SPEC_INNER(item_flags) \
> + (!!((item_flags) & MLX5_FLOW_LAYER_TUNNEL) ?
> IBV_FLOW_SPEC_INNER : 0)
> +
> /**
> * Create Verbs flow counter with Verbs library.
> *
> @@ -231,27 +234,26 @@ flow_verbs_counter_query(struct rte_eth_dev *dev
> __rte_unused,
> }
>
> /**
> - * Add a verbs item specification into @p flow.
> + * Add a verbs item specification into @p verbs.
> *
> - * @param[in, out] flow
> - * Pointer to flow structure.
> + * @param[out] verbs
> + * Pointer to verbs structure.
> * @param[in] src
> * Create specification.
> * @param[in] size
> * Size in bytes of the specification to copy.
> */
> static void
> -flow_verbs_spec_add(struct mlx5_flow *flow, void *src, unsigned int size)
> +flow_verbs_spec_add(struct mlx5_flow_verbs *verbs, void *src, unsigned int
> size)
> {
> - struct mlx5_flow_verbs *verbs = &flow->verbs;
> + void *dst;
>
> - if (verbs->specs) {
> - void *dst;
> -
> - dst = (void *)(verbs->specs + verbs->size);
> - memcpy(dst, src, size);
> - ++verbs->attr->num_of_specs;
> - }
> + if (!verbs)
> + return;
> + assert(verbs->specs);
> + dst = (void *)(verbs->specs + verbs->size);
> + memcpy(dst, src, size);
> + ++verbs->attr->num_of_specs;
> verbs->size += size;
> }
>
> @@ -260,24 +262,23 @@ flow_verbs_spec_add(struct mlx5_flow *flow, void
> *src, unsigned int size)
> * the input is valid and that there is space to insert the requested item
> * into the flow.
> *
> + * @param[in, out] dev_flow
> + * Pointer to dev_flow structure.
> * @param[in] item
> * Item specification.
> * @param[in] item_flags
> - * Bit field with all detected items.
> - * @param[in, out] dev_flow
> - * Pointer to dev_flow structure.
> + * Parsed item flags.
> */
> static void
> -flow_verbs_translate_item_eth(const struct rte_flow_item *item,
> - uint64_t *item_flags,
> - struct mlx5_flow *dev_flow)
> +flow_verbs_translate_item_eth(struct mlx5_flow *dev_flow,
> + const struct rte_flow_item *item,
> + uint64_t item_flags)
> {
> const struct rte_flow_item_eth *spec = item->spec;
> const struct rte_flow_item_eth *mask = item->mask;
> - const int tunnel = !!(*item_flags & MLX5_FLOW_LAYER_TUNNEL);
> const unsigned int size = sizeof(struct ibv_flow_spec_eth);
> struct ibv_flow_spec_eth eth = {
> - .type = IBV_FLOW_SPEC_ETH | (tunnel ?
> IBV_FLOW_SPEC_INNER : 0),
> + .type = IBV_FLOW_SPEC_ETH |
> VERBS_SPEC_INNER(item_flags),
> .size = size,
> };
>
> @@ -298,11 +299,8 @@ flow_verbs_translate_item_eth(const struct
> rte_flow_item *item,
> eth.val.src_mac[i] &= eth.mask.src_mac[i];
> }
> eth.val.ether_type &= eth.mask.ether_type;
> - dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L2;
> }
> - flow_verbs_spec_add(dev_flow, ð, size);
> - *item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L2 :
> - MLX5_FLOW_LAYER_OUTER_L2;
> + flow_verbs_spec_add(&dev_flow->verbs, ð, size);
> }
>
> /**
> @@ -344,24 +342,24 @@ flow_verbs_item_vlan_update(struct ibv_flow_attr
> *attr,
> * the input is valid and that there is space to insert the requested item
> * into the flow.
> *
> - * @param[in] item
> - * Item specification.
> - * @param[in, out] item_flags
> - * Bit mask that holds all detected items.
> * @param[in, out] dev_flow
> * Pointer to dev_flow structure.
> + * @param[in] item
> + * Item specification.
> + * @param[in] item_flags
> + * Parsed item flags.
> */
> static void
> -flow_verbs_translate_item_vlan(const struct rte_flow_item *item,
> - uint64_t *item_flags,
> - struct mlx5_flow *dev_flow)
> +flow_verbs_translate_item_vlan(struct mlx5_flow *dev_flow,
> + const struct rte_flow_item *item,
> + uint64_t item_flags)
> {
> const struct rte_flow_item_vlan *spec = item->spec;
> const struct rte_flow_item_vlan *mask = item->mask;
> unsigned int size = sizeof(struct ibv_flow_spec_eth);
> - const int tunnel = !!(*item_flags & MLX5_FLOW_LAYER_TUNNEL);
> + const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
> struct ibv_flow_spec_eth eth = {
> - .type = IBV_FLOW_SPEC_ETH | (tunnel ?
> IBV_FLOW_SPEC_INNER : 0),
> + .type = IBV_FLOW_SPEC_ETH |
> VERBS_SPEC_INNER(item_flags),
> .size = size,
> };
> const uint32_t l2m = tunnel ? MLX5_FLOW_LAYER_INNER_L2 :
> @@ -377,16 +375,10 @@ flow_verbs_translate_item_vlan(const struct
> rte_flow_item *item,
> eth.mask.ether_type = mask->inner_type;
> eth.val.ether_type &= eth.mask.ether_type;
> }
> - if (!(*item_flags & l2m)) {
> - dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L2;
> - flow_verbs_spec_add(dev_flow, ð, size);
> - } else {
> + if (!(item_flags & l2m))
> + flow_verbs_spec_add(&dev_flow->verbs, ð, size);
> + else
> flow_verbs_item_vlan_update(dev_flow->verbs.attr, ð);
> - size = 0; /* Only an update is done in eth specification. */
> - }
> - *item_flags |= tunnel ?
> - (MLX5_FLOW_LAYER_INNER_L2 |
> MLX5_FLOW_LAYER_INNER_VLAN) :
> - (MLX5_FLOW_LAYER_OUTER_L2 |
> MLX5_FLOW_LAYER_OUTER_VLAN);
> }
>
> /**
> @@ -394,32 +386,28 @@ flow_verbs_translate_item_vlan(const struct
> rte_flow_item *item,
> * the input is valid and that there is space to insert the requested item
> * into the flow.
> *
> + * @param[in, out] dev_flow
> + * Pointer to dev_flow structure.
> * @param[in] item
> * Item specification.
> - * @param[in, out] item_flags
> - * Bit mask that marks all detected items.
> - * @param[in, out] dev_flow
> - * Pointer to sepacific flow structure.
> + * @param[in] item_flags
> + * Parsed item flags.
> */
> static void
> -flow_verbs_translate_item_ipv4(const struct rte_flow_item *item,
> - uint64_t *item_flags,
> - struct mlx5_flow *dev_flow)
> +flow_verbs_translate_item_ipv4(struct mlx5_flow *dev_flow,
> + const struct rte_flow_item *item,
> + uint64_t item_flags)
> {
> const struct rte_flow_item_ipv4 *spec = item->spec;
> const struct rte_flow_item_ipv4 *mask = item->mask;
> - const int tunnel = !!(*item_flags & MLX5_FLOW_LAYER_TUNNEL);
> unsigned int size = sizeof(struct ibv_flow_spec_ipv4_ext);
> struct ibv_flow_spec_ipv4_ext ipv4 = {
> - .type = IBV_FLOW_SPEC_IPV4_EXT |
> - (tunnel ? IBV_FLOW_SPEC_INNER : 0),
> + .type = IBV_FLOW_SPEC_IPV4_EXT |
> VERBS_SPEC_INNER(item_flags),
> .size = size,
> };
>
> if (!mask)
> mask = &rte_flow_item_ipv4_mask;
> - *item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
> - MLX5_FLOW_LAYER_OUTER_L3_IPV4;
> if (spec) {
> ipv4.val = (struct ibv_flow_ipv4_ext_filter){
> .src_ip = spec->hdr.src_addr,
> @@ -439,12 +427,7 @@ flow_verbs_translate_item_ipv4(const struct
> rte_flow_item *item,
> ipv4.val.proto &= ipv4.mask.proto;
> ipv4.val.tos &= ipv4.mask.tos;
> }
> - dev_flow->verbs.hash_fields |=
> - mlx5_flow_hashfields_adjust(dev_flow, tunnel,
> - MLX5_IPV4_LAYER_TYPES,
> - MLX5_IPV4_IBV_RX_HASH);
> - dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L3;
> - flow_verbs_spec_add(dev_flow, &ipv4, size);
> + flow_verbs_spec_add(&dev_flow->verbs, &ipv4, size);
> }
>
> /**
> @@ -452,31 +435,28 @@ flow_verbs_translate_item_ipv4(const struct
> rte_flow_item *item,
> * the input is valid and that there is space to insert the requested item
> * into the flow.
> *
> + * @param[in, out] dev_flow
> + * Pointer to dev_flow structure.
> * @param[in] item
> * Item specification.
> - * @param[in, out] item_flags
> - * Bit mask that marks all detected items.
> - * @param[in, out] dev_flow
> - * Pointer to sepacific flow structure.
> + * @param[in] item_flags
> + * Parsed item flags.
> */
> static void
> -flow_verbs_translate_item_ipv6(const struct rte_flow_item *item,
> - uint64_t *item_flags,
> - struct mlx5_flow *dev_flow)
> +flow_verbs_translate_item_ipv6(struct mlx5_flow *dev_flow,
> + const struct rte_flow_item *item,
> + uint64_t item_flags)
> {
> const struct rte_flow_item_ipv6 *spec = item->spec;
> const struct rte_flow_item_ipv6 *mask = item->mask;
> - const int tunnel = !!(dev_flow->layers & MLX5_FLOW_LAYER_TUNNEL);
> unsigned int size = sizeof(struct ibv_flow_spec_ipv6);
> struct ibv_flow_spec_ipv6 ipv6 = {
> - .type = IBV_FLOW_SPEC_IPV6 | (tunnel ?
> IBV_FLOW_SPEC_INNER : 0),
> + .type = IBV_FLOW_SPEC_IPV6 |
> VERBS_SPEC_INNER(item_flags),
> .size = size,
> };
>
> if (!mask)
> mask = &rte_flow_item_ipv6_mask;
> - *item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 :
> - MLX5_FLOW_LAYER_OUTER_L3_IPV6;
> if (spec) {
> unsigned int i;
> uint32_t vtc_flow_val;
> @@ -516,12 +496,7 @@ flow_verbs_translate_item_ipv6(const struct
> rte_flow_item *item,
> ipv6.val.next_hdr &= ipv6.mask.next_hdr;
> ipv6.val.hop_limit &= ipv6.mask.hop_limit;
> }
> - dev_flow->verbs.hash_fields |=
> - mlx5_flow_hashfields_adjust(dev_flow, tunnel,
> - MLX5_IPV6_LAYER_TYPES,
> - MLX5_IPV6_IBV_RX_HASH);
> - dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L3;
> - flow_verbs_spec_add(dev_flow, &ipv6, size);
> + flow_verbs_spec_add(&dev_flow->verbs, &ipv6, size);
> }
>
> /**
> @@ -529,46 +504,38 @@ flow_verbs_translate_item_ipv6(const struct
> rte_flow_item *item,
> * the input is valid and that there is space to insert the requested item
> * into the flow.
> *
> + * @param[in, out] dev_flow
> + * Pointer to dev_flow structure.
> * @param[in] item
> * Item specification.
> - * @param[in, out] item_flags
> - * Bit mask that marks all detected items.
> - * @param[in, out] dev_flow
> - * Pointer to sepacific flow structure.
> + * @param[in] item_flags
> + * Parsed item flags.
> */
> static void
> -flow_verbs_translate_item_udp(const struct rte_flow_item *item,
> - uint64_t *item_flags,
> - struct mlx5_flow *dev_flow)
> +flow_verbs_translate_item_tcp(struct mlx5_flow *dev_flow,
> + const struct rte_flow_item *item,
> + uint64_t item_flags __rte_unused)
> {
> - const struct rte_flow_item_udp *spec = item->spec;
> - const struct rte_flow_item_udp *mask = item->mask;
> - const int tunnel = !!(*item_flags & MLX5_FLOW_LAYER_TUNNEL);
> + const struct rte_flow_item_tcp *spec = item->spec;
> + const struct rte_flow_item_tcp *mask = item->mask;
> unsigned int size = sizeof(struct ibv_flow_spec_tcp_udp);
> - struct ibv_flow_spec_tcp_udp udp = {
> - .type = IBV_FLOW_SPEC_UDP | (tunnel ?
> IBV_FLOW_SPEC_INNER : 0),
> + struct ibv_flow_spec_tcp_udp tcp = {
> + .type = IBV_FLOW_SPEC_TCP |
> VERBS_SPEC_INNER(item_flags),
> .size = size,
> };
>
> if (!mask)
> - mask = &rte_flow_item_udp_mask;
> - *item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
> - MLX5_FLOW_LAYER_OUTER_L4_UDP;
> + mask = &rte_flow_item_tcp_mask;
> if (spec) {
> - udp.val.dst_port = spec->hdr.dst_port;
> - udp.val.src_port = spec->hdr.src_port;
> - udp.mask.dst_port = mask->hdr.dst_port;
> - udp.mask.src_port = mask->hdr.src_port;
> + tcp.val.dst_port = spec->hdr.dst_port;
> + tcp.val.src_port = spec->hdr.src_port;
> + tcp.mask.dst_port = mask->hdr.dst_port;
> + tcp.mask.src_port = mask->hdr.src_port;
> /* Remove unwanted bits from values. */
> - udp.val.src_port &= udp.mask.src_port;
> - udp.val.dst_port &= udp.mask.dst_port;
> + tcp.val.src_port &= tcp.mask.src_port;
> + tcp.val.dst_port &= tcp.mask.dst_port;
> }
> - dev_flow->verbs.hash_fields |=
> - mlx5_flow_hashfields_adjust(dev_flow, tunnel, ETH_RSS_UDP,
> - (IBV_RX_HASH_SRC_PORT_UDP |
> - IBV_RX_HASH_DST_PORT_UDP));
> - dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L4;
> - flow_verbs_spec_add(dev_flow, &udp, size);
> + flow_verbs_spec_add(&dev_flow->verbs, &tcp, size);
> }
>
> /**
> @@ -576,46 +543,38 @@ flow_verbs_translate_item_udp(const struct
> rte_flow_item *item,
> * the input is valid and that there is space to insert the requested item
> * into the flow.
> *
> + * @param[in, out] dev_flow
> + * Pointer to dev_flow structure.
> * @param[in] item
> * Item specification.
> - * @param[in, out] item_flags
> - * Bit mask that marks all detected items.
> - * @param[in, out] dev_flow
> - * Pointer to sepacific flow structure.
> + * @param[in] item_flags
> + * Parsed item flags.
> */
> static void
> -flow_verbs_translate_item_tcp(const struct rte_flow_item *item,
> - uint64_t *item_flags,
> - struct mlx5_flow *dev_flow)
> +flow_verbs_translate_item_udp(struct mlx5_flow *dev_flow,
> + const struct rte_flow_item *item,
> + uint64_t item_flags __rte_unused)
> {
> - const struct rte_flow_item_tcp *spec = item->spec;
> - const struct rte_flow_item_tcp *mask = item->mask;
> - const int tunnel = !!(dev_flow->layers & MLX5_FLOW_LAYER_TUNNEL);
> + const struct rte_flow_item_udp *spec = item->spec;
> + const struct rte_flow_item_udp *mask = item->mask;
> unsigned int size = sizeof(struct ibv_flow_spec_tcp_udp);
> - struct ibv_flow_spec_tcp_udp tcp = {
> - .type = IBV_FLOW_SPEC_TCP | (tunnel ?
> IBV_FLOW_SPEC_INNER : 0),
> + struct ibv_flow_spec_tcp_udp udp = {
> + .type = IBV_FLOW_SPEC_UDP |
> VERBS_SPEC_INNER(item_flags),
> .size = size,
> };
>
> if (!mask)
> - mask = &rte_flow_item_tcp_mask;
> - *item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP :
> - MLX5_FLOW_LAYER_OUTER_L4_TCP;
> + mask = &rte_flow_item_udp_mask;
> if (spec) {
> - tcp.val.dst_port = spec->hdr.dst_port;
> - tcp.val.src_port = spec->hdr.src_port;
> - tcp.mask.dst_port = mask->hdr.dst_port;
> - tcp.mask.src_port = mask->hdr.src_port;
> + udp.val.dst_port = spec->hdr.dst_port;
> + udp.val.src_port = spec->hdr.src_port;
> + udp.mask.dst_port = mask->hdr.dst_port;
> + udp.mask.src_port = mask->hdr.src_port;
> /* Remove unwanted bits from values. */
> - tcp.val.src_port &= tcp.mask.src_port;
> - tcp.val.dst_port &= tcp.mask.dst_port;
> + udp.val.src_port &= udp.mask.src_port;
> + udp.val.dst_port &= udp.mask.dst_port;
> }
> - dev_flow->verbs.hash_fields |=
> - mlx5_flow_hashfields_adjust(dev_flow, tunnel, ETH_RSS_TCP,
> - (IBV_RX_HASH_SRC_PORT_TCP |
> - IBV_RX_HASH_DST_PORT_TCP));
> - dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L4;
> - flow_verbs_spec_add(dev_flow, &tcp, size);
> + flow_verbs_spec_add(&dev_flow->verbs, &udp, size);
> }
>
> /**
> @@ -623,17 +582,17 @@ flow_verbs_translate_item_tcp(const struct
> rte_flow_item *item,
> * the input is valid and that there is space to insert the requested item
> * into the flow.
> *
> + * @param[in, out] dev_flow
> + * Pointer to dev_flow structure.
> * @param[in] item
> * Item specification.
> - * @param[in, out] item_flags
> - * Bit mask that marks all detected items.
> - * @param[in, out] dev_flow
> - * Pointer to sepacific flow structure.
> + * @param[in] item_flags
> + * Parsed item flags.
> */
> static void
> -flow_verbs_translate_item_vxlan(const struct rte_flow_item *item,
> - uint64_t *item_flags,
> - struct mlx5_flow *dev_flow)
> +flow_verbs_translate_item_vxlan(struct mlx5_flow *dev_flow,
> + const struct rte_flow_item *item,
> + uint64_t item_flags __rte_unused)
> {
> const struct rte_flow_item_vxlan *spec = item->spec;
> const struct rte_flow_item_vxlan *mask = item->mask;
> @@ -657,9 +616,7 @@ flow_verbs_translate_item_vxlan(const struct
> rte_flow_item *item,
> /* Remove unwanted bits from values. */
> vxlan.val.tunnel_id &= vxlan.mask.tunnel_id;
> }
> - flow_verbs_spec_add(dev_flow, &vxlan, size);
> - dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L2;
> - *item_flags |= MLX5_FLOW_LAYER_VXLAN;
> + flow_verbs_spec_add(&dev_flow->verbs, &vxlan, size);
> }
>
> /**
> @@ -667,17 +624,17 @@ flow_verbs_translate_item_vxlan(const struct
> rte_flow_item *item,
> * the input is valid and that there is space to insert the requested item
> * into the flow.
> *
> + * @param[in, out] dev_flow
> + * Pointer to dev_flow structure.
> * @param[in] item
> * Item specification.
> - * @param[in, out] item_flags
> - * Bit mask that marks all detected items.
> - * @param[in, out] dev_flow
> - * Pointer to sepacific flow structure.
> + * @param[in] item_flags
> + * Parsed item flags.
> */
> static void
> -flow_verbs_translate_item_vxlan_gpe(const struct rte_flow_item *item,
> - uint64_t *item_flags,
> - struct mlx5_flow *dev_flow)
> +flow_verbs_translate_item_vxlan_gpe(struct mlx5_flow *dev_flow,
> + const struct rte_flow_item *item,
> + uint64_t item_flags __rte_unused)
> {
> const struct rte_flow_item_vxlan_gpe *spec = item->spec;
> const struct rte_flow_item_vxlan_gpe *mask = item->mask;
> @@ -701,9 +658,7 @@ flow_verbs_translate_item_vxlan_gpe(const struct
> rte_flow_item *item,
> /* Remove unwanted bits from values. */
> vxlan_gpe.val.tunnel_id &= vxlan_gpe.mask.tunnel_id;
> }
> - flow_verbs_spec_add(dev_flow, &vxlan_gpe, size);
> - dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L2;
> - *item_flags |= MLX5_FLOW_LAYER_VXLAN_GPE;
> + flow_verbs_spec_add(&dev_flow->verbs, &vxlan_gpe, size);
> }
>
> /**
> @@ -763,17 +718,17 @@ flow_verbs_item_gre_ip_protocol_update(struct
> ibv_flow_attr *attr,
> * the input is valid and that there is space to insert the requested item
> * into the flow.
> *
> + * @param[in, out] dev_flow
> + * Pointer to dev_flow structure.
> * @param[in] item
> * Item specification.
> - * @param[in, out] item_flags
> - * Bit mask that marks all detected items.
> - * @param[in, out] dev_flow
> - * Pointer to sepacific flow structure.
> + * @param[in] item_flags
> + * Parsed item flags.
> */
> static void
> -flow_verbs_translate_item_gre(const struct rte_flow_item *item
> __rte_unused,
> - uint64_t *item_flags,
> - struct mlx5_flow *dev_flow)
> +flow_verbs_translate_item_gre(struct mlx5_flow *dev_flow,
> + const struct rte_flow_item *item __rte_unused,
> + uint64_t item_flags)
> {
> struct mlx5_flow_verbs *verbs = &dev_flow->verbs;
> #ifndef HAVE_IBV_DEVICE_MPLS_SUPPORT
> @@ -804,7 +759,7 @@ flow_verbs_translate_item_gre(const struct
> rte_flow_item *item __rte_unused,
> tunnel.val.key &= tunnel.mask.key;
> }
> #endif
> - if (*item_flags & MLX5_FLOW_LAYER_OUTER_L3_IPV4)
> + if (item_flags & MLX5_FLOW_LAYER_OUTER_L3_IPV4)
> flow_verbs_item_gre_ip_protocol_update(verbs->attr,
>
> IBV_FLOW_SPEC_IPV4_EXT,
> IPPROTO_GRE);
> @@ -812,9 +767,7 @@ flow_verbs_translate_item_gre(const struct
> rte_flow_item *item __rte_unused,
> flow_verbs_item_gre_ip_protocol_update(verbs->attr,
> IBV_FLOW_SPEC_IPV6,
> IPPROTO_GRE);
> - flow_verbs_spec_add(dev_flow, &tunnel, size);
> - verbs->attr->priority = MLX5_PRIORITY_MAP_L2;
> - *item_flags |= MLX5_FLOW_LAYER_GRE;
> + flow_verbs_spec_add(verbs, &tunnel, size);
> }
>
> /**
> @@ -822,17 +775,17 @@ flow_verbs_translate_item_gre(const struct
> rte_flow_item *item __rte_unused,
> * the input is valid and that there is space to insert the requested action
> * into the flow. This function also return the action that was added.
> *
> + * @param[in, out] dev_flow
> + * Pointer to dev_flow structure.
> * @param[in] item
> * Item specification.
> - * @param[in, out] item_flags
> - * Bit mask that marks all detected items.
> - * @param[in, out] dev_flow
> - * Pointer to sepacific flow structure.
> + * @param[in] item_flags
> + * Parsed item flags.
> */
> static void
> -flow_verbs_translate_item_mpls(const struct rte_flow_item *item
> __rte_unused,
> - uint64_t *action_flags __rte_unused,
> - struct mlx5_flow *dev_flow __rte_unused)
> +flow_verbs_translate_item_mpls(struct mlx5_flow *dev_flow __rte_unused,
> + const struct rte_flow_item *item __rte_unused,
> + uint64_t item_flags __rte_unused)
> {
> #ifdef HAVE_IBV_DEVICE_MPLS_SUPPORT
> const struct rte_flow_item_mpls *spec = item->spec;
> @@ -851,25 +804,24 @@ flow_verbs_translate_item_mpls(const struct
> rte_flow_item *item __rte_unused,
> /* Remove unwanted bits from values. */
> mpls.val.label &= mpls.mask.label;
> }
> - flow_verbs_spec_add(dev_flow, &mpls, size);
> - dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L2;
> - *action_flags |= MLX5_FLOW_LAYER_MPLS;
> + flow_verbs_spec_add(&dev_flow->verbs, &mpls, size);
> #endif
> }
>
> /**
> * Convert the @p action into a Verbs specification. This function assumes that
> * the input is valid and that there is space to insert the requested action
> - * into the flow. This function also return the action that was added.
> + * into the flow.
> *
> - * @param[in, out] action_flags
> - * Pointer to the detected actions.
> * @param[in] dev_flow
> * Pointer to mlx5_flow.
> + * @param[in] action
> + * Action configuration.
> */
> static void
> -flow_verbs_translate_action_drop(uint64_t *action_flags,
> - struct mlx5_flow *dev_flow)
> +flow_verbs_translate_action_drop
> + (struct mlx5_flow *dev_flow,
> + const struct rte_flow_action *action __rte_unused)
> {
> unsigned int size = sizeof(struct ibv_flow_spec_action_drop);
> struct ibv_flow_spec_action_drop drop = {
> @@ -877,26 +829,22 @@ flow_verbs_translate_action_drop(uint64_t *action_flags,
> .size = size,
> };
>
> - flow_verbs_spec_add(dev_flow, &drop, size);
> - *action_flags |= MLX5_FLOW_ACTION_DROP;
> + flow_verbs_spec_add(&dev_flow->verbs, &drop, size);
> }
>
> /**
> * Convert the @p action into a Verbs specification. This function assumes that
> * the input is valid and that there is space to insert the requested action
> - * into the flow. This function also return the action that was added.
> + * into the flow.
> *
> - * @param[in] action
> - * Action configuration.
> - * @param[in, out] action_flags
> - * Pointer to the detected actions.
> * @param[in] dev_flow
> * Pointer to mlx5_flow.
> + * @param[in] action
> + * Action configuration.
> */
> static void
> -flow_verbs_translate_action_queue(const struct rte_flow_action *action,
> - uint64_t *action_flags,
> - struct mlx5_flow *dev_flow)
> +flow_verbs_translate_action_queue(struct mlx5_flow *dev_flow,
> + const struct rte_flow_action *action)
> {
> const struct rte_flow_action_queue *queue = action->conf;
> struct rte_flow *flow = dev_flow->flow;
> @@ -904,13 +852,12 @@ flow_verbs_translate_action_queue(const struct rte_flow_action *action,
> if (flow->queue)
> (*flow->queue)[0] = queue->index;
> flow->rss.queue_num = 1;
> - *action_flags |= MLX5_FLOW_ACTION_QUEUE;
> }
>
> /**
> * Convert the @p action into a Verbs specification. This function assumes that
> * the input is valid and that there is space to insert the requested action
> - * into the flow. This function also return the action that was added.
> + * into the flow.
> *
> * @param[in] action
> * Action configuration.
> @@ -920,9 +867,8 @@ flow_verbs_translate_action_queue(const struct rte_flow_action *action,
> * Pointer to mlx5_flow.
> */
> static void
> -flow_verbs_translate_action_rss(const struct rte_flow_action *action,
> - uint64_t *action_flags,
> - struct mlx5_flow *dev_flow)
> +flow_verbs_translate_action_rss(struct mlx5_flow *dev_flow,
> + const struct rte_flow_action *action)
> {
> const struct rte_flow_action_rss *rss = action->conf;
> struct rte_flow *flow = dev_flow->flow;
> @@ -934,26 +880,22 @@ flow_verbs_translate_action_rss(const struct rte_flow_action *action,
> memcpy(flow->key, rss->key, MLX5_RSS_HASH_KEY_LEN);
> flow->rss.types = rss->types;
> flow->rss.level = rss->level;
> - *action_flags |= MLX5_FLOW_ACTION_RSS;
> }
>
> /**
> * Convert the @p action into a Verbs specification. This function assumes that
> * the input is valid and that there is space to insert the requested action
> - * into the flow. This function also return the action that was added.
> + * into the flow.
> *
> - * @param[in] action
> - * Action configuration.
> - * @param[in, out] action_flags
> - * Pointer to the detected actions.
> * @param[in] dev_flow
> * Pointer to mlx5_flow.
> + * @param[in] action
> + * Action configuration.
> */
> static void
> flow_verbs_translate_action_flag
> - (const struct rte_flow_action *action __rte_unused,
> - uint64_t *action_flags,
> - struct mlx5_flow *dev_flow)
> + (struct mlx5_flow *dev_flow,
> + const struct rte_flow_action *action __rte_unused)
> {
> unsigned int size = sizeof(struct ibv_flow_spec_action_tag);
> struct ibv_flow_spec_action_tag tag = {
> @@ -961,87 +903,44 @@ flow_verbs_translate_action_flag
> .size = size,
> .tag_id = mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT),
> };
> - *action_flags |= MLX5_FLOW_ACTION_MARK;
> - flow_verbs_spec_add(dev_flow, &tag, size);
> -}
>
> -/**
> - * Update verbs specification to modify the flag to mark.
> - *
> - * @param[in, out] verbs
> - * Pointer to the mlx5_flow_verbs structure.
> - * @param[in] mark_id
> - * Mark identifier to replace the flag.
> - */
> -static void
> -flow_verbs_mark_update(struct mlx5_flow_verbs *verbs, uint32_t mark_id)
> -{
> - struct ibv_spec_header *hdr;
> - int i;
> -
> - if (!verbs)
> - return;
> - /* Update Verbs specification. */
> - hdr = (struct ibv_spec_header *)verbs->specs;
> - if (!hdr)
> - return;
> - for (i = 0; i != verbs->attr->num_of_specs; ++i) {
> - if (hdr->type == IBV_FLOW_SPEC_ACTION_TAG) {
> - struct ibv_flow_spec_action_tag *t =
> - (struct ibv_flow_spec_action_tag *)hdr;
> -
> - t->tag_id = mlx5_flow_mark_set(mark_id);
> - }
> - hdr = (struct ibv_spec_header *)((uintptr_t)hdr + hdr->size);
> - }
> + flow_verbs_spec_add(&dev_flow->verbs, &tag, size);
> }
>
> /**
> * Convert the @p action into a Verbs specification. This function assumes that
> * the input is valid and that there is space to insert the requested action
> - * into the flow. This function also return the action that was added.
> + * into the flow.
> *
> - * @param[in] action
> - * Action configuration.
> - * @param[in, out] action_flags
> - * Pointer to the detected actions.
> * @param[in] dev_flow
> * Pointer to mlx5_flow.
> + * @param[in] action
> + * Action configuration.
> */
> static void
> -flow_verbs_translate_action_mark(const struct rte_flow_action *action,
> - uint64_t *action_flags,
> - struct mlx5_flow *dev_flow)
> +flow_verbs_translate_action_mark(struct mlx5_flow *dev_flow,
> + const struct rte_flow_action *action)
> {
> const struct rte_flow_action_mark *mark = action->conf;
> unsigned int size = sizeof(struct ibv_flow_spec_action_tag);
> struct ibv_flow_spec_action_tag tag = {
> .type = IBV_FLOW_SPEC_ACTION_TAG,
> .size = size,
> + .tag_id = mlx5_flow_mark_set(mark->id),
> };
> - struct mlx5_flow_verbs *verbs = &dev_flow->verbs;
>
> - if (*action_flags & MLX5_FLOW_ACTION_FLAG) {
> - flow_verbs_mark_update(verbs, mark->id);
> - size = 0;
> - } else {
> - tag.tag_id = mlx5_flow_mark_set(mark->id);
> - flow_verbs_spec_add(dev_flow, &tag, size);
> - }
> - *action_flags |= MLX5_FLOW_ACTION_MARK;
> + flow_verbs_spec_add(&dev_flow->verbs, &tag, size);
> }
>
> /**
> * Convert the @p action into a Verbs specification. This function assumes that
> * the input is valid and that there is space to insert the requested action
> - * into the flow. This function also return the action that was added.
> + * into the flow.
> *
> * @param[in] dev
> * Pointer to the Ethernet device structure.
> * @param[in] action
> * Action configuration.
> - * @param[in, out] action_flags
> - * Pointer to the detected actions.
> * @param[in] dev_flow
> * Pointer to mlx5_flow.
> * @param[out] error
> @@ -1051,10 +950,9 @@ flow_verbs_translate_action_mark(const struct rte_flow_action *action,
> * 0 On success else a negative errno value is returned and rte_errno is set.
> */
> static int
> -flow_verbs_translate_action_count(struct rte_eth_dev *dev,
> +flow_verbs_translate_action_count(struct mlx5_flow *dev_flow,
> const struct rte_flow_action *action,
> - uint64_t *action_flags,
> - struct mlx5_flow *dev_flow,
> + struct rte_eth_dev *dev,
> struct rte_flow_error *error)
> {
> const struct rte_flow_action_count *count = action->conf;
> @@ -1078,13 +976,12 @@ flow_verbs_translate_action_count(struct rte_eth_dev *dev,
> "cannot get counter"
> " context.");
> }
> - *action_flags |= MLX5_FLOW_ACTION_COUNT;
> #if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42)
> counter.counter_set_handle = flow->counter->cs->handle;
> - flow_verbs_spec_add(dev_flow, &counter, size);
> + flow_verbs_spec_add(&dev_flow->verbs, &counter, size);
> #elif defined(HAVE_IBV_DEVICE_COUNTERS_SET_V45)
> counter.counters = flow->counter->cs;
> - flow_verbs_spec_add(dev_flow, &counter, size);
> + flow_verbs_spec_add(&dev_flow->verbs, &counter, size);
> #endif
> return 0;
> }
> @@ -1116,7 +1013,6 @@ flow_verbs_validate(struct rte_eth_dev *dev,
> int ret;
> uint64_t action_flags = 0;
> uint64_t item_flags = 0;
> - int tunnel = 0;
> uint8_t next_protocol = 0xff;
>
> if (items == NULL)
> @@ -1125,9 +1021,9 @@ flow_verbs_validate(struct rte_eth_dev *dev,
> if (ret < 0)
> return ret;
> for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
> + int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
> int ret = 0;
>
> - tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
> switch (items->type) {
> case RTE_FLOW_ITEM_TYPE_VOID:
> break;
> @@ -1144,8 +1040,10 @@ flow_verbs_validate(struct rte_eth_dev *dev,
> error);
> if (ret < 0)
> return ret;
> - item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_VLAN :
> - MLX5_FLOW_LAYER_OUTER_VLAN;
> + item_flags |= tunnel ? (MLX5_FLOW_LAYER_INNER_L2 |
> + MLX5_FLOW_LAYER_INNER_VLAN) :
> + (MLX5_FLOW_LAYER_OUTER_L2 |
> + MLX5_FLOW_LAYER_OUTER_VLAN);
> break;
> case RTE_FLOW_ITEM_TYPE_IPV4:
> ret = mlx5_flow_validate_item_ipv4(items, item_flags,
> @@ -1395,8 +1293,11 @@ flow_verbs_get_items_and_size(const struct rte_flow_item items[],
> break;
> case RTE_FLOW_ITEM_TYPE_VLAN:
> size += sizeof(struct ibv_flow_spec_eth);
> - detected_items |= tunnel ? MLX5_FLOW_LAYER_INNER_VLAN :
> - MLX5_FLOW_LAYER_OUTER_VLAN;
> + detected_items |=
> + tunnel ? (MLX5_FLOW_LAYER_INNER_L2 |
> + MLX5_FLOW_LAYER_INNER_VLAN) :
> + (MLX5_FLOW_LAYER_OUTER_L2 |
> + MLX5_FLOW_LAYER_OUTER_VLAN);
> break;
> case RTE_FLOW_ITEM_TYPE_IPV4:
> size += sizeof(struct ibv_flow_spec_ipv4_ext);
> @@ -1528,50 +1429,48 @@ flow_verbs_translate(struct rte_eth_dev *dev,
> const struct rte_flow_action actions[],
> struct rte_flow_error *error)
> {
> - uint64_t action_flags = 0;
> + struct rte_flow *flow = dev_flow->flow;
> uint64_t item_flags = 0;
> + uint64_t action_flags = 0;
> uint64_t priority = attr->priority;
> + uint32_t subpriority = 0;
> struct priv *priv = dev->data->dev_private;
>
> if (priority == MLX5_FLOW_PRIO_RSVD)
> priority = priv->config.flow_prio - 1;
> for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
> int ret;
> +
> switch (actions->type) {
> case RTE_FLOW_ACTION_TYPE_VOID:
> break;
> case RTE_FLOW_ACTION_TYPE_FLAG:
> - flow_verbs_translate_action_flag(actions,
> - &action_flags,
> - dev_flow);
> + flow_verbs_translate_action_flag(dev_flow, actions);
> + action_flags |= MLX5_FLOW_ACTION_FLAG;
> break;
> case RTE_FLOW_ACTION_TYPE_MARK:
> - flow_verbs_translate_action_mark(actions,
> - &action_flags,
> - dev_flow);
> + flow_verbs_translate_action_mark(dev_flow, actions);
> + action_flags |= MLX5_FLOW_ACTION_MARK;
> break;
> case RTE_FLOW_ACTION_TYPE_DROP:
> - flow_verbs_translate_action_drop(&action_flags,
> - dev_flow);
> + flow_verbs_translate_action_drop(dev_flow, actions);
> + action_flags |= MLX5_FLOW_ACTION_DROP;
> break;
> case RTE_FLOW_ACTION_TYPE_QUEUE:
> - flow_verbs_translate_action_queue(actions,
> - &action_flags,
> - dev_flow);
> + flow_verbs_translate_action_queue(dev_flow, actions);
> + action_flags |= MLX5_FLOW_ACTION_QUEUE;
> break;
> case RTE_FLOW_ACTION_TYPE_RSS:
> - flow_verbs_translate_action_rss(actions,
> - &action_flags,
> - dev_flow);
> + flow_verbs_translate_action_rss(dev_flow, actions);
> + action_flags |= MLX5_FLOW_ACTION_RSS;
> break;
> case RTE_FLOW_ACTION_TYPE_COUNT:
> - ret = flow_verbs_translate_action_count(dev,
> + ret = flow_verbs_translate_action_count(dev_flow,
> actions,
> - &action_flags,
> - dev_flow,
> - error);
> + dev, error);
> if (ret < 0)
> return ret;
> + action_flags |= MLX5_FLOW_ACTION_COUNT;
> break;
> default:
> return rte_flow_error_set(error, ENOTSUP,
> @@ -1580,51 +1479,100 @@ flow_verbs_translate(struct rte_eth_dev *dev,
> "action not supported");
> }
> }
> - /* Device flow should have action flags by flow_drv_prepare(). */
> - assert(dev_flow->flow->actions == action_flags);
> + flow->actions = action_flags;
> for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
> + int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
> +
> switch (items->type) {
> case RTE_FLOW_ITEM_TYPE_VOID:
> break;
> case RTE_FLOW_ITEM_TYPE_ETH:
> - flow_verbs_translate_item_eth(items, &item_flags,
> - dev_flow);
> + flow_verbs_translate_item_eth(dev_flow, items,
> + item_flags);
> + subpriority = MLX5_PRIORITY_MAP_L2;
> + item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L2 :
> + MLX5_FLOW_LAYER_OUTER_L2;
> break;
> case RTE_FLOW_ITEM_TYPE_VLAN:
> - flow_verbs_translate_item_vlan(items, &item_flags,
> - dev_flow);
> + flow_verbs_translate_item_vlan(dev_flow, items,
> + item_flags);
> + subpriority = MLX5_PRIORITY_MAP_L2;
> + item_flags |= tunnel ? (MLX5_FLOW_LAYER_INNER_L2 |
> + MLX5_FLOW_LAYER_INNER_VLAN) :
> + (MLX5_FLOW_LAYER_OUTER_L2 |
> + MLX5_FLOW_LAYER_OUTER_VLAN);
> break;
> case RTE_FLOW_ITEM_TYPE_IPV4:
> - flow_verbs_translate_item_ipv4(items, &item_flags,
> - dev_flow);
> + flow_verbs_translate_item_ipv4(dev_flow, items,
> + item_flags);
> + subpriority = MLX5_PRIORITY_MAP_L3;
> + dev_flow->verbs.hash_fields |=
> + mlx5_flow_hashfields_adjust
> + (dev_flow, tunnel,
> + MLX5_IPV4_LAYER_TYPES,
> + MLX5_IPV4_IBV_RX_HASH);
> + item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
> + MLX5_FLOW_LAYER_OUTER_L3_IPV4;
> break;
> case RTE_FLOW_ITEM_TYPE_IPV6:
> - flow_verbs_translate_item_ipv6(items, &item_flags,
> - dev_flow);
> - break;
> - case RTE_FLOW_ITEM_TYPE_UDP:
> - flow_verbs_translate_item_udp(items, &item_flags,
> - dev_flow);
> + flow_verbs_translate_item_ipv6(dev_flow, items,
> + item_flags);
> + subpriority = MLX5_PRIORITY_MAP_L3;
> + dev_flow->verbs.hash_fields |=
> + mlx5_flow_hashfields_adjust
> + (dev_flow, tunnel,
> + MLX5_IPV6_LAYER_TYPES,
> + MLX5_IPV6_IBV_RX_HASH);
> + item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 :
> + MLX5_FLOW_LAYER_OUTER_L3_IPV6;
> break;
> case RTE_FLOW_ITEM_TYPE_TCP:
> - flow_verbs_translate_item_tcp(items, &item_flags,
> - dev_flow);
> + flow_verbs_translate_item_tcp(dev_flow, items,
> + item_flags);
> + subpriority = MLX5_PRIORITY_MAP_L4;
> + dev_flow->verbs.hash_fields |=
> + mlx5_flow_hashfields_adjust
> + (dev_flow, tunnel, ETH_RSS_TCP,
> + (IBV_RX_HASH_SRC_PORT_TCP |
> + IBV_RX_HASH_DST_PORT_TCP));
> + item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP :
> + MLX5_FLOW_LAYER_OUTER_L4_TCP;
> + break;
> + case RTE_FLOW_ITEM_TYPE_UDP:
> + flow_verbs_translate_item_udp(dev_flow, items,
> + item_flags);
> + subpriority = MLX5_PRIORITY_MAP_L4;
> + dev_flow->verbs.hash_fields |=
> + mlx5_flow_hashfields_adjust
> + (dev_flow, tunnel, ETH_RSS_UDP,
> + (IBV_RX_HASH_SRC_PORT_UDP |
> + IBV_RX_HASH_DST_PORT_UDP));
> + item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
> + MLX5_FLOW_LAYER_OUTER_L4_UDP;
> break;
> case RTE_FLOW_ITEM_TYPE_VXLAN:
> - flow_verbs_translate_item_vxlan(items, &item_flags,
> - dev_flow);
> + flow_verbs_translate_item_vxlan(dev_flow, items,
> + item_flags);
> + subpriority = MLX5_PRIORITY_MAP_L2;
> + item_flags |= MLX5_FLOW_LAYER_VXLAN;
> break;
> case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
> - flow_verbs_translate_item_vxlan_gpe(items, &item_flags,
> - dev_flow);
> + flow_verbs_translate_item_vxlan_gpe(dev_flow, items,
> + item_flags);
> + subpriority = MLX5_PRIORITY_MAP_L2;
> + item_flags |= MLX5_FLOW_LAYER_VXLAN_GPE;
> break;
> case RTE_FLOW_ITEM_TYPE_GRE:
> - flow_verbs_translate_item_gre(items, &item_flags,
> - dev_flow);
> + flow_verbs_translate_item_gre(dev_flow, items,
> + item_flags);
> + subpriority = MLX5_PRIORITY_MAP_L2;
> + item_flags |= MLX5_FLOW_LAYER_GRE;
> break;
> case RTE_FLOW_ITEM_TYPE_MPLS:
> - flow_verbs_translate_item_mpls(items, &item_flags,
> - dev_flow);
> + flow_verbs_translate_item_mpls(dev_flow, items,
> + item_flags);
> + subpriority = MLX5_PRIORITY_MAP_L2;
> + item_flags |= MLX5_FLOW_LAYER_MPLS;
> break;
> default:
> return rte_flow_error_set(error, ENOTSUP,
> @@ -1633,9 +1581,9 @@ flow_verbs_translate(struct rte_eth_dev *dev,
> "item not supported");
> }
> }
> + dev_flow->layers = item_flags;
> dev_flow->verbs.attr->priority =
> - mlx5_flow_adjust_priority(dev, priority,
> - dev_flow->verbs.attr->priority);
> + mlx5_flow_adjust_priority(dev, priority, subpriority);
> return 0;
> }
>
> --
> 2.11.0
Thanks,
Acked-by: Ori Kam <orika@mellanox.com>
* Re: [dpdk-dev] [PATCH 2/3] net/mlx5: fix Direct Verbs flow tunnel
2018-11-02 21:08 ` [dpdk-dev] [PATCH 2/3] net/mlx5: fix Direct " Yongseok Koh
@ 2018-11-04 8:22 ` Ori Kam
2018-11-05 5:37 ` Yongseok Koh
0 siblings, 1 reply; 17+ messages in thread
From: Ori Kam @ 2018-11-04 8:22 UTC (permalink / raw)
To: Yongseok Koh, Shahaf Shuler; +Cc: dev
> -----Original Message-----
> From: Yongseok Koh
> Sent: Friday, November 2, 2018 11:08 PM
> To: Shahaf Shuler <shahafs@mellanox.com>
> Cc: dev@dpdk.org; Yongseok Koh <yskoh@mellanox.com>; Ori Kam
> <orika@mellanox.com>
> Subject: [PATCH 2/3] net/mlx5: fix Direct Verbs flow tunnel
>
> 1) Fix layer parsing
> In translation of tunneled flows, dev_flow->layers must not be used to
> check tunneled layer as it contains all the layers parsed from
> flow_drv_prepare(). Checking tunneled layer is needed to distinguish
> between outer and inner item. This should be based on dynamic parsing. With
> dev_flow->layers on a tunneled flow, items will always be interpreted as
> inner as dev_flow->layers already has all the items. Dynamic parsing
> (item_flags) is added as no such parsing existed before.
>
> 2) Refactoring code
> - flow_dv_create_item() and flow_dv_create_action() are merged into
> flow_dv_translate() for consistency with Verbs and *_validate().
I don't like the idea of combining 2 distinct functions into one.
I think a function should be as short as possible and do only one thing;
if there is no good reason to combine two functions, they should not be
combined.
If you want to align the Direct Verbs and Verbs paths, I think we can split
the Verbs code instead.
>
> Fixes: 246636411536 ("net/mlx5: fix flow tunnel handling")
> Fixes: d02cb0691299 ("net/mlx5: add Direct Verbs translate actions")
> Fixes: fc2c498ccb94 ("net/mlx5: add Direct Verbs translate items")
> Cc: orika@mellanox.com
>
> Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
> ---
> drivers/net/mlx5/mlx5_flow_dv.c | 485 +++++++++++++++++++-----------------
> 1 file changed, 232 insertions(+), 253 deletions(-)
>
> diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
> index c11ecd4c1f..44e2a920eb 100644
> --- a/drivers/net/mlx5/mlx5_flow_dv.c
> +++ b/drivers/net/mlx5/mlx5_flow_dv.c
> @@ -1602,248 +1602,6 @@ flow_dv_translate_item_meta(void *matcher, void *key,
> }
> }
>
> -/**
> - * Update the matcher and the value based the selected item.
> - *
> - * @param[in, out] matcher
> - * Flow matcher.
> - * @param[in, out] key
> - * Flow matcher value.
> - * @param[in] item
> - * Flow pattern to translate.
> - * @param[in, out] dev_flow
> - * Pointer to the mlx5_flow.
> - * @param[in] inner
> - * Item is inner pattern.
> - */
> -static void
> -flow_dv_create_item(void *matcher, void *key,
> - const struct rte_flow_item *item,
> - struct mlx5_flow *dev_flow,
> - int inner)
> -{
> - struct mlx5_flow_dv_matcher *tmatcher = matcher;
> -
> - switch (item->type) {
> - case RTE_FLOW_ITEM_TYPE_ETH:
> - flow_dv_translate_item_eth(tmatcher->mask.buf, key, item,
> - inner);
> - tmatcher->priority = MLX5_PRIORITY_MAP_L2;
> - break;
> - case RTE_FLOW_ITEM_TYPE_VLAN:
> - flow_dv_translate_item_vlan(tmatcher->mask.buf, key, item,
> - inner);
> - break;
> - case RTE_FLOW_ITEM_TYPE_IPV4:
> - flow_dv_translate_item_ipv4(tmatcher->mask.buf, key, item,
> - inner);
> - tmatcher->priority = MLX5_PRIORITY_MAP_L3;
> - dev_flow->dv.hash_fields |=
> - mlx5_flow_hashfields_adjust(dev_flow, inner,
> - MLX5_IPV4_LAYER_TYPES,
> - MLX5_IPV4_IBV_RX_HASH);
> - break;
> - case RTE_FLOW_ITEM_TYPE_IPV6:
> - flow_dv_translate_item_ipv6(tmatcher->mask.buf, key, item,
> - inner);
> - tmatcher->priority = MLX5_PRIORITY_MAP_L3;
> - dev_flow->dv.hash_fields |=
> - mlx5_flow_hashfields_adjust(dev_flow, inner,
> - MLX5_IPV6_LAYER_TYPES,
> - MLX5_IPV6_IBV_RX_HASH);
> - break;
> - case RTE_FLOW_ITEM_TYPE_TCP:
> - flow_dv_translate_item_tcp(tmatcher->mask.buf, key, item,
> - inner);
> - tmatcher->priority = MLX5_PRIORITY_MAP_L4;
> - dev_flow->dv.hash_fields |=
> - mlx5_flow_hashfields_adjust(dev_flow, inner,
> - ETH_RSS_TCP,
> - (IBV_RX_HASH_SRC_PORT_TCP |
> - IBV_RX_HASH_DST_PORT_TCP));
> - break;
> - case RTE_FLOW_ITEM_TYPE_UDP:
> - flow_dv_translate_item_udp(tmatcher->mask.buf, key, item,
> - inner);
> - tmatcher->priority = MLX5_PRIORITY_MAP_L4;
> - dev_flow->verbs.hash_fields |=
> - mlx5_flow_hashfields_adjust(dev_flow, inner,
> - ETH_RSS_UDP,
> - (IBV_RX_HASH_SRC_PORT_UDP |
> - IBV_RX_HASH_DST_PORT_UDP));
> - break;
> - case RTE_FLOW_ITEM_TYPE_GRE:
> - flow_dv_translate_item_gre(tmatcher->mask.buf, key, item,
> - inner);
> - break;
> - case RTE_FLOW_ITEM_TYPE_NVGRE:
> - flow_dv_translate_item_nvgre(tmatcher->mask.buf, key, item,
> - inner);
> - break;
> - case RTE_FLOW_ITEM_TYPE_VXLAN:
> - case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
> - flow_dv_translate_item_vxlan(tmatcher->mask.buf, key, item,
> - inner);
> - break;
> - case RTE_FLOW_ITEM_TYPE_META:
> - flow_dv_translate_item_meta(tmatcher->mask.buf, key, item);
> - break;
> - default:
> - break;
> - }
> -}
> -
> -/**
> - * Store the requested actions in an array.
> - *
> - * @param[in] dev
> - * Pointer to rte_eth_dev structure.
> - * @param[in] action
> - * Flow action to translate.
> - * @param[in, out] dev_flow
> - * Pointer to the mlx5_flow.
> - * @param[in] attr
> - * Pointer to the flow attributes.
> - * @param[out] error
> - * Pointer to the error structure.
> - *
> - * @return
> - * 0 on success, a negative errno value otherwise and rte_errno is set.
> - */
> -static int
> -flow_dv_create_action(struct rte_eth_dev *dev,
> - const struct rte_flow_action *action,
> - struct mlx5_flow *dev_flow,
> - const struct rte_flow_attr *attr,
> - struct rte_flow_error *error)
> -{
> - const struct rte_flow_action_queue *queue;
> - const struct rte_flow_action_rss *rss;
> - int actions_n = dev_flow->dv.actions_n;
> - struct rte_flow *flow = dev_flow->flow;
> - const struct rte_flow_action *action_ptr = action;
> -
> - switch (action->type) {
> - case RTE_FLOW_ACTION_TYPE_VOID:
> - break;
> - case RTE_FLOW_ACTION_TYPE_FLAG:
> - dev_flow->dv.actions[actions_n].type = MLX5DV_FLOW_ACTION_TAG;
> - dev_flow->dv.actions[actions_n].tag_value =
> - mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT);
> - actions_n++;
> - flow->actions |= MLX5_FLOW_ACTION_FLAG;
> - break;
> - case RTE_FLOW_ACTION_TYPE_MARK:
> - dev_flow->dv.actions[actions_n].type = MLX5DV_FLOW_ACTION_TAG;
> - dev_flow->dv.actions[actions_n].tag_value =
> - mlx5_flow_mark_set
> - (((const struct rte_flow_action_mark *)
> - (action->conf))->id);
> - flow->actions |= MLX5_FLOW_ACTION_MARK;
> - actions_n++;
> - break;
> - case RTE_FLOW_ACTION_TYPE_DROP:
> - dev_flow->dv.actions[actions_n].type = MLX5DV_FLOW_ACTION_DROP;
> - flow->actions |= MLX5_FLOW_ACTION_DROP;
> - break;
> - case RTE_FLOW_ACTION_TYPE_QUEUE:
> - queue = action->conf;
> - flow->rss.queue_num = 1;
> - (*flow->queue)[0] = queue->index;
> - flow->actions |= MLX5_FLOW_ACTION_QUEUE;
> - break;
> - case RTE_FLOW_ACTION_TYPE_RSS:
> - rss = action->conf;
> - if (flow->queue)
> - memcpy((*flow->queue), rss->queue,
> - rss->queue_num * sizeof(uint16_t));
> - flow->rss.queue_num = rss->queue_num;
> - memcpy(flow->key, rss->key, MLX5_RSS_HASH_KEY_LEN);
> - flow->rss.types = rss->types;
> - flow->rss.level = rss->level;
> - /* Added to array only in apply since we need the QP */
> - flow->actions |= MLX5_FLOW_ACTION_RSS;
> - break;
> - case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
> - case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
> - if (flow_dv_create_action_l2_encap(dev, action,
> - dev_flow, error))
> - return -rte_errno;
> - dev_flow->dv.actions[actions_n].type =
> - MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
> - dev_flow->dv.actions[actions_n].action =
> - dev_flow->dv.encap_decap->verbs_action;
> - flow->actions |= action->type ==
> - RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP ?
> - MLX5_FLOW_ACTION_VXLAN_ENCAP :
> - MLX5_FLOW_ACTION_NVGRE_ENCAP;
> - actions_n++;
> - break;
> - case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
> - case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP:
> - if (flow_dv_create_action_l2_decap(dev, dev_flow, error))
> - return -rte_errno;
> - dev_flow->dv.actions[actions_n].type =
> - MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
> - dev_flow->dv.actions[actions_n].action =
> - dev_flow->dv.encap_decap->verbs_action;
> - flow->actions |= action->type ==
> - RTE_FLOW_ACTION_TYPE_VXLAN_DECAP ?
> - MLX5_FLOW_ACTION_VXLAN_DECAP :
> - MLX5_FLOW_ACTION_NVGRE_DECAP;
> - actions_n++;
> - break;
> - case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
> - /* Handle encap action with preceding decap */
> - if (flow->actions & MLX5_FLOW_ACTION_RAW_DECAP) {
> - if (flow_dv_create_action_raw_encap(dev, action,
> - dev_flow,
> - attr, error))
> - return -rte_errno;
> - dev_flow->dv.actions[actions_n].type =
> - MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
> - dev_flow->dv.actions[actions_n].action =
> - dev_flow->dv.encap_decap->verbs_action;
> - } else {
> - /* Handle encap action without preceding decap */
> - if (flow_dv_create_action_l2_encap(dev, action,
> - dev_flow, error))
> - return -rte_errno;
> - dev_flow->dv.actions[actions_n].type =
> - MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
> - dev_flow->dv.actions[actions_n].action =
> - dev_flow->dv.encap_decap->verbs_action;
> - }
> - flow->actions |= MLX5_FLOW_ACTION_RAW_ENCAP;
> - actions_n++;
> - break;
> - case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
> - /* Check if this decap action is followed by encap. */
> - for (; action_ptr->type != RTE_FLOW_ACTION_TYPE_END &&
> - action_ptr->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP;
> - action_ptr++) {
> - }
> - /* Handle decap action only if it isn't followed by encap */
> - if (action_ptr->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP) {
> - if (flow_dv_create_action_l2_decap(dev, dev_flow,
> - error))
> - return -rte_errno;
> - dev_flow->dv.actions[actions_n].type =
> - MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
> - dev_flow->dv.actions[actions_n].action =
> - dev_flow->dv.encap_decap->verbs_action;
> - actions_n++;
> - }
> - /* If decap is followed by encap, handle it at encap case. */
> - flow->actions |= MLX5_FLOW_ACTION_RAW_DECAP;
> - break;
> - default:
> - break;
> - }
> - dev_flow->dv.actions_n = actions_n;
> - return 0;
> -}
> -
> static uint32_t matcher_zero[MLX5_ST_SZ_DW(fte_match_param)] = { 0 };
>
> #define HEADER_IS_ZERO(match_criteria, headers) \
> @@ -1985,34 +1743,255 @@ flow_dv_translate(struct rte_eth_dev *dev,
> struct rte_flow_error *error)
> {
> struct priv *priv = dev->data->dev_private;
> + struct rte_flow *flow = dev_flow->flow;
> + uint64_t item_flags = 0;
> + uint64_t action_flags = 0;
> uint64_t priority = attr->priority;
> struct mlx5_flow_dv_matcher matcher = {
> .mask = {
> .size = sizeof(matcher.mask.buf),
> },
> };
> - void *match_value = dev_flow->dv.value.buf;
> - int tunnel = 0;
> + int actions_n = 0;
>
> if (priority == MLX5_FLOW_PRIO_RSVD)
> priority = priv->config.flow_prio - 1;
> for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
> - tunnel = !!(dev_flow->layers & MLX5_FLOW_LAYER_TUNNEL);
> - flow_dv_create_item(&matcher, match_value, items, dev_flow,
> - tunnel);
> + int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
> + void *match_mask = matcher.mask.buf;
> + void *match_value = dev_flow->dv.value.buf;
> +
> + switch (items->type) {
> + case RTE_FLOW_ITEM_TYPE_ETH:
> + flow_dv_translate_item_eth(match_mask, match_value,
> + items, tunnel);
> + matcher.priority = MLX5_PRIORITY_MAP_L2;
> + item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L2 :
> + MLX5_FLOW_LAYER_OUTER_L2;
> + break;
> + case RTE_FLOW_ITEM_TYPE_VLAN:
> + flow_dv_translate_item_vlan(match_mask, match_value,
> + items, tunnel);
> + item_flags |= tunnel ? (MLX5_FLOW_LAYER_INNER_L2 |
> + MLX5_FLOW_LAYER_INNER_VLAN) :
> + (MLX5_FLOW_LAYER_OUTER_L2 |
> + MLX5_FLOW_LAYER_OUTER_VLAN);
> + break;
> + case RTE_FLOW_ITEM_TYPE_IPV4:
> + flow_dv_translate_item_ipv4(match_mask, match_value,
> + items, tunnel);
> + matcher.priority = MLX5_PRIORITY_MAP_L3;
> + dev_flow->dv.hash_fields |=
> + mlx5_flow_hashfields_adjust
> + (dev_flow, tunnel,
> + MLX5_IPV4_LAYER_TYPES,
> + MLX5_IPV4_IBV_RX_HASH);
> + item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
> + MLX5_FLOW_LAYER_OUTER_L3_IPV4;
> + break;
> + case RTE_FLOW_ITEM_TYPE_IPV6:
> + flow_dv_translate_item_ipv6(match_mask, match_value,
> + items, tunnel);
> + matcher.priority = MLX5_PRIORITY_MAP_L3;
> + dev_flow->dv.hash_fields |=
> + mlx5_flow_hashfields_adjust
> + (dev_flow, tunnel,
> + MLX5_IPV6_LAYER_TYPES,
> + MLX5_IPV6_IBV_RX_HASH);
> + item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 :
> + MLX5_FLOW_LAYER_OUTER_L3_IPV6;
> + break;
> + case RTE_FLOW_ITEM_TYPE_TCP:
> + flow_dv_translate_item_tcp(match_mask, match_value,
> + items, tunnel);
> + matcher.priority = MLX5_PRIORITY_MAP_L4;
> + dev_flow->dv.hash_fields |=
> + mlx5_flow_hashfields_adjust
> + (dev_flow, tunnel, ETH_RSS_TCP,
> + IBV_RX_HASH_SRC_PORT_TCP |
> + IBV_RX_HASH_DST_PORT_TCP);
> + item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP :
> + MLX5_FLOW_LAYER_OUTER_L4_TCP;
> + break;
> + case RTE_FLOW_ITEM_TYPE_UDP:
> + flow_dv_translate_item_udp(match_mask, match_value,
> + items, tunnel);
> + matcher.priority = MLX5_PRIORITY_MAP_L4;
> + dev_flow->verbs.hash_fields |=
> + mlx5_flow_hashfields_adjust
> + (dev_flow, tunnel, ETH_RSS_UDP,
> + IBV_RX_HASH_SRC_PORT_UDP |
> + IBV_RX_HASH_DST_PORT_UDP);
> + item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
> + MLX5_FLOW_LAYER_OUTER_L4_UDP;
> + break;
> + case RTE_FLOW_ITEM_TYPE_GRE:
> + flow_dv_translate_item_gre(match_mask, match_value,
> + items, tunnel);
> + item_flags |= MLX5_FLOW_LAYER_GRE;
> + break;
> + case RTE_FLOW_ITEM_TYPE_NVGRE:
> + flow_dv_translate_item_nvgre(match_mask, match_value,
> + items, tunnel);
> + item_flags |= MLX5_FLOW_LAYER_GRE;
> + break;
> + case RTE_FLOW_ITEM_TYPE_VXLAN:
> + flow_dv_translate_item_vxlan(match_mask, match_value,
> + items, tunnel);
> + item_flags |= MLX5_FLOW_LAYER_VXLAN;
> + break;
> + case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
> + flow_dv_translate_item_vxlan(match_mask, match_value,
> + items, tunnel);
> + item_flags |= MLX5_FLOW_LAYER_VXLAN_GPE;
> + break;
> + case RTE_FLOW_ITEM_TYPE_META:
> + flow_dv_translate_item_meta(match_mask, match_value,
> + items);
> + item_flags |= MLX5_FLOW_ITEM_METADATA;
> + break;
> + default:
> + break;
> + }
> }
> + dev_flow->layers = item_flags;
> + /* Register matcher. */
> matcher.crc = rte_raw_cksum((const void *)matcher.mask.buf,
> - matcher.mask.size);
> - if (priority == MLX5_FLOW_PRIO_RSVD)
> - priority = priv->config.flow_prio - 1;
> + matcher.mask.size);
> matcher.priority = mlx5_flow_adjust_priority(dev, priority,
> matcher.priority);
> matcher.egress = attr->egress;
> if (flow_dv_matcher_register(dev, &matcher, dev_flow, error))
> return -rte_errno;
> - for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++)
> - if (flow_dv_create_action(dev, actions, dev_flow, attr, error))
> - return -rte_errno;
> + for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
> + const struct rte_flow_action_queue *queue;
> + const struct rte_flow_action_rss *rss;
> + const struct rte_flow_action *action = actions;
> +
> + switch (actions->type) {
> + case RTE_FLOW_ACTION_TYPE_VOID:
> + break;
> + case RTE_FLOW_ACTION_TYPE_FLAG:
> + dev_flow->dv.actions[actions_n].type =
> + MLX5DV_FLOW_ACTION_TAG;
> + dev_flow->dv.actions[actions_n].tag_value =
> + mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT);
> + actions_n++;
> + action_flags |= MLX5_FLOW_ACTION_FLAG;
> + break;
> + case RTE_FLOW_ACTION_TYPE_MARK:
> + dev_flow->dv.actions[actions_n].type =
> + MLX5DV_FLOW_ACTION_TAG;
> + dev_flow->dv.actions[actions_n].tag_value =
> + mlx5_flow_mark_set
> + (((const struct rte_flow_action_mark *)
> + (actions->conf))->id);
> + actions_n++;
> + action_flags |= MLX5_FLOW_ACTION_MARK;
> + break;
> + case RTE_FLOW_ACTION_TYPE_DROP:
> + dev_flow->dv.actions[actions_n].type =
> + MLX5DV_FLOW_ACTION_DROP;
> + action_flags |= MLX5_FLOW_ACTION_DROP;
> + break;
> + case RTE_FLOW_ACTION_TYPE_QUEUE:
> + queue = actions->conf;
> + flow->rss.queue_num = 1;
> + (*flow->queue)[0] = queue->index;
> + action_flags |= MLX5_FLOW_ACTION_QUEUE;
> + break;
> + case RTE_FLOW_ACTION_TYPE_RSS:
> + rss = actions->conf;
> + if (flow->queue)
> + memcpy((*flow->queue), rss->queue,
> + rss->queue_num * sizeof(uint16_t));
> + flow->rss.queue_num = rss->queue_num;
> + memcpy(flow->key, rss->key, MLX5_RSS_HASH_KEY_LEN);
> + flow->rss.types = rss->types;
> + flow->rss.level = rss->level;
> + action_flags |= MLX5_FLOW_ACTION_RSS;
> + break;
> + case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
> + case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
> + if (flow_dv_create_action_l2_encap(dev, actions,
> + dev_flow, error))
> + return -rte_errno;
> + dev_flow->dv.actions[actions_n].type =
> + MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
> + dev_flow->dv.actions[actions_n].action =
> + dev_flow->dv.encap_decap->verbs_action;
> + actions_n++;
> + action_flags |= actions->type ==
> + RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP ?
> + MLX5_FLOW_ACTION_VXLAN_ENCAP :
> + MLX5_FLOW_ACTION_NVGRE_ENCAP;
> + break;
> + case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
> + case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP:
> + if (flow_dv_create_action_l2_decap(dev, dev_flow,
> + error))
> + return -rte_errno;
> + dev_flow->dv.actions[actions_n].type =
> + MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
> + dev_flow->dv.actions[actions_n].action =
> + dev_flow->dv.encap_decap->verbs_action;
> + actions_n++;
> + action_flags |= actions->type ==
> + RTE_FLOW_ACTION_TYPE_VXLAN_DECAP ?
> + MLX5_FLOW_ACTION_VXLAN_DECAP :
> + MLX5_FLOW_ACTION_NVGRE_DECAP;
> + break;
> + case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
> + /* Handle encap with preceding decap. */
> + if (action_flags & MLX5_FLOW_ACTION_RAW_DECAP) {
> + if (flow_dv_create_action_raw_encap
> + (dev, actions, dev_flow, attr, error))
> + return -rte_errno;
> + dev_flow->dv.actions[actions_n].type =
> + MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
> + dev_flow->dv.actions[actions_n].action =
> + dev_flow->dv.encap_decap->verbs_action;
> + } else {
> + /* Handle encap without preceding decap. */
> + if (flow_dv_create_action_l2_encap(dev, actions,
> + dev_flow, error))
> + return -rte_errno;
> + dev_flow->dv.actions[actions_n].type =
> + MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
> + dev_flow->dv.actions[actions_n].action =
> + dev_flow->dv.encap_decap->verbs_action;
> + }
> + actions_n++;
> + action_flags |= MLX5_FLOW_ACTION_RAW_ENCAP;
> + break;
> + case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
> + /* Check if this decap is followed by encap. */
> + for (; action->type != RTE_FLOW_ACTION_TYPE_END &&
> + action->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP;
> + action++) {
> + }
> + /* Handle decap only if it isn't followed by encap. */
> + if (action->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP) {
> + if (flow_dv_create_action_l2_decap(dev, dev_flow,
> + error))
> + return -rte_errno;
> + dev_flow->dv.actions[actions_n].type =
> + MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
> + dev_flow->dv.actions[actions_n].action =
> + dev_flow->dv.encap_decap->verbs_action;
> + actions_n++;
> + }
> + /* If decap is followed by encap, handle it at encap. */
> + action_flags |= MLX5_FLOW_ACTION_RAW_DECAP;
> + break;
> + default:
> + break;
> + }
> + }
> + dev_flow->dv.actions_n = actions_n;
> + flow->actions = action_flags;
> return 0;
> }
>
> --
> 2.11.0
Best,
Ori
^ permalink raw reply [flat|nested] 17+ messages in thread
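The RAW_DECAP lookahead in the hunk above — scan ahead so a decap immediately merged with a later raw encap is not translated twice — can be sketched in isolation. This is a minimal, self-contained model; the `enum action_type` values and the function name are hypothetical stand-ins for the `RTE_FLOW_ACTION_TYPE_*` constants, not the actual mlx5 code:

```c
#include <assert.h>

/* Hypothetical stand-ins for RTE_FLOW_ACTION_TYPE_*. */
enum action_type {
	ACT_END,
	ACT_QUEUE,
	ACT_RAW_DECAP,
	ACT_RAW_ENCAP,
};

/*
 * Model of the lookahead in flow_dv_translate(): starting from the
 * RAW_DECAP action, scan forward until the end of the action list or
 * a RAW_ENCAP. If a RAW_ENCAP follows, the decap is deferred and
 * handled together with the encap as one rewrite; otherwise the decap
 * stands alone and is translated immediately.
 */
static int
raw_decap_is_standalone(const enum action_type *action)
{
	for (; *action != ACT_END && *action != ACT_RAW_ENCAP; action++)
		;
	return *action == ACT_END;
}
```

With this shape, the decap case translates only when the scan reaches `ACT_END`, mirroring the `if (action->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP)` check in the patch.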
* Re: [dpdk-dev] [PATCH 3/3] net/mlx5: remove flags setting from flow preparation
2018-11-02 21:08 ` [dpdk-dev] [PATCH 3/3] net/mlx5: remove flags setting from flow preparation Yongseok Koh
@ 2018-11-04 8:29 ` Ori Kam
2018-11-05 5:39 ` Yongseok Koh
0 siblings, 1 reply; 17+ messages in thread
From: Ori Kam @ 2018-11-04 8:29 UTC (permalink / raw)
To: Yongseok Koh, Shahaf Shuler; +Cc: dev, Yongseok Koh, Ori Kam
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Yongseok Koh
> Sent: Friday, November 2, 2018 11:08 PM
> To: Shahaf Shuler <shahafs@mellanox.com>
> Cc: dev@dpdk.org; Yongseok Koh <yskoh@mellanox.com>
> Subject: [dpdk-dev] [PATCH 3/3] net/mlx5: remove flags setting from flow
> preparation
>
> Even though flow_drv_prepare() takes item_flags and action_flags to be
> filled in, those are not used and will be overwritten by the parsing in
> flow_drv_translate(). There's no reason to keep and fill the flags.
> Appropriate notes are added to the documentation of flow_drv_prepare() and
> flow_drv_translate().
>
> Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
> ---
> drivers/net/mlx5/mlx5_flow.c | 38 ++++++++++++--------------
> drivers/net/mlx5/mlx5_flow.h | 3 +--
> drivers/net/mlx5/mlx5_flow_dv.c | 6 -----
> drivers/net/mlx5/mlx5_flow_tcf.c | 55 +++++---------------------------------
> drivers/net/mlx5/mlx5_flow_verbs.c | 52 +++--------------------------------
> 5 files changed, 29 insertions(+), 125 deletions(-)
>
> diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
> index 107a4f02f8..fae3bc92dd 100644
> --- a/drivers/net/mlx5/mlx5_flow.c
> +++ b/drivers/net/mlx5/mlx5_flow.c
> @@ -1657,8 +1657,6 @@ static struct mlx5_flow *
> flow_null_prepare(const struct rte_flow_attr *attr __rte_unused,
> const struct rte_flow_item items[] __rte_unused,
> const struct rte_flow_action actions[] __rte_unused,
> - uint64_t *item_flags __rte_unused,
> - uint64_t *action_flags __rte_unused,
> struct rte_flow_error *error __rte_unused)
> {
> rte_errno = ENOTSUP;
> @@ -1786,16 +1784,19 @@ flow_drv_validate(struct rte_eth_dev *dev,
> * calculates the size of memory required for device flow, allocates the memory,
> * initializes the device flow and returns the pointer.
> *
> + * @note
> + * This function initializes the device flow structure such as dv, tcf or
> + * verbs in struct mlx5_flow. However, it is the caller's responsibility to
> + * initialize the rest. For example, adding the returned device flow to the
> + * flow->dev_flows list and setting the backward reference to the flow should
> + * be done outside of this function. The layers field is not filled either.
> + *
> * @param[in] attr
> * Pointer to the flow attributes.
> * @param[in] items
> * Pointer to the list of items.
> * @param[in] actions
> * Pointer to the list of actions.
> - * @param[out] item_flags
> - * Pointer to bit mask of all items detected.
> - * @param[out] action_flags
> - * Pointer to bit mask of all actions detected.
> * @param[out] error
> * Pointer to the error structure.
> *
> @@ -1803,12 +1804,10 @@ flow_drv_validate(struct rte_eth_dev *dev,
> * Pointer to device flow on success, otherwise NULL and rte_errno is set.
> */
> static inline struct mlx5_flow *
> -flow_drv_prepare(struct rte_flow *flow,
> +flow_drv_prepare(const struct rte_flow *flow,
> const struct rte_flow_attr *attr,
> const struct rte_flow_item items[],
> const struct rte_flow_action actions[],
> - uint64_t *item_flags,
> - uint64_t *action_flags,
> struct rte_flow_error *error)
> {
> const struct mlx5_flow_driver_ops *fops;
> @@ -1816,8 +1815,7 @@ flow_drv_prepare(struct rte_flow *flow,
>
> assert(type > MLX5_FLOW_TYPE_MIN && type < MLX5_FLOW_TYPE_MAX);
> fops = flow_get_drv_ops(type);
> - return fops->prepare(attr, items, actions, item_flags, action_flags,
> - error);
> + return fops->prepare(attr, items, actions, error);
> }
>
> /**
> @@ -1826,6 +1824,12 @@ flow_drv_prepare(struct rte_flow *flow,
> * translates a generic flow into a driver flow. flow_drv_prepare() must
> * precede.
> *
> + * @note
> + * dev_flow->layers could be filled as a result of parsing during translation
> + * if needed by flow_drv_apply(). dev_flow->flow->actions can also be filled
> + * if necessary. As a flow can have multiple dev_flows by RSS flow expansion,
> + * flow->actions could be overwritten even though all the expanded dev_flows
> + * have the same actions.
> *
> * @param[in] dev
> * Pointer to the rte dev structure.
> @@ -1889,7 +1893,7 @@ flow_drv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
> * Flow driver remove API. This abstracts calling driver specific functions.
> * Parent flow (rte_flow) should have driver type (drv_type). It removes a flow
> * on device. All the resources of the flow should be freed by calling
> - * flow_dv_destroy().
> + * flow_drv_destroy().
> *
> * @param[in] dev
> * Pointer to Ethernet device.
> @@ -2020,8 +2024,6 @@ flow_list_create(struct rte_eth_dev *dev, struct mlx5_flows *list,
> {
> struct rte_flow *flow = NULL;
> struct mlx5_flow *dev_flow;
> - uint64_t action_flags = 0;
> - uint64_t item_flags = 0;
> const struct rte_flow_action_rss *rss;
> union {
> struct rte_flow_expand_rss buf;
> @@ -2064,16 +2066,10 @@ flow_list_create(struct rte_eth_dev *dev, struct mlx5_flows *list,
> }
> for (i = 0; i < buf->entries; ++i) {
> dev_flow = flow_drv_prepare(flow, attr, buf->entry[i].pattern,
> - actions, &item_flags, &action_flags,
> - error);
> + actions, error);
> if (!dev_flow)
> goto error;
> dev_flow->flow = flow;
> - dev_flow->layers = item_flags;
> - /* Store actions once as expanded flows have same actions. */
> - if (i == 0)
> - flow->actions = action_flags;
> - assert(flow->actions == action_flags);
> LIST_INSERT_HEAD(&flow->dev_flows, dev_flow, next);
> ret = flow_drv_translate(dev, dev_flow, attr,
> buf->entry[i].pattern,
> diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
> index fadde552c2..f976bff427 100644
> --- a/drivers/net/mlx5/mlx5_flow.h
> +++ b/drivers/net/mlx5/mlx5_flow.h
> @@ -293,8 +293,7 @@ typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev,
> struct rte_flow_error *error);
> typedef struct mlx5_flow *(*mlx5_flow_prepare_t)
> (const struct rte_flow_attr *attr, const struct rte_flow_item items[],
> - const struct rte_flow_action actions[], uint64_t *item_flags,
> - uint64_t *action_flags, struct rte_flow_error *error);
> + const struct rte_flow_action actions[], struct rte_flow_error *error);
> typedef int (*mlx5_flow_translate_t)(struct rte_eth_dev *dev,
> struct mlx5_flow *dev_flow,
> const struct rte_flow_attr *attr,
> diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
> index 44e2a920eb..0fb791eafa 100644
> --- a/drivers/net/mlx5/mlx5_flow_dv.c
> +++ b/drivers/net/mlx5/mlx5_flow_dv.c
> @@ -1014,10 +1014,6 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
> * Pointer to the list of items.
> * @param[in] actions
> * Pointer to the list of actions.
> - * @param[out] item_flags
> - * Pointer to bit mask of all items detected.
> - * @param[out] action_flags
> - * Pointer to bit mask of all actions detected.
> * @param[out] error
> * Pointer to the error structure.
> *
> @@ -1029,8 +1025,6 @@ static struct mlx5_flow *
> flow_dv_prepare(const struct rte_flow_attr *attr __rte_unused,
> const struct rte_flow_item items[] __rte_unused,
> const struct rte_flow_action actions[] __rte_unused,
> - uint64_t *item_flags __rte_unused,
> - uint64_t *action_flags __rte_unused,
> struct rte_flow_error *error)
> {
> uint32_t size = sizeof(struct mlx5_flow);
> diff --git a/drivers/net/mlx5/mlx5_flow_tcf.c b/drivers/net/mlx5/mlx5_flow_tcf.c
> index 719fb10632..483e490843 100644
> --- a/drivers/net/mlx5/mlx5_flow_tcf.c
> +++ b/drivers/net/mlx5/mlx5_flow_tcf.c
> @@ -664,20 +664,15 @@ flow_tcf_create_pedit_mnl_msg(struct nlmsghdr *nl,
> *
> * @param[in,out] actions
> * actions specification.
> - * @param[in,out] action_flags
> - * actions flags
> - * @param[in,out] size
> - * accumulated size
> + *
> * @return
> * Max memory size of one TC-pedit action
> */
> static int
> -flow_tcf_get_pedit_actions_size(const struct rte_flow_action **actions,
> - uint64_t *action_flags)
> +flow_tcf_get_pedit_actions_size(const struct rte_flow_action **actions)
> {
> int pedit_size = 0;
> int keys = 0;
> - uint64_t flags = 0;
>
> pedit_size += SZ_NLATTR_NEST + /* na_act_index. */
> SZ_NLATTR_STRZ_OF("pedit") +
> @@ -686,45 +681,35 @@ flow_tcf_get_pedit_actions_size(const struct rte_flow_action **actions,
> switch ((*actions)->type) {
> case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC:
> keys += NUM_OF_PEDIT_KEYS(IPV4_ADDR_LEN);
> - flags |= MLX5_FLOW_ACTION_SET_IPV4_SRC;
> break;
> case RTE_FLOW_ACTION_TYPE_SET_IPV4_DST:
> keys += NUM_OF_PEDIT_KEYS(IPV4_ADDR_LEN);
> - flags |= MLX5_FLOW_ACTION_SET_IPV4_DST;
> break;
> case RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC:
> keys += NUM_OF_PEDIT_KEYS(IPV6_ADDR_LEN);
> - flags |= MLX5_FLOW_ACTION_SET_IPV6_SRC;
> break;
> case RTE_FLOW_ACTION_TYPE_SET_IPV6_DST:
> keys += NUM_OF_PEDIT_KEYS(IPV6_ADDR_LEN);
> - flags |= MLX5_FLOW_ACTION_SET_IPV6_DST;
> break;
> case RTE_FLOW_ACTION_TYPE_SET_TP_SRC:
> /* TCP is as same as UDP */
> keys += NUM_OF_PEDIT_KEYS(TP_PORT_LEN);
> - flags |= MLX5_FLOW_ACTION_SET_TP_SRC;
> break;
> case RTE_FLOW_ACTION_TYPE_SET_TP_DST:
> /* TCP is as same as UDP */
> keys += NUM_OF_PEDIT_KEYS(TP_PORT_LEN);
> - flags |= MLX5_FLOW_ACTION_SET_TP_DST;
> break;
> case RTE_FLOW_ACTION_TYPE_SET_TTL:
> keys += NUM_OF_PEDIT_KEYS(TTL_LEN);
> - flags |= MLX5_FLOW_ACTION_SET_TTL;
> break;
> case RTE_FLOW_ACTION_TYPE_DEC_TTL:
> keys += NUM_OF_PEDIT_KEYS(TTL_LEN);
> - flags |= MLX5_FLOW_ACTION_DEC_TTL;
> break;
> case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC:
> keys += NUM_OF_PEDIT_KEYS(ETHER_ADDR_LEN);
> - flags |= MLX5_FLOW_ACTION_SET_MAC_SRC;
> break;
> case RTE_FLOW_ACTION_TYPE_SET_MAC_DST:
> keys += NUM_OF_PEDIT_KEYS(ETHER_ADDR_LEN);
> - flags |= MLX5_FLOW_ACTION_SET_MAC_DST;
> break;
> default:
> goto get_pedit_action_size_done;
> @@ -740,7 +725,6 @@ flow_tcf_get_pedit_actions_size(const struct rte_flow_action **actions,
> /* TCA_PEDIT_KEY_EX + HTYPE + CMD */
> (SZ_NLATTR_NEST + SZ_NLATTR_DATA_OF(2) +
> SZ_NLATTR_DATA_OF(2));
> - (*action_flags) |= flags;
> (*actions)--;
> return pedit_size;
> }
> @@ -1415,11 +1399,9 @@ flow_tcf_validate(struct rte_eth_dev *dev,
> */
> static int
> flow_tcf_get_items_and_size(const struct rte_flow_attr *attr,
> - const struct rte_flow_item items[],
> - uint64_t *item_flags)
> + const struct rte_flow_item items[])
> {
> int size = 0;
> - uint64_t flags = 0;
>
> size += SZ_NLATTR_STRZ_OF("flower") +
> SZ_NLATTR_NEST + /* TCA_OPTIONS. */
> @@ -1436,7 +1418,6 @@ flow_tcf_get_items_and_size(const struct rte_flow_attr *attr,
> size += SZ_NLATTR_TYPE_OF(uint16_t) + /* Ether type. */
> SZ_NLATTR_DATA_OF(ETHER_ADDR_LEN) * 4;
> /* dst/src MAC addr and mask. */
> - flags |= MLX5_FLOW_LAYER_OUTER_L2;
> break;
> case RTE_FLOW_ITEM_TYPE_VLAN:
> size += SZ_NLATTR_TYPE_OF(uint16_t) + /* Ether type. */
> @@ -1444,33 +1425,28 @@ flow_tcf_get_items_and_size(const struct rte_flow_attr *attr,
> /* VLAN Ether type. */
> SZ_NLATTR_TYPE_OF(uint8_t) + /* VLAN prio. */
> SZ_NLATTR_TYPE_OF(uint16_t); /* VLAN ID. */
> - flags |= MLX5_FLOW_LAYER_OUTER_VLAN;
> break;
> case RTE_FLOW_ITEM_TYPE_IPV4:
> size += SZ_NLATTR_TYPE_OF(uint16_t) + /* Ether type. */
> SZ_NLATTR_TYPE_OF(uint8_t) + /* IP proto. */
> SZ_NLATTR_TYPE_OF(uint32_t) * 4;
> /* dst/src IP addr and mask. */
> - flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV4;
> break;
> case RTE_FLOW_ITEM_TYPE_IPV6:
> size += SZ_NLATTR_TYPE_OF(uint16_t) + /* Ether type. */
> SZ_NLATTR_TYPE_OF(uint8_t) + /* IP proto. */
> SZ_NLATTR_TYPE_OF(IPV6_ADDR_LEN) * 4;
> /* dst/src IP addr and mask. */
> - flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV6;
> break;
> case RTE_FLOW_ITEM_TYPE_UDP:
> size += SZ_NLATTR_TYPE_OF(uint8_t) + /* IP proto. */
> SZ_NLATTR_TYPE_OF(uint16_t) * 4;
> /* dst/src port and mask. */
> - flags |= MLX5_FLOW_LAYER_OUTER_L4_UDP;
> break;
> case RTE_FLOW_ITEM_TYPE_TCP:
> size += SZ_NLATTR_TYPE_OF(uint8_t) + /* IP proto. */
> SZ_NLATTR_TYPE_OF(uint16_t) * 4;
> /* dst/src port and mask. */
> - flags |= MLX5_FLOW_LAYER_OUTER_L4_TCP;
> break;
> default:
> DRV_LOG(WARNING,
> @@ -1480,7 +1456,6 @@ flow_tcf_get_items_and_size(const struct rte_flow_attr *attr,
> break;
> }
> }
> - *item_flags = flags;
> return size;
> }
>
> @@ -1497,11 +1472,9 @@ flow_tcf_get_items_and_size(const struct rte_flow_attr *attr,
> * Maximum size of memory for actions.
> */
> static int
> -flow_tcf_get_actions_and_size(const struct rte_flow_action actions[],
> - uint64_t *action_flags)
> +flow_tcf_get_actions_and_size(const struct rte_flow_action actions[])
> {
> int size = 0;
> - uint64_t flags = 0;
>
> size += SZ_NLATTR_NEST; /* TCA_FLOWER_ACT. */
> for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
> @@ -1513,35 +1486,28 @@ flow_tcf_get_actions_and_size(const struct rte_flow_action actions[],
> SZ_NLATTR_STRZ_OF("mirred") +
> SZ_NLATTR_NEST + /* TCA_ACT_OPTIONS. */
> SZ_NLATTR_TYPE_OF(struct tc_mirred);
> - flags |= MLX5_FLOW_ACTION_PORT_ID;
> break;
> case RTE_FLOW_ACTION_TYPE_JUMP:
> size += SZ_NLATTR_NEST + /* na_act_index. */
> SZ_NLATTR_STRZ_OF("gact") +
> SZ_NLATTR_NEST + /* TCA_ACT_OPTIONS. */
> SZ_NLATTR_TYPE_OF(struct tc_gact);
> - flags |= MLX5_FLOW_ACTION_JUMP;
> break;
> case RTE_FLOW_ACTION_TYPE_DROP:
> size += SZ_NLATTR_NEST + /* na_act_index. */
> SZ_NLATTR_STRZ_OF("gact") +
> SZ_NLATTR_NEST + /* TCA_ACT_OPTIONS. */
> SZ_NLATTR_TYPE_OF(struct tc_gact);
> - flags |= MLX5_FLOW_ACTION_DROP;
> break;
> case RTE_FLOW_ACTION_TYPE_COUNT:
> break;
> case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN:
> - flags |= MLX5_FLOW_ACTION_OF_POP_VLAN;
> goto action_of_vlan;
> case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN:
> - flags |= MLX5_FLOW_ACTION_OF_PUSH_VLAN;
> goto action_of_vlan;
> case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID:
> - flags |= MLX5_FLOW_ACTION_OF_SET_VLAN_VID;
> goto action_of_vlan;
> case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP:
> - flags |= MLX5_FLOW_ACTION_OF_SET_VLAN_PCP;
> goto action_of_vlan;
> action_of_vlan:
> size += SZ_NLATTR_NEST + /* na_act_index. */
> @@ -1563,8 +1529,7 @@ flow_tcf_get_actions_and_size(const struct rte_flow_action actions[],
> case RTE_FLOW_ACTION_TYPE_DEC_TTL:
> case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC:
> case RTE_FLOW_ACTION_TYPE_SET_MAC_DST:
> - size += flow_tcf_get_pedit_actions_size(&actions,
> - &flags);
> + size += flow_tcf_get_pedit_actions_size(&actions);
> break;
> default:
> DRV_LOG(WARNING,
> @@ -1574,7 +1539,6 @@ flow_tcf_get_actions_and_size(const struct rte_flow_action actions[],
> break;
> }
> }
> - *action_flags = flags;
> return size;
> }
>
> @@ -1610,10 +1574,6 @@ flow_tcf_nl_brand(struct nlmsghdr *nlh, uint32_t handle)
> * Pointer to the list of items.
> * @param[in] actions
> * Pointer to the list of actions.
> - * @param[out] item_flags
> - * Pointer to bit mask of all items detected.
> - * @param[out] action_flags
> - * Pointer to bit mask of all actions detected.
> * @param[out] error
> * Pointer to the error structure.
> *
> @@ -1625,7 +1585,6 @@ static struct mlx5_flow *
> flow_tcf_prepare(const struct rte_flow_attr *attr,
> const struct rte_flow_item items[],
> const struct rte_flow_action actions[],
> - uint64_t *item_flags, uint64_t *action_flags,
> struct rte_flow_error *error)
> {
> size_t size = sizeof(struct mlx5_flow) +
> @@ -1635,8 +1594,8 @@ flow_tcf_prepare(const struct rte_flow_attr *attr,
> struct nlmsghdr *nlh;
> struct tcmsg *tcm;
>
> - size += flow_tcf_get_items_and_size(attr, items, item_flags);
> - size += flow_tcf_get_actions_and_size(actions, action_flags);
> + size += flow_tcf_get_items_and_size(attr, items);
> + size += flow_tcf_get_actions_and_size(actions);
> dev_flow = rte_zmalloc(__func__, size, MNL_ALIGNTO);
> if (!dev_flow) {
> rte_flow_error_set(error, ENOMEM,
> diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
> index ab58c04db5..453c89e347 100644
> --- a/drivers/net/mlx5/mlx5_flow_verbs.c
> +++ b/drivers/net/mlx5/mlx5_flow_verbs.c
> @@ -1217,11 +1217,9 @@ flow_verbs_validate(struct rte_eth_dev *dev,
> * The size of the memory needed for all actions.
> */
> static int
> -flow_verbs_get_actions_and_size(const struct rte_flow_action actions[],
> - uint64_t *action_flags)
> +flow_verbs_get_actions_and_size(const struct rte_flow_action actions[])
> {
> int size = 0;
> - uint64_t detected_actions = 0;
>
> for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
> switch (actions->type) {
> @@ -1229,34 +1227,27 @@ flow_verbs_get_actions_and_size(const struct rte_flow_action actions[],
> break;
> case RTE_FLOW_ACTION_TYPE_FLAG:
> size += sizeof(struct ibv_flow_spec_action_tag);
> - detected_actions |= MLX5_FLOW_ACTION_FLAG;
> break;
> case RTE_FLOW_ACTION_TYPE_MARK:
> size += sizeof(struct ibv_flow_spec_action_tag);
> - detected_actions |= MLX5_FLOW_ACTION_MARK;
> break;
> case RTE_FLOW_ACTION_TYPE_DROP:
> size += sizeof(struct ibv_flow_spec_action_drop);
> - detected_actions |= MLX5_FLOW_ACTION_DROP;
> break;
> case RTE_FLOW_ACTION_TYPE_QUEUE:
> - detected_actions |= MLX5_FLOW_ACTION_QUEUE;
> break;
> case RTE_FLOW_ACTION_TYPE_RSS:
> - detected_actions |= MLX5_FLOW_ACTION_RSS;
> break;
> case RTE_FLOW_ACTION_TYPE_COUNT:
> #if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42) || \
> defined(HAVE_IBV_DEVICE_COUNTERS_SET_V45)
> size += sizeof(struct ibv_flow_spec_counter_action);
> #endif
> - detected_actions |= MLX5_FLOW_ACTION_COUNT;
> break;
> default:
> break;
> }
> }
> - *action_flags = detected_actions;
> return size;
> }
>
> @@ -1274,83 +1265,54 @@ flow_verbs_get_actions_and_size(const struct rte_flow_action actions[],
> * The size of the memory needed for all items.
> */
> static int
> -flow_verbs_get_items_and_size(const struct rte_flow_item items[],
> - uint64_t *item_flags)
> +flow_verbs_get_items_and_size(const struct rte_flow_item items[])
> {
> int size = 0;
> - uint64_t detected_items = 0;
>
> for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
> - int tunnel = !!(detected_items & MLX5_FLOW_LAYER_TUNNEL);
> -
> switch (items->type) {
> case RTE_FLOW_ITEM_TYPE_VOID:
> break;
> case RTE_FLOW_ITEM_TYPE_ETH:
> size += sizeof(struct ibv_flow_spec_eth);
> - detected_items |= tunnel ? MLX5_FLOW_LAYER_INNER_L2 :
> - MLX5_FLOW_LAYER_OUTER_L2;
> break;
> case RTE_FLOW_ITEM_TYPE_VLAN:
> size += sizeof(struct ibv_flow_spec_eth);
> - detected_items |=
> - tunnel ? (MLX5_FLOW_LAYER_INNER_L2 |
> - MLX5_FLOW_LAYER_INNER_VLAN) :
> - (MLX5_FLOW_LAYER_OUTER_L2 |
> - MLX5_FLOW_LAYER_OUTER_VLAN);
> break;
> case RTE_FLOW_ITEM_TYPE_IPV4:
> size += sizeof(struct ibv_flow_spec_ipv4_ext);
> - detected_items |= tunnel ?
> - MLX5_FLOW_LAYER_INNER_L3_IPV4 :
> - MLX5_FLOW_LAYER_OUTER_L3_IPV4;
> break;
> case RTE_FLOW_ITEM_TYPE_IPV6:
> size += sizeof(struct ibv_flow_spec_ipv6);
> - detected_items |= tunnel ?
> - MLX5_FLOW_LAYER_INNER_L3_IPV6 :
> - MLX5_FLOW_LAYER_OUTER_L3_IPV6;
> break;
> case RTE_FLOW_ITEM_TYPE_UDP:
> size += sizeof(struct ibv_flow_spec_tcp_udp);
> - detected_items |= tunnel ?
> - MLX5_FLOW_LAYER_INNER_L4_UDP :
> - MLX5_FLOW_LAYER_OUTER_L4_UDP;
> break;
> case RTE_FLOW_ITEM_TYPE_TCP:
> size += sizeof(struct ibv_flow_spec_tcp_udp);
> - detected_items |= tunnel ?
> - MLX5_FLOW_LAYER_INNER_L4_TCP :
> - MLX5_FLOW_LAYER_OUTER_L4_TCP;
> break;
> case RTE_FLOW_ITEM_TYPE_VXLAN:
> size += sizeof(struct ibv_flow_spec_tunnel);
> - detected_items |= MLX5_FLOW_LAYER_VXLAN;
> break;
> case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
> size += sizeof(struct ibv_flow_spec_tunnel);
> - detected_items |= MLX5_FLOW_LAYER_VXLAN_GPE;
> break;
> #ifdef HAVE_IBV_DEVICE_MPLS_SUPPORT
> case RTE_FLOW_ITEM_TYPE_GRE:
> size += sizeof(struct ibv_flow_spec_gre);
> - detected_items |= MLX5_FLOW_LAYER_GRE;
> break;
> case RTE_FLOW_ITEM_TYPE_MPLS:
> size += sizeof(struct ibv_flow_spec_mpls);
> - detected_items |= MLX5_FLOW_LAYER_MPLS;
> break;
> #else
> case RTE_FLOW_ITEM_TYPE_GRE:
> size += sizeof(struct ibv_flow_spec_tunnel);
> - detected_items |= MLX5_FLOW_LAYER_TUNNEL;
> break;
> #endif
> default:
> break;
> }
> }
> - *item_flags = detected_items;
> return size;
> }
>
> @@ -1365,10 +1327,6 @@ flow_verbs_get_items_and_size(const struct rte_flow_item items[],
> * Pointer to the list of items.
> * @param[in] actions
> * Pointer to the list of actions.
> - * @param[out] item_flags
> - * Pointer to bit mask of all items detected.
> - * @param[out] action_flags
> - * Pointer to bit mask of all actions detected.
> * @param[out] error
> * Pointer to the error structure.
> *
> @@ -1380,15 +1338,13 @@ static struct mlx5_flow *
> flow_verbs_prepare(const struct rte_flow_attr *attr __rte_unused,
> const struct rte_flow_item items[],
> const struct rte_flow_action actions[],
> - uint64_t *item_flags,
> - uint64_t *action_flags,
> struct rte_flow_error *error)
> {
> uint32_t size = sizeof(struct mlx5_flow) + sizeof(struct ibv_flow_attr);
> struct mlx5_flow *flow;
>
> - size += flow_verbs_get_actions_and_size(actions, action_flags);
> - size += flow_verbs_get_items_and_size(items, item_flags);
> + size += flow_verbs_get_actions_and_size(actions);
I think the function name should be changed since it only returns the size.
> + size += flow_verbs_get_items_and_size(items);
I think the function name should be changed since it only returns the size.
> flow = rte_calloc(__func__, 1, size, 0);
> if (!flow) {
> rte_flow_error_set(error, ENOMEM,
> --
> 2.11.0
Best,
Ori Kam
^ permalink raw reply [flat|nested] 17+ messages in thread
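The dynamic parsing this series moves into translation can be modeled compactly. Below is a minimal sketch under assumed, simplified definitions: the `LAYER_*` constants and `enum item_type` are hypothetical stand-ins for the `MLX5_FLOW_LAYER_*` and `RTE_FLOW_ITEM_TYPE_*` values, not the real mlx5 code. The point is that each item is classified as inner or outer based on whether a tunnel item has already been seen while walking the same pattern:

```c
#include <stdint.h>

/* Hypothetical, simplified layer flags modeled after MLX5_FLOW_LAYER_*. */
#define LAYER_OUTER_L3	(UINT64_C(1) << 0)
#define LAYER_INNER_L3	(UINT64_C(1) << 1)
#define LAYER_VXLAN	(UINT64_C(1) << 2)
#define LAYER_TUNNEL	LAYER_VXLAN

enum item_type { ITEM_END, ITEM_IPV4, ITEM_VXLAN };

/*
 * Walk the pattern once and accumulate layer flags as items are
 * translated: an IPv4 item seen after a tunnel item (VXLAN here) is
 * classified as inner, otherwise as outer. This is the single-pass
 * alternative to consulting a pre-filled dev_flow->layers, which would
 * already contain all layers and misclassify everything as inner.
 */
static uint64_t
parse_item_flags(const enum item_type *items)
{
	uint64_t item_flags = 0;

	for (; *items != ITEM_END; items++) {
		int tunnel = !!(item_flags & LAYER_TUNNEL);

		switch (*items) {
		case ITEM_IPV4:
			item_flags |= tunnel ? LAYER_INNER_L3 : LAYER_OUTER_L3;
			break;
		case ITEM_VXLAN:
			item_flags |= LAYER_VXLAN;
			break;
		default:
			break;
		}
	}
	return item_flags;
}
```

For a pattern `IPv4 / VXLAN / IPv4`, the first IPv4 lands as outer and the second as inner; this is also why the flags computed in prepare are redundant — translate recomputes them anyway.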
* Re: [dpdk-dev] [PATCH 2/3] net/mlx5: fix Direct Verbs flow tunnel
2018-11-04 8:22 ` Ori Kam
@ 2018-11-05 5:37 ` Yongseok Koh
2018-11-05 6:08 ` Ori Kam
0 siblings, 1 reply; 17+ messages in thread
From: Yongseok Koh @ 2018-11-05 5:37 UTC (permalink / raw)
To: Ori Kam; +Cc: Shahaf Shuler, dev
On Sun, Nov 04, 2018 at 01:22:34AM -0700, Ori Kam wrote:
>
> > -----Original Message-----
> > From: Yongseok Koh
> > Sent: Friday, November 2, 2018 11:08 PM
> > To: Shahaf Shuler <shahafs@mellanox.com>
> > Cc: dev@dpdk.org; Yongseok Koh <yskoh@mellanox.com>; Ori Kam
> > <orika@mellanox.com>
> > Subject: [PATCH 2/3] net/mlx5: fix Direct Verbs flow tunnel
> >
> > 1) Fix layer parsing
> > In translation of tunneled flows, dev_flow->layers must not be used to
> > check tunneled layer as it contains all the layers parsed from
> > flow_drv_prepare(). Checking tunneled layer is needed to distinguish
> > between outer and inner item. This should be based on dynamic parsing. With
> > dev_flow->layers on a tunneled flow, items will always be interpreted as
> > inner as dev_flow->layer already has all the items. Dynamic parsing
> > (item_flags) is added as there's no such code.
> >
> > 2) Refactoring code
> > - flow_dv_create_item() and flow_dv_create_action() are merged into
> > flow_dv_translate() for consistency with Verbs and *_validate().
>
> I don't like the idea of combining 2 distinct functions into one.
> I think a function should be as short as possible and do only one thing,
> if there is no good reason why two functions should be combined they should not
> be combined.
> If you want to align both the Direct Verbs and Verbs I think we can split the Verbs
> code.
Look at the other lengthy switch-case clauses in validate/prepare/translate in
each driver. This DV translate is the only exception, and I'd rather ask why.
I didn't like the lengthy function from the beginning, but you wanted to keep
it. Of course, I considered splitting the Verbs one, but that's why I chose to
merge the DV code instead. If we feel this lengthy function is really complex
and error-prone, we can refactor all the code at once later. Or, I still
prefer the graph approach. That would be simpler.
Thanks,
Yongseok
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [dpdk-dev] [PATCH 3/3] net/mlx5: remove flags setting from flow preparation
2018-11-04 8:29 ` Ori Kam
@ 2018-11-05 5:39 ` Yongseok Koh
0 siblings, 0 replies; 17+ messages in thread
From: Yongseok Koh @ 2018-11-05 5:39 UTC (permalink / raw)
To: Ori Kam; +Cc: Shahaf Shuler, dev
On Sun, Nov 04, 2018 at 01:29:13AM -0700, Ori Kam wrote:
>
>
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Yongseok Koh
> > Sent: Friday, November 2, 2018 11:08 PM
> > To: Shahaf Shuler <shahafs@mellanox.com>
> > Cc: dev@dpdk.org; Yongseok Koh <yskoh@mellanox.com>
> > Subject: [dpdk-dev] [PATCH 3/3] net/mlx5: remove flags setting from flow
> > preparation
> >
> > Even though flow_drv_prepare() takes item_flags and action_flags to be
> > filled in, those are not used and will be overwritten by the parsing in
> > flow_drv_translate(). There's no reason to keep and fill the flags.
> > Appropriate notes are added to the documentation of flow_drv_prepare() and
> > flow_drv_translate().
> >
> > Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
> > ---
> > drivers/net/mlx5/mlx5_flow.c | 38 ++++++++++++--------------
> > drivers/net/mlx5/mlx5_flow.h | 3 +--
> > drivers/net/mlx5/mlx5_flow_dv.c | 6 -----
> > drivers/net/mlx5/mlx5_flow_tcf.c | 55 +++++---------------------------------
> > drivers/net/mlx5/mlx5_flow_verbs.c | 52 +++--------------------------------
> > 5 files changed, 29 insertions(+), 125 deletions(-)
> >
> > diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
> > index 107a4f02f8..fae3bc92dd 100644
> > --- a/drivers/net/mlx5/mlx5_flow.c
> > +++ b/drivers/net/mlx5/mlx5_flow.c
> > @@ -1657,8 +1657,6 @@ static struct mlx5_flow *
> > flow_null_prepare(const struct rte_flow_attr *attr __rte_unused,
> > const struct rte_flow_item items[] __rte_unused,
> > const struct rte_flow_action actions[] __rte_unused,
> > - uint64_t *item_flags __rte_unused,
> > - uint64_t *action_flags __rte_unused,
> > struct rte_flow_error *error __rte_unused)
> > {
> > rte_errno = ENOTSUP;
> > @@ -1786,16 +1784,19 @@ flow_drv_validate(struct rte_eth_dev *dev,
> > * calculates the size of memory required for device flow, allocates the memory,
> > * initializes the device flow and returns the pointer.
> > *
> > + * @note
> > + * This function initializes the device flow structure such as dv, tcf or
> > + * verbs in struct mlx5_flow. However, it is the caller's responsibility to
> > + * initialize the rest. For example, adding the returned device flow to the
> > + * flow->dev_flows list and setting the backward reference to the flow should
> > + * be done outside of this function. The layers field is not filled either.
> > + *
> > * @param[in] attr
> > * Pointer to the flow attributes.
> > * @param[in] items
> > * Pointer to the list of items.
> > * @param[in] actions
> > * Pointer to the list of actions.
> > - * @param[out] item_flags
> > - * Pointer to bit mask of all items detected.
> > - * @param[out] action_flags
> > - * Pointer to bit mask of all actions detected.
> > * @param[out] error
> > * Pointer to the error structure.
> > *
> > @@ -1803,12 +1804,10 @@ flow_drv_validate(struct rte_eth_dev *dev,
> > * Pointer to device flow on success, otherwise NULL and rte_errno is set.
> > */
> > static inline struct mlx5_flow *
> > -flow_drv_prepare(struct rte_flow *flow,
> > +flow_drv_prepare(const struct rte_flow *flow,
> > const struct rte_flow_attr *attr,
> > const struct rte_flow_item items[],
> > const struct rte_flow_action actions[],
> > - uint64_t *item_flags,
> > - uint64_t *action_flags,
> > struct rte_flow_error *error)
> > {
> > const struct mlx5_flow_driver_ops *fops;
> > @@ -1816,8 +1815,7 @@ flow_drv_prepare(struct rte_flow *flow,
> >
> > assert(type > MLX5_FLOW_TYPE_MIN && type < MLX5_FLOW_TYPE_MAX);
> > fops = flow_get_drv_ops(type);
> > - return fops->prepare(attr, items, actions, item_flags, action_flags,
> > - error);
> > + return fops->prepare(attr, items, actions, error);
> > }
> >
> > /**
> > @@ -1826,6 +1824,12 @@ flow_drv_prepare(struct rte_flow *flow,
> > * translates a generic flow into a driver flow. flow_drv_prepare() must
> > * precede.
> > *
> > + * @note
> > + * dev_flow->layers could be filled as a result of parsing during translation
> > + * if needed by flow_drv_apply(). dev_flow->flow->actions can also be filled
> > + * if necessary. As a flow can have multiple dev_flows by RSS flow expansion,
> > + * flow->actions could be overwritten even though all the expanded dev_flows
> > + * have the same actions.
> > *
> > * @param[in] dev
> > * Pointer to the rte dev structure.
> > @@ -1889,7 +1893,7 @@ flow_drv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
> > * Flow driver remove API. This abstracts calling driver specific functions.
> > * Parent flow (rte_flow) should have driver type (drv_type). It removes a flow
> > * on device. All the resources of the flow should be freed by calling
> > - * flow_dv_destroy().
> > + * flow_drv_destroy().
> > *
> > * @param[in] dev
> > * Pointer to Ethernet device.
> > @@ -2020,8 +2024,6 @@ flow_list_create(struct rte_eth_dev *dev, struct mlx5_flows *list,
> > {
> > struct rte_flow *flow = NULL;
> > struct mlx5_flow *dev_flow;
> > - uint64_t action_flags = 0;
> > - uint64_t item_flags = 0;
> > const struct rte_flow_action_rss *rss;
> > union {
> > struct rte_flow_expand_rss buf;
> > @@ -2064,16 +2066,10 @@ flow_list_create(struct rte_eth_dev *dev, struct mlx5_flows *list,
> > }
> > for (i = 0; i < buf->entries; ++i) {
> > dev_flow = flow_drv_prepare(flow, attr, buf->entry[i].pattern,
> > - actions, &item_flags, &action_flags,
> > - error);
> > + actions, error);
> > if (!dev_flow)
> > goto error;
> > dev_flow->flow = flow;
> > - dev_flow->layers = item_flags;
> > - /* Store actions once as expanded flows have same actions. */
> > - if (i == 0)
> > - flow->actions = action_flags;
> > - assert(flow->actions == action_flags);
> > LIST_INSERT_HEAD(&flow->dev_flows, dev_flow, next);
> > ret = flow_drv_translate(dev, dev_flow, attr,
> > buf->entry[i].pattern,
> > diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
> > index fadde552c2..f976bff427 100644
> > --- a/drivers/net/mlx5/mlx5_flow.h
> > +++ b/drivers/net/mlx5/mlx5_flow.h
> > @@ -293,8 +293,7 @@ typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev,
> > struct rte_flow_error *error);
> > typedef struct mlx5_flow *(*mlx5_flow_prepare_t)
> > (const struct rte_flow_attr *attr, const struct rte_flow_item items[],
> > - const struct rte_flow_action actions[], uint64_t *item_flags,
> > - uint64_t *action_flags, struct rte_flow_error *error);
> > + const struct rte_flow_action actions[], struct rte_flow_error *error);
> > typedef int (*mlx5_flow_translate_t)(struct rte_eth_dev *dev,
> > struct mlx5_flow *dev_flow,
> > const struct rte_flow_attr *attr,
> > diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
> > index 44e2a920eb..0fb791eafa 100644
> > --- a/drivers/net/mlx5/mlx5_flow_dv.c
> > +++ b/drivers/net/mlx5/mlx5_flow_dv.c
> > @@ -1014,10 +1014,6 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
> > * Pointer to the list of items.
> > * @param[in] actions
> > * Pointer to the list of actions.
> > - * @param[out] item_flags
> > - * Pointer to bit mask of all items detected.
> > - * @param[out] action_flags
> > - * Pointer to bit mask of all actions detected.
> > * @param[out] error
> > * Pointer to the error structure.
> > *
> > @@ -1029,8 +1025,6 @@ static struct mlx5_flow *
> > flow_dv_prepare(const struct rte_flow_attr *attr __rte_unused,
> > const struct rte_flow_item items[] __rte_unused,
> > const struct rte_flow_action actions[] __rte_unused,
> > - uint64_t *item_flags __rte_unused,
> > - uint64_t *action_flags __rte_unused,
> > struct rte_flow_error *error)
> > {
> > uint32_t size = sizeof(struct mlx5_flow);
> > diff --git a/drivers/net/mlx5/mlx5_flow_tcf.c b/drivers/net/mlx5/mlx5_flow_tcf.c
> > index 719fb10632..483e490843 100644
> > --- a/drivers/net/mlx5/mlx5_flow_tcf.c
> > +++ b/drivers/net/mlx5/mlx5_flow_tcf.c
> > @@ -664,20 +664,15 @@ flow_tcf_create_pedit_mnl_msg(struct nlmsghdr *nl,
> > *
> > * @param[in,out] actions
> > * actions specification.
> > - * @param[in,out] action_flags
> > - * actions flags
> > - * @param[in,out] size
> > - * accumulated size
> > + *
> > * @return
> > * Max memory size of one TC-pedit action
> > */
> > static int
> > -flow_tcf_get_pedit_actions_size(const struct rte_flow_action **actions,
> > - uint64_t *action_flags)
> > +flow_tcf_get_pedit_actions_size(const struct rte_flow_action **actions)
> > {
> > int pedit_size = 0;
> > int keys = 0;
> > - uint64_t flags = 0;
> >
> > pedit_size += SZ_NLATTR_NEST + /* na_act_index. */
> > SZ_NLATTR_STRZ_OF("pedit") +
> > @@ -686,45 +681,35 @@ flow_tcf_get_pedit_actions_size(const struct rte_flow_action **actions,
> > switch ((*actions)->type) {
> > case RTE_FLOW_ACTION_TYPE_SET_IPV4_SRC:
> > keys += NUM_OF_PEDIT_KEYS(IPV4_ADDR_LEN);
> > - flags |= MLX5_FLOW_ACTION_SET_IPV4_SRC;
> > break;
> > case RTE_FLOW_ACTION_TYPE_SET_IPV4_DST:
> > keys += NUM_OF_PEDIT_KEYS(IPV4_ADDR_LEN);
> > - flags |= MLX5_FLOW_ACTION_SET_IPV4_DST;
> > break;
> > case RTE_FLOW_ACTION_TYPE_SET_IPV6_SRC:
> > keys += NUM_OF_PEDIT_KEYS(IPV6_ADDR_LEN);
> > - flags |= MLX5_FLOW_ACTION_SET_IPV6_SRC;
> > break;
> > case RTE_FLOW_ACTION_TYPE_SET_IPV6_DST:
> > keys += NUM_OF_PEDIT_KEYS(IPV6_ADDR_LEN);
> > - flags |= MLX5_FLOW_ACTION_SET_IPV6_DST;
> > break;
> > case RTE_FLOW_ACTION_TYPE_SET_TP_SRC:
> > /* TCP is as same as UDP */
> > keys += NUM_OF_PEDIT_KEYS(TP_PORT_LEN);
> > - flags |= MLX5_FLOW_ACTION_SET_TP_SRC;
> > break;
> > case RTE_FLOW_ACTION_TYPE_SET_TP_DST:
> > /* TCP is as same as UDP */
> > keys += NUM_OF_PEDIT_KEYS(TP_PORT_LEN);
> > - flags |= MLX5_FLOW_ACTION_SET_TP_DST;
> > break;
> > case RTE_FLOW_ACTION_TYPE_SET_TTL:
> > keys += NUM_OF_PEDIT_KEYS(TTL_LEN);
> > - flags |= MLX5_FLOW_ACTION_SET_TTL;
> > break;
> > case RTE_FLOW_ACTION_TYPE_DEC_TTL:
> > keys += NUM_OF_PEDIT_KEYS(TTL_LEN);
> > - flags |= MLX5_FLOW_ACTION_DEC_TTL;
> > break;
> > case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC:
> > keys += NUM_OF_PEDIT_KEYS(ETHER_ADDR_LEN);
> > - flags |= MLX5_FLOW_ACTION_SET_MAC_SRC;
> > break;
> > case RTE_FLOW_ACTION_TYPE_SET_MAC_DST:
> > keys += NUM_OF_PEDIT_KEYS(ETHER_ADDR_LEN);
> > - flags |= MLX5_FLOW_ACTION_SET_MAC_DST;
> > break;
> > default:
> > goto get_pedit_action_size_done;
> > @@ -740,7 +725,6 @@ flow_tcf_get_pedit_actions_size(const struct rte_flow_action **actions,
> > /* TCA_PEDIT_KEY_EX + HTYPE + CMD */
> > (SZ_NLATTR_NEST + SZ_NLATTR_DATA_OF(2) +
> > SZ_NLATTR_DATA_OF(2));
> > - (*action_flags) |= flags;
> > (*actions)--;
> > return pedit_size;
> > }
> > @@ -1415,11 +1399,9 @@ flow_tcf_validate(struct rte_eth_dev *dev,
> > */
> > static int
> > flow_tcf_get_items_and_size(const struct rte_flow_attr *attr,
> > - const struct rte_flow_item items[],
> > - uint64_t *item_flags)
> > + const struct rte_flow_item items[])
> > {
> > int size = 0;
> > - uint64_t flags = 0;
> >
> > size += SZ_NLATTR_STRZ_OF("flower") +
> > SZ_NLATTR_NEST + /* TCA_OPTIONS. */
> > @@ -1436,7 +1418,6 @@ flow_tcf_get_items_and_size(const struct rte_flow_attr *attr,
> > size += SZ_NLATTR_TYPE_OF(uint16_t) + /* Ether type. */
> > SZ_NLATTR_DATA_OF(ETHER_ADDR_LEN) * 4;
> > /* dst/src MAC addr and mask. */
> > - flags |= MLX5_FLOW_LAYER_OUTER_L2;
> > break;
> > case RTE_FLOW_ITEM_TYPE_VLAN:
> > size += SZ_NLATTR_TYPE_OF(uint16_t) + /* Ether type. */
> > @@ -1444,33 +1425,28 @@ flow_tcf_get_items_and_size(const struct rte_flow_attr *attr,
> > /* VLAN Ether type. */
> > SZ_NLATTR_TYPE_OF(uint8_t) + /* VLAN prio. */
> > SZ_NLATTR_TYPE_OF(uint16_t); /* VLAN ID. */
> > - flags |= MLX5_FLOW_LAYER_OUTER_VLAN;
> > break;
> > case RTE_FLOW_ITEM_TYPE_IPV4:
> > size += SZ_NLATTR_TYPE_OF(uint16_t) + /* Ether type. */
> > SZ_NLATTR_TYPE_OF(uint8_t) + /* IP proto. */
> > SZ_NLATTR_TYPE_OF(uint32_t) * 4;
> > /* dst/src IP addr and mask. */
> > - flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV4;
> > break;
> > case RTE_FLOW_ITEM_TYPE_IPV6:
> > size += SZ_NLATTR_TYPE_OF(uint16_t) + /* Ether type. */
> > SZ_NLATTR_TYPE_OF(uint8_t) + /* IP proto. */
> > SZ_NLATTR_TYPE_OF(IPV6_ADDR_LEN) * 4;
> > /* dst/src IP addr and mask. */
> > - flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV6;
> > break;
> > case RTE_FLOW_ITEM_TYPE_UDP:
> > size += SZ_NLATTR_TYPE_OF(uint8_t) + /* IP proto. */
> > SZ_NLATTR_TYPE_OF(uint16_t) * 4;
> > /* dst/src port and mask. */
> > - flags |= MLX5_FLOW_LAYER_OUTER_L4_UDP;
> > break;
> > case RTE_FLOW_ITEM_TYPE_TCP:
> > size += SZ_NLATTR_TYPE_OF(uint8_t) + /* IP proto. */
> > SZ_NLATTR_TYPE_OF(uint16_t) * 4;
> > /* dst/src port and mask. */
> > - flags |= MLX5_FLOW_LAYER_OUTER_L4_TCP;
> > break;
> > default:
> > DRV_LOG(WARNING,
> > @@ -1480,7 +1456,6 @@ flow_tcf_get_items_and_size(const struct rte_flow_attr *attr,
> > break;
> > }
> > }
> > - *item_flags = flags;
> > return size;
> > }
> >
> > @@ -1497,11 +1472,9 @@ flow_tcf_get_items_and_size(const struct rte_flow_attr *attr,
> > * Maximum size of memory for actions.
> > */
> > static int
> > -flow_tcf_get_actions_and_size(const struct rte_flow_action actions[],
> > - uint64_t *action_flags)
> > +flow_tcf_get_actions_and_size(const struct rte_flow_action actions[])
> > {
> > int size = 0;
> > - uint64_t flags = 0;
> >
> > size += SZ_NLATTR_NEST; /* TCA_FLOWER_ACT. */
> > for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
> > @@ -1513,35 +1486,28 @@ flow_tcf_get_actions_and_size(const struct rte_flow_action actions[],
> > SZ_NLATTR_STRZ_OF("mirred") +
> > SZ_NLATTR_NEST + /* TCA_ACT_OPTIONS. */
> > SZ_NLATTR_TYPE_OF(struct tc_mirred);
> > - flags |= MLX5_FLOW_ACTION_PORT_ID;
> > break;
> > case RTE_FLOW_ACTION_TYPE_JUMP:
> > size += SZ_NLATTR_NEST + /* na_act_index. */
> > SZ_NLATTR_STRZ_OF("gact") +
> > SZ_NLATTR_NEST + /* TCA_ACT_OPTIONS. */
> > SZ_NLATTR_TYPE_OF(struct tc_gact);
> > - flags |= MLX5_FLOW_ACTION_JUMP;
> > break;
> > case RTE_FLOW_ACTION_TYPE_DROP:
> > size += SZ_NLATTR_NEST + /* na_act_index. */
> > SZ_NLATTR_STRZ_OF("gact") +
> > SZ_NLATTR_NEST + /* TCA_ACT_OPTIONS. */
> > SZ_NLATTR_TYPE_OF(struct tc_gact);
> > - flags |= MLX5_FLOW_ACTION_DROP;
> > break;
> > case RTE_FLOW_ACTION_TYPE_COUNT:
> > break;
> > case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN:
> > - flags |= MLX5_FLOW_ACTION_OF_POP_VLAN;
> > goto action_of_vlan;
> > case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN:
> > - flags |= MLX5_FLOW_ACTION_OF_PUSH_VLAN;
> > goto action_of_vlan;
> > case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID:
> > - flags |= MLX5_FLOW_ACTION_OF_SET_VLAN_VID;
> > goto action_of_vlan;
> > case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP:
> > - flags |= MLX5_FLOW_ACTION_OF_SET_VLAN_PCP;
> > goto action_of_vlan;
> > action_of_vlan:
> > size += SZ_NLATTR_NEST + /* na_act_index. */
> > @@ -1563,8 +1529,7 @@ flow_tcf_get_actions_and_size(const struct rte_flow_action actions[],
> > case RTE_FLOW_ACTION_TYPE_DEC_TTL:
> > case RTE_FLOW_ACTION_TYPE_SET_MAC_SRC:
> > case RTE_FLOW_ACTION_TYPE_SET_MAC_DST:
> > - size += flow_tcf_get_pedit_actions_size(&actions,
> > - &flags);
> > + size += flow_tcf_get_pedit_actions_size(&actions);
> > break;
> > default:
> > DRV_LOG(WARNING,
> > @@ -1574,7 +1539,6 @@ flow_tcf_get_actions_and_size(const struct rte_flow_action actions[],
> > break;
> > }
> > }
> > - *action_flags = flags;
> > return size;
> > }
> >
> > @@ -1610,10 +1574,6 @@ flow_tcf_nl_brand(struct nlmsghdr *nlh, uint32_t handle)
> > * Pointer to the list of items.
> > * @param[in] actions
> > * Pointer to the list of actions.
> > - * @param[out] item_flags
> > - * Pointer to bit mask of all items detected.
> > - * @param[out] action_flags
> > - * Pointer to bit mask of all actions detected.
> > * @param[out] error
> > * Pointer to the error structure.
> > *
> > @@ -1625,7 +1585,6 @@ static struct mlx5_flow *
> > flow_tcf_prepare(const struct rte_flow_attr *attr,
> > const struct rte_flow_item items[],
> > const struct rte_flow_action actions[],
> > - uint64_t *item_flags, uint64_t *action_flags,
> > struct rte_flow_error *error)
> > {
> > size_t size = sizeof(struct mlx5_flow) +
> > @@ -1635,8 +1594,8 @@ flow_tcf_prepare(const struct rte_flow_attr *attr,
> > struct nlmsghdr *nlh;
> > struct tcmsg *tcm;
> >
> > - size += flow_tcf_get_items_and_size(attr, items, item_flags);
> > - size += flow_tcf_get_actions_and_size(actions, action_flags);
> > + size += flow_tcf_get_items_and_size(attr, items);
> > + size += flow_tcf_get_actions_and_size(actions);
> > dev_flow = rte_zmalloc(__func__, size, MNL_ALIGNTO);
> > if (!dev_flow) {
> > rte_flow_error_set(error, ENOMEM,
> > diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
> > index ab58c04db5..453c89e347 100644
> > --- a/drivers/net/mlx5/mlx5_flow_verbs.c
> > +++ b/drivers/net/mlx5/mlx5_flow_verbs.c
> > @@ -1217,11 +1217,9 @@ flow_verbs_validate(struct rte_eth_dev *dev,
> > * The size of the memory needed for all actions.
> > */
> > static int
> > -flow_verbs_get_actions_and_size(const struct rte_flow_action actions[],
> > - uint64_t *action_flags)
> > +flow_verbs_get_actions_and_size(const struct rte_flow_action actions[])
> > {
> > int size = 0;
> > - uint64_t detected_actions = 0;
> >
> > for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
> > switch (actions->type) {
> > @@ -1229,34 +1227,27 @@ flow_verbs_get_actions_and_size(const struct rte_flow_action actions[],
> > break;
> > case RTE_FLOW_ACTION_TYPE_FLAG:
> > size += sizeof(struct ibv_flow_spec_action_tag);
> > - detected_actions |= MLX5_FLOW_ACTION_FLAG;
> > break;
> > case RTE_FLOW_ACTION_TYPE_MARK:
> > size += sizeof(struct ibv_flow_spec_action_tag);
> > - detected_actions |= MLX5_FLOW_ACTION_MARK;
> > break;
> > case RTE_FLOW_ACTION_TYPE_DROP:
> > size += sizeof(struct ibv_flow_spec_action_drop);
> > - detected_actions |= MLX5_FLOW_ACTION_DROP;
> > break;
> > case RTE_FLOW_ACTION_TYPE_QUEUE:
> > - detected_actions |= MLX5_FLOW_ACTION_QUEUE;
> > break;
> > case RTE_FLOW_ACTION_TYPE_RSS:
> > - detected_actions |= MLX5_FLOW_ACTION_RSS;
> > break;
> > case RTE_FLOW_ACTION_TYPE_COUNT:
> > #if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42) || \
> > defined(HAVE_IBV_DEVICE_COUNTERS_SET_V45)
> > size += sizeof(struct ibv_flow_spec_counter_action);
> > #endif
> > - detected_actions |= MLX5_FLOW_ACTION_COUNT;
> > break;
> > default:
> > break;
> > }
> > }
> > - *action_flags = detected_actions;
> > return size;
> > }
> >
> > @@ -1274,83 +1265,54 @@ flow_verbs_get_actions_and_size(const struct rte_flow_action actions[],
> > * The size of the memory needed for all items.
> > */
> > static int
> > -flow_verbs_get_items_and_size(const struct rte_flow_item items[],
> > - uint64_t *item_flags)
> > +flow_verbs_get_items_and_size(const struct rte_flow_item items[])
> > {
> > int size = 0;
> > - uint64_t detected_items = 0;
> >
> > for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
> > - int tunnel = !!(detected_items & MLX5_FLOW_LAYER_TUNNEL);
> > -
> > switch (items->type) {
> > case RTE_FLOW_ITEM_TYPE_VOID:
> > break;
> > case RTE_FLOW_ITEM_TYPE_ETH:
> > size += sizeof(struct ibv_flow_spec_eth);
> > - detected_items |= tunnel ? MLX5_FLOW_LAYER_INNER_L2 :
> > - MLX5_FLOW_LAYER_OUTER_L2;
> > break;
> > case RTE_FLOW_ITEM_TYPE_VLAN:
> > size += sizeof(struct ibv_flow_spec_eth);
> > - detected_items |=
> > - tunnel ? (MLX5_FLOW_LAYER_INNER_L2 |
> > - MLX5_FLOW_LAYER_INNER_VLAN) :
> > - (MLX5_FLOW_LAYER_OUTER_L2 |
> > - MLX5_FLOW_LAYER_OUTER_VLAN);
> > break;
> > case RTE_FLOW_ITEM_TYPE_IPV4:
> > size += sizeof(struct ibv_flow_spec_ipv4_ext);
> > - detected_items |= tunnel ?
> > - MLX5_FLOW_LAYER_INNER_L3_IPV4 :
> > - MLX5_FLOW_LAYER_OUTER_L3_IPV4;
> > break;
> > case RTE_FLOW_ITEM_TYPE_IPV6:
> > size += sizeof(struct ibv_flow_spec_ipv6);
> > - detected_items |= tunnel ?
> > - MLX5_FLOW_LAYER_INNER_L3_IPV6 :
> > - MLX5_FLOW_LAYER_OUTER_L3_IPV6;
> > break;
> > case RTE_FLOW_ITEM_TYPE_UDP:
> > size += sizeof(struct ibv_flow_spec_tcp_udp);
> > - detected_items |= tunnel ?
> > - MLX5_FLOW_LAYER_INNER_L4_UDP :
> > - MLX5_FLOW_LAYER_OUTER_L4_UDP;
> > break;
> > case RTE_FLOW_ITEM_TYPE_TCP:
> > size += sizeof(struct ibv_flow_spec_tcp_udp);
> > - detected_items |= tunnel ?
> > - MLX5_FLOW_LAYER_INNER_L4_TCP :
> > - MLX5_FLOW_LAYER_OUTER_L4_TCP;
> > break;
> > case RTE_FLOW_ITEM_TYPE_VXLAN:
> > size += sizeof(struct ibv_flow_spec_tunnel);
> > - detected_items |= MLX5_FLOW_LAYER_VXLAN;
> > break;
> > case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
> > size += sizeof(struct ibv_flow_spec_tunnel);
> > - detected_items |= MLX5_FLOW_LAYER_VXLAN_GPE;
> > break;
> > #ifdef HAVE_IBV_DEVICE_MPLS_SUPPORT
> > case RTE_FLOW_ITEM_TYPE_GRE:
> > size += sizeof(struct ibv_flow_spec_gre);
> > - detected_items |= MLX5_FLOW_LAYER_GRE;
> > break;
> > case RTE_FLOW_ITEM_TYPE_MPLS:
> > size += sizeof(struct ibv_flow_spec_mpls);
> > - detected_items |= MLX5_FLOW_LAYER_MPLS;
> > break;
> > #else
> > case RTE_FLOW_ITEM_TYPE_GRE:
> > size += sizeof(struct ibv_flow_spec_tunnel);
> > - detected_items |= MLX5_FLOW_LAYER_TUNNEL;
> > break;
> > #endif
> > default:
> > break;
> > }
> > }
> > - *item_flags = detected_items;
> > return size;
> > }
> >
> > @@ -1365,10 +1327,6 @@ flow_verbs_get_items_and_size(const struct rte_flow_item items[],
> > * Pointer to the list of items.
> > * @param[in] actions
> > * Pointer to the list of actions.
> > - * @param[out] item_flags
> > - * Pointer to bit mask of all items detected.
> > - * @param[out] action_flags
> > - * Pointer to bit mask of all actions detected.
> > * @param[out] error
> > * Pointer to the error structure.
> > *
> > @@ -1380,15 +1338,13 @@ static struct mlx5_flow *
> > flow_verbs_prepare(const struct rte_flow_attr *attr __rte_unused,
> > const struct rte_flow_item items[],
> > const struct rte_flow_action actions[],
> > - uint64_t *item_flags,
> > - uint64_t *action_flags,
> > struct rte_flow_error *error)
> > {
> > uint32_t size = sizeof(struct mlx5_flow) + sizeof(struct ibv_flow_attr);
> > struct mlx5_flow *flow;
> >
> > - size += flow_verbs_get_actions_and_size(actions, action_flags);
> > - size += flow_verbs_get_items_and_size(items, item_flags);
> > + size += flow_verbs_get_actions_and_size(actions);
>
> I think the function name should be changed since it only returns the size.
>
> > + size += flow_verbs_get_items_and_size(items);
>
> I think the function name should be changed since it only returns the size.
Agree.
I have to rebase the code anyway as VXLAN has been merged. Will change it.
Thanks,
Yongseok
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [dpdk-dev] [PATCH 2/3] net/mlx5: fix Direct Verbs flow tunnel
2018-11-05 5:37 ` Yongseok Koh
@ 2018-11-05 6:08 ` Ori Kam
2018-11-05 6:43 ` Yongseok Koh
0 siblings, 1 reply; 17+ messages in thread
From: Ori Kam @ 2018-11-05 6:08 UTC (permalink / raw)
To: Yongseok Koh; +Cc: Shahaf Shuler, dev
> -----Original Message-----
> From: Yongseok Koh
> Sent: Monday, November 5, 2018 7:38 AM
> To: Ori Kam <orika@mellanox.com>
> Cc: Shahaf Shuler <shahafs@mellanox.com>; dev@dpdk.org
> Subject: Re: [PATCH 2/3] net/mlx5: fix Direct Verbs flow tunnel
>
> On Sun, Nov 04, 2018 at 01:22:34AM -0700, Ori Kam wrote:
> >
> > > -----Original Message-----
> > > From: Yongseok Koh
> > > Sent: Friday, November 2, 2018 11:08 PM
> > > To: Shahaf Shuler <shahafs@mellanox.com>
> > > Cc: dev@dpdk.org; Yongseok Koh <yskoh@mellanox.com>; Ori Kam
> > > <orika@mellanox.com>
> > > Subject: [PATCH 2/3] net/mlx5: fix Direct Verbs flow tunnel
> > >
> > > 1) Fix layer parsing
> > > In translation of tunneled flows, dev_flow->layers must not be used to
> > > check tunneled layer as it contains all the layers parsed from
> > > flow_drv_prepare(). Checking tunneled layer is needed to distinguish
> > > between outer and inner item. This should be based on dynamic parsing.
> With
> > > dev_flow->layers on a tunneled flow, items will always be interpreted as
> > > inner as dev_flow->layer already has all the items. Dynamic parsing
> > > (item_flags) is added as there's no such code.
> > >
> > > 2) Refactoring code
> > > - flow_dv_create_item() and flow_dv_create_action() are merged into
> > > flow_dv_translate() for consistency with Verbs and *_validate().
> >
> > I don't like the idea of combining 2 distinct functions into one.
> > I think a function should be as short as possible and do only one thing,
> > if there is no good reason why two functions should be combined they should
> not
> > be combined.
> > If you want to align both the Direct Verbs and Verbs I think we can split the
> Verbs
> > code.
>
> Look at the other lengthy switch-case clauses in validate/prepare/translate in
> each driver. This DV translate is the only exception. I'd rather like to ask
> why. I didn't like the lengthy function from the beginning but you wanted to
> keep it. Of course, I considered to split the Verbs one but that's the reason
> why I chose to merge DV code. If we feel this lengthy func is really complex and
> gets error prone, then we can refactor all the code at once later. Or, I still
> prefer the graph approach. That would be simpler.
>
I agree with you that all functions should have been split (no excuse; I also kept the basic
structure as it was). In this specific one I had an extra reason to make the split, since
creating the items also needs to add a matcher, so it was different.
In any case I agree that we should have consistency with the other functions, so either we change
them all or just this one. Given the time constraints, let's do what you suggested and change only
this one, or just leave it as is.
Regarding the graph approach, I think we should wait to see whether the current approach is good
enough, and maybe switch to the graph approach in a future release.
> Thanks,
> Yongseok
Thanks,
Ori
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [dpdk-dev] [PATCH 2/3] net/mlx5: fix Direct Verbs flow tunnel
2018-11-05 6:08 ` Ori Kam
@ 2018-11-05 6:43 ` Yongseok Koh
0 siblings, 0 replies; 17+ messages in thread
From: Yongseok Koh @ 2018-11-05 6:43 UTC (permalink / raw)
To: Ori Kam; +Cc: Shahaf Shuler, dev
> On Nov 4, 2018, at 10:08 PM, Ori Kam <orika@mellanox.com> wrote:
>
>
>
>> -----Original Message-----
>> From: Yongseok Koh
>> Sent: Monday, November 5, 2018 7:38 AM
>> To: Ori Kam <orika@mellanox.com>
>> Cc: Shahaf Shuler <shahafs@mellanox.com>; dev@dpdk.org
>> Subject: Re: [PATCH 2/3] net/mlx5: fix Direct Verbs flow tunnel
>>
>> On Sun, Nov 04, 2018 at 01:22:34AM -0700, Ori Kam wrote:
>>>
>>>> -----Original Message-----
>>>> From: Yongseok Koh
>>>> Sent: Friday, November 2, 2018 11:08 PM
>>>> To: Shahaf Shuler <shahafs@mellanox.com>
>>>> Cc: dev@dpdk.org; Yongseok Koh <yskoh@mellanox.com>; Ori Kam
>>>> <orika@mellanox.com>
>>>> Subject: [PATCH 2/3] net/mlx5: fix Direct Verbs flow tunnel
>>>>
>>>> 1) Fix layer parsing
>>>> In translation of tunneled flows, dev_flow->layers must not be used to
>>>> check tunneled layer as it contains all the layers parsed from
>>>> flow_drv_prepare(). Checking tunneled layer is needed to distinguish
>>>> between outer and inner item. This should be based on dynamic parsing.
>> With
>>>> dev_flow->layers on a tunneled flow, items will always be interpreted as
>>>> inner as dev_flow->layer already has all the items. Dynamic parsing
>>>> (item_flags) is added as there's no such code.
>>>>
>>>> 2) Refactoring code
>>>> - flow_dv_create_item() and flow_dv_create_action() are merged into
>>>> flow_dv_translate() for consistency with Verbs and *_validate().
>>>
>>> I don't like the idea of combining 2 distinct functions into one.
>>> I think a function should be as short as possible and do only one thing,
>>> if there is no good reason why two functions should be combined they should
>> not
>>> be combined.
>>> If you want to align both the Direct Verbs and Verbs I think we can split the
>> Verbs
>>> code.
>>
>> Look at the other lengthy switch-case clauses in validate/prepare/translate in
>> each driver. This DV translate is the only exception. I'd rather like to ask
>> why. I didn't like the lengthy function from the beginning but you wanted to
>> keep it. Of course, I considered to split the Verbs one but that's the reason
>> why I chose to merge DV code. If we feel this lengthy func is really complex and
>> gets error prone, then we can refactor all the code at once later. Or, I still
>> prefer the graph approach. That would be simpler.
>>
>
> I agree with you that all functions should have been split (no excuse; I also kept the basic
> structure as it was). In this specific one I had an extra reason to make the split, since
> creating the items also needs to add a matcher, so it was different.
> In any case I agree that we should have consistency with the other functions, so either we change
> them all or just this one. Given the time constraints, let's do what you suggested and change only
> this one, or just leave it as is.
> Regarding the graph approach, I think we should wait to see whether the current approach is good
> enough, and maybe switch to the graph approach in a future release.
Thanks for understanding. Will push v2 once the unit test is done.
Yongseok
^ permalink raw reply [flat|nested] 17+ messages in thread
* [dpdk-dev] [PATCH v2 0/3] net/mlx5: fix tunnel flow
2018-11-02 21:08 [dpdk-dev] [PATCH 1/3] net/mlx5: fix Verbs flow tunnel Yongseok Koh
` (2 preceding siblings ...)
2018-11-04 8:17 ` [dpdk-dev] [PATCH 1/3] net/mlx5: fix Verbs flow tunnel Ori Kam
@ 2018-11-05 7:20 ` Yongseok Koh
2018-11-05 7:20 ` [dpdk-dev] [PATCH v2 1/3] net/mlx5: fix Verbs flow tunnel Yongseok Koh
` (3 more replies)
3 siblings, 4 replies; 17+ messages in thread
From: Yongseok Koh @ 2018-11-05 7:20 UTC (permalink / raw)
To: Shahaf Shuler; +Cc: dev, Ori Kam, Yongseok Koh
v2:
* rebase on top of the latest branch tip
* change function names appropriately
Yongseok Koh (3):
net/mlx5: fix Verbs flow tunnel
net/mlx5: fix Direct Verbs flow tunnel
net/mlx5: remove flags setting from flow preparation
drivers/net/mlx5/mlx5_flow.c | 38 +--
drivers/net/mlx5/mlx5_flow.h | 3 +-
drivers/net/mlx5/mlx5_flow_dv.c | 500 ++++++++++++++---------------
drivers/net/mlx5/mlx5_flow_tcf.c | 39 +--
drivers/net/mlx5/mlx5_flow_verbs.c | 622 ++++++++++++++++---------------------
5 files changed, 527 insertions(+), 675 deletions(-)
--
2.11.0
^ permalink raw reply [flat|nested] 17+ messages in thread
* [dpdk-dev] [PATCH v2 1/3] net/mlx5: fix Verbs flow tunnel
2018-11-05 7:20 ` [dpdk-dev] [PATCH v2 0/3] net/mlx5: fix tunnel flow Yongseok Koh
@ 2018-11-05 7:20 ` Yongseok Koh
2018-11-05 7:20 ` [dpdk-dev] [PATCH v2 2/3] net/mlx5: fix Direct " Yongseok Koh
` (2 subsequent siblings)
3 siblings, 0 replies; 17+ messages in thread
From: Yongseok Koh @ 2018-11-05 7:20 UTC (permalink / raw)
To: Shahaf Shuler; +Cc: dev, Ori Kam, Yongseok Koh
1) Fix layer parsing
In translation of tunneled flows, dev_flow->layers must not be used to
check tunneled layer as it contains all the layers parsed from
flow_drv_prepare(). Checking tunneled layer is needed to set
IBV_FLOW_SPEC_INNER and it should be based on dynamic parsing. With
dev_flow->layers on a tunneled flow, items will always be interpreted as
inner as dev_flow->layer already has all the items.
2) Refactoring code
It is partly because flow_verbs_translate_item_*() sets layer flag. Same
code is repeating in multiple locations and that could be error-prone.
- Introduce VERBS_SPEC_INNER() to unify setting IBV_FLOW_SPEC_INNER.
- flow_verbs_translate_item_*() doesn't set parsing result -
MLX5_FLOW_LAYER_*.
- flow_verbs_translate_item_*() doesn't set priority or adjust hashfields
but does only item translation. Both have to be done outside.
- Make more consistent between Verbs and DV.
3) Remove flow_verbs_mark_update()
This code can never be reached as validation prohibits specifying mark and
flag actions together. No need to convert flag to mark.
Fixes: 84c406e74524 ("net/mlx5: add flow translate function")
Cc: orika@mellanox.com
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
Acked-by: Ori Kam <orika@mellanox.com>
---
drivers/net/mlx5/mlx5_flow_verbs.c | 568 +++++++++++++++++--------------------
1 file changed, 258 insertions(+), 310 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 54ac620c72..43fcd0d29e 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -33,6 +33,9 @@
#include "mlx5_glue.h"
#include "mlx5_flow.h"
+#define VERBS_SPEC_INNER(item_flags) \
+ (!!((item_flags) & MLX5_FLOW_LAYER_TUNNEL) ? IBV_FLOW_SPEC_INNER : 0)
+
/**
* Create Verbs flow counter with Verbs library.
*
@@ -231,27 +234,26 @@ flow_verbs_counter_query(struct rte_eth_dev *dev __rte_unused,
}
/**
- * Add a verbs item specification into @p flow.
+ * Add a verbs item specification into @p verbs.
*
- * @param[in, out] flow
- * Pointer to flow structure.
+ * @param[out] verbs
+ * Pointer to verbs structure.
* @param[in] src
* Create specification.
* @param[in] size
* Size in bytes of the specification to copy.
*/
static void
-flow_verbs_spec_add(struct mlx5_flow *flow, void *src, unsigned int size)
+flow_verbs_spec_add(struct mlx5_flow_verbs *verbs, void *src, unsigned int size)
{
- struct mlx5_flow_verbs *verbs = &flow->verbs;
+ void *dst;
- if (verbs->specs) {
- void *dst;
-
- dst = (void *)(verbs->specs + verbs->size);
- memcpy(dst, src, size);
- ++verbs->attr->num_of_specs;
- }
+ if (!verbs)
+ return;
+ assert(verbs->specs);
+ dst = (void *)(verbs->specs + verbs->size);
+ memcpy(dst, src, size);
+ ++verbs->attr->num_of_specs;
verbs->size += size;
}
@@ -260,24 +262,23 @@ flow_verbs_spec_add(struct mlx5_flow *flow, void *src, unsigned int size)
* the input is valid and that there is space to insert the requested item
* into the flow.
*
+ * @param[in, out] dev_flow
+ * Pointer to dev_flow structure.
* @param[in] item
* Item specification.
* @param[in] item_flags
- * Bit field with all detected items.
- * @param[in, out] dev_flow
- * Pointer to dev_flow structure.
+ * Parsed item flags.
*/
static void
-flow_verbs_translate_item_eth(const struct rte_flow_item *item,
- uint64_t *item_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_item_eth(struct mlx5_flow *dev_flow,
+ const struct rte_flow_item *item,
+ uint64_t item_flags)
{
const struct rte_flow_item_eth *spec = item->spec;
const struct rte_flow_item_eth *mask = item->mask;
- const int tunnel = !!(*item_flags & MLX5_FLOW_LAYER_TUNNEL);
const unsigned int size = sizeof(struct ibv_flow_spec_eth);
struct ibv_flow_spec_eth eth = {
- .type = IBV_FLOW_SPEC_ETH | (tunnel ? IBV_FLOW_SPEC_INNER : 0),
+ .type = IBV_FLOW_SPEC_ETH | VERBS_SPEC_INNER(item_flags),
.size = size,
};
@@ -298,11 +299,8 @@ flow_verbs_translate_item_eth(const struct rte_flow_item *item,
eth.val.src_mac[i] &= eth.mask.src_mac[i];
}
eth.val.ether_type &= eth.mask.ether_type;
- dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L2;
}
- flow_verbs_spec_add(dev_flow, &eth, size);
- *item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L2 :
- MLX5_FLOW_LAYER_OUTER_L2;
+ flow_verbs_spec_add(&dev_flow->verbs, &eth, size);
}
/**
@@ -344,24 +342,24 @@ flow_verbs_item_vlan_update(struct ibv_flow_attr *attr,
* the input is valid and that there is space to insert the requested item
* into the flow.
*
- * @param[in] item
- * Item specification.
- * @param[in, out] item_flags
- * Bit mask that holds all detected items.
* @param[in, out] dev_flow
* Pointer to dev_flow structure.
+ * @param[in] item
+ * Item specification.
+ * @param[in] item_flags
+ * Parsed item flags.
*/
static void
-flow_verbs_translate_item_vlan(const struct rte_flow_item *item,
- uint64_t *item_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_item_vlan(struct mlx5_flow *dev_flow,
+ const struct rte_flow_item *item,
+ uint64_t item_flags)
{
const struct rte_flow_item_vlan *spec = item->spec;
const struct rte_flow_item_vlan *mask = item->mask;
unsigned int size = sizeof(struct ibv_flow_spec_eth);
- const int tunnel = !!(*item_flags & MLX5_FLOW_LAYER_TUNNEL);
+ const int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
struct ibv_flow_spec_eth eth = {
- .type = IBV_FLOW_SPEC_ETH | (tunnel ? IBV_FLOW_SPEC_INNER : 0),
+ .type = IBV_FLOW_SPEC_ETH | VERBS_SPEC_INNER(item_flags),
.size = size,
};
const uint32_t l2m = tunnel ? MLX5_FLOW_LAYER_INNER_L2 :
@@ -377,16 +375,10 @@ flow_verbs_translate_item_vlan(const struct rte_flow_item *item,
eth.mask.ether_type = mask->inner_type;
eth.val.ether_type &= eth.mask.ether_type;
}
- if (!(*item_flags & l2m)) {
- dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L2;
- flow_verbs_spec_add(dev_flow, &eth, size);
- } else {
+ if (!(item_flags & l2m))
+ flow_verbs_spec_add(&dev_flow->verbs, &eth, size);
+ else
flow_verbs_item_vlan_update(dev_flow->verbs.attr, &eth);
- size = 0; /* Only an update is done in eth specification. */
- }
- *item_flags |= tunnel ?
- (MLX5_FLOW_LAYER_INNER_L2 | MLX5_FLOW_LAYER_INNER_VLAN) :
- (MLX5_FLOW_LAYER_OUTER_L2 | MLX5_FLOW_LAYER_OUTER_VLAN);
}
/**
@@ -394,32 +386,28 @@ flow_verbs_translate_item_vlan(const struct rte_flow_item *item,
* the input is valid and that there is space to insert the requested item
* into the flow.
*
+ * @param[in, out] dev_flow
+ * Pointer to dev_flow structure.
* @param[in] item
* Item specification.
- * @param[in, out] item_flags
- * Bit mask that marks all detected items.
- * @param[in, out] dev_flow
- * Pointer to sepacific flow structure.
+ * @param[in] item_flags
+ * Parsed item flags.
*/
static void
-flow_verbs_translate_item_ipv4(const struct rte_flow_item *item,
- uint64_t *item_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_item_ipv4(struct mlx5_flow *dev_flow,
+ const struct rte_flow_item *item,
+ uint64_t item_flags)
{
const struct rte_flow_item_ipv4 *spec = item->spec;
const struct rte_flow_item_ipv4 *mask = item->mask;
- const int tunnel = !!(*item_flags & MLX5_FLOW_LAYER_TUNNEL);
unsigned int size = sizeof(struct ibv_flow_spec_ipv4_ext);
struct ibv_flow_spec_ipv4_ext ipv4 = {
- .type = IBV_FLOW_SPEC_IPV4_EXT |
- (tunnel ? IBV_FLOW_SPEC_INNER : 0),
+ .type = IBV_FLOW_SPEC_IPV4_EXT | VERBS_SPEC_INNER(item_flags),
.size = size,
};
if (!mask)
mask = &rte_flow_item_ipv4_mask;
- *item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
- MLX5_FLOW_LAYER_OUTER_L3_IPV4;
if (spec) {
ipv4.val = (struct ibv_flow_ipv4_ext_filter){
.src_ip = spec->hdr.src_addr,
@@ -439,12 +427,7 @@ flow_verbs_translate_item_ipv4(const struct rte_flow_item *item,
ipv4.val.proto &= ipv4.mask.proto;
ipv4.val.tos &= ipv4.mask.tos;
}
- dev_flow->verbs.hash_fields |=
- mlx5_flow_hashfields_adjust(dev_flow, tunnel,
- MLX5_IPV4_LAYER_TYPES,
- MLX5_IPV4_IBV_RX_HASH);
- dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L3;
- flow_verbs_spec_add(dev_flow, &ipv4, size);
+ flow_verbs_spec_add(&dev_flow->verbs, &ipv4, size);
}
/**
@@ -452,31 +435,28 @@ flow_verbs_translate_item_ipv4(const struct rte_flow_item *item,
* the input is valid and that there is space to insert the requested item
* into the flow.
*
+ * @param[in, out] dev_flow
+ * Pointer to dev_flow structure.
* @param[in] item
* Item specification.
- * @param[in, out] item_flags
- * Bit mask that marks all detected items.
- * @param[in, out] dev_flow
- * Pointer to sepacific flow structure.
+ * @param[in] item_flags
+ * Parsed item flags.
*/
static void
-flow_verbs_translate_item_ipv6(const struct rte_flow_item *item,
- uint64_t *item_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_item_ipv6(struct mlx5_flow *dev_flow,
+ const struct rte_flow_item *item,
+ uint64_t item_flags)
{
const struct rte_flow_item_ipv6 *spec = item->spec;
const struct rte_flow_item_ipv6 *mask = item->mask;
- const int tunnel = !!(dev_flow->layers & MLX5_FLOW_LAYER_TUNNEL);
unsigned int size = sizeof(struct ibv_flow_spec_ipv6);
struct ibv_flow_spec_ipv6 ipv6 = {
- .type = IBV_FLOW_SPEC_IPV6 | (tunnel ? IBV_FLOW_SPEC_INNER : 0),
+ .type = IBV_FLOW_SPEC_IPV6 | VERBS_SPEC_INNER(item_flags),
.size = size,
};
if (!mask)
mask = &rte_flow_item_ipv6_mask;
- *item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 :
- MLX5_FLOW_LAYER_OUTER_L3_IPV6;
if (spec) {
unsigned int i;
uint32_t vtc_flow_val;
@@ -516,12 +496,7 @@ flow_verbs_translate_item_ipv6(const struct rte_flow_item *item,
ipv6.val.next_hdr &= ipv6.mask.next_hdr;
ipv6.val.hop_limit &= ipv6.mask.hop_limit;
}
- dev_flow->verbs.hash_fields |=
- mlx5_flow_hashfields_adjust(dev_flow, tunnel,
- MLX5_IPV6_LAYER_TYPES,
- MLX5_IPV6_IBV_RX_HASH);
- dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L3;
- flow_verbs_spec_add(dev_flow, &ipv6, size);
+ flow_verbs_spec_add(&dev_flow->verbs, &ipv6, size);
}
/**
@@ -529,46 +504,38 @@ flow_verbs_translate_item_ipv6(const struct rte_flow_item *item,
* the input is valid and that there is space to insert the requested item
* into the flow.
*
+ * @param[in, out] dev_flow
+ * Pointer to dev_flow structure.
* @param[in] item
* Item specification.
- * @param[in, out] item_flags
- * Bit mask that marks all detected items.
- * @param[in, out] dev_flow
- * Pointer to sepacific flow structure.
+ * @param[in] item_flags
+ * Parsed item flags.
*/
static void
-flow_verbs_translate_item_udp(const struct rte_flow_item *item,
- uint64_t *item_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_item_tcp(struct mlx5_flow *dev_flow,
+ const struct rte_flow_item *item,
+ uint64_t item_flags __rte_unused)
{
- const struct rte_flow_item_udp *spec = item->spec;
- const struct rte_flow_item_udp *mask = item->mask;
- const int tunnel = !!(*item_flags & MLX5_FLOW_LAYER_TUNNEL);
+ const struct rte_flow_item_tcp *spec = item->spec;
+ const struct rte_flow_item_tcp *mask = item->mask;
unsigned int size = sizeof(struct ibv_flow_spec_tcp_udp);
- struct ibv_flow_spec_tcp_udp udp = {
- .type = IBV_FLOW_SPEC_UDP | (tunnel ? IBV_FLOW_SPEC_INNER : 0),
+ struct ibv_flow_spec_tcp_udp tcp = {
+ .type = IBV_FLOW_SPEC_TCP | VERBS_SPEC_INNER(item_flags),
.size = size,
};
if (!mask)
- mask = &rte_flow_item_udp_mask;
- *item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
- MLX5_FLOW_LAYER_OUTER_L4_UDP;
+ mask = &rte_flow_item_tcp_mask;
if (spec) {
- udp.val.dst_port = spec->hdr.dst_port;
- udp.val.src_port = spec->hdr.src_port;
- udp.mask.dst_port = mask->hdr.dst_port;
- udp.mask.src_port = mask->hdr.src_port;
+ tcp.val.dst_port = spec->hdr.dst_port;
+ tcp.val.src_port = spec->hdr.src_port;
+ tcp.mask.dst_port = mask->hdr.dst_port;
+ tcp.mask.src_port = mask->hdr.src_port;
/* Remove unwanted bits from values. */
- udp.val.src_port &= udp.mask.src_port;
- udp.val.dst_port &= udp.mask.dst_port;
+ tcp.val.src_port &= tcp.mask.src_port;
+ tcp.val.dst_port &= tcp.mask.dst_port;
}
- dev_flow->verbs.hash_fields |=
- mlx5_flow_hashfields_adjust(dev_flow, tunnel, ETH_RSS_UDP,
- (IBV_RX_HASH_SRC_PORT_UDP |
- IBV_RX_HASH_DST_PORT_UDP));
- dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L4;
- flow_verbs_spec_add(dev_flow, &udp, size);
+ flow_verbs_spec_add(&dev_flow->verbs, &tcp, size);
}
/**
@@ -576,46 +543,38 @@ flow_verbs_translate_item_udp(const struct rte_flow_item *item,
* the input is valid and that there is space to insert the requested item
* into the flow.
*
+ * @param[in, out] dev_flow
+ * Pointer to dev_flow structure.
* @param[in] item
* Item specification.
- * @param[in, out] item_flags
- * Bit mask that marks all detected items.
- * @param[in, out] dev_flow
- * Pointer to sepacific flow structure.
+ * @param[in] item_flags
+ * Parsed item flags.
*/
static void
-flow_verbs_translate_item_tcp(const struct rte_flow_item *item,
- uint64_t *item_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_item_udp(struct mlx5_flow *dev_flow,
+ const struct rte_flow_item *item,
+ uint64_t item_flags __rte_unused)
{
- const struct rte_flow_item_tcp *spec = item->spec;
- const struct rte_flow_item_tcp *mask = item->mask;
- const int tunnel = !!(dev_flow->layers & MLX5_FLOW_LAYER_TUNNEL);
+ const struct rte_flow_item_udp *spec = item->spec;
+ const struct rte_flow_item_udp *mask = item->mask;
unsigned int size = sizeof(struct ibv_flow_spec_tcp_udp);
- struct ibv_flow_spec_tcp_udp tcp = {
- .type = IBV_FLOW_SPEC_TCP | (tunnel ? IBV_FLOW_SPEC_INNER : 0),
+ struct ibv_flow_spec_tcp_udp udp = {
+ .type = IBV_FLOW_SPEC_UDP | VERBS_SPEC_INNER(item_flags),
.size = size,
};
if (!mask)
- mask = &rte_flow_item_tcp_mask;
- *item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP :
- MLX5_FLOW_LAYER_OUTER_L4_TCP;
+ mask = &rte_flow_item_udp_mask;
if (spec) {
- tcp.val.dst_port = spec->hdr.dst_port;
- tcp.val.src_port = spec->hdr.src_port;
- tcp.mask.dst_port = mask->hdr.dst_port;
- tcp.mask.src_port = mask->hdr.src_port;
+ udp.val.dst_port = spec->hdr.dst_port;
+ udp.val.src_port = spec->hdr.src_port;
+ udp.mask.dst_port = mask->hdr.dst_port;
+ udp.mask.src_port = mask->hdr.src_port;
/* Remove unwanted bits from values. */
- tcp.val.src_port &= tcp.mask.src_port;
- tcp.val.dst_port &= tcp.mask.dst_port;
+ udp.val.src_port &= udp.mask.src_port;
+ udp.val.dst_port &= udp.mask.dst_port;
}
- dev_flow->verbs.hash_fields |=
- mlx5_flow_hashfields_adjust(dev_flow, tunnel, ETH_RSS_TCP,
- (IBV_RX_HASH_SRC_PORT_TCP |
- IBV_RX_HASH_DST_PORT_TCP));
- dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L4;
- flow_verbs_spec_add(dev_flow, &tcp, size);
+ flow_verbs_spec_add(&dev_flow->verbs, &udp, size);
}
/**
@@ -623,17 +582,17 @@ flow_verbs_translate_item_tcp(const struct rte_flow_item *item,
* the input is valid and that there is space to insert the requested item
* into the flow.
*
+ * @param[in, out] dev_flow
+ * Pointer to dev_flow structure.
* @param[in] item
* Item specification.
- * @param[in, out] item_flags
- * Bit mask that marks all detected items.
- * @param[in, out] dev_flow
- * Pointer to sepacific flow structure.
+ * @param[in] item_flags
+ * Parsed item flags.
*/
static void
-flow_verbs_translate_item_vxlan(const struct rte_flow_item *item,
- uint64_t *item_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_item_vxlan(struct mlx5_flow *dev_flow,
+ const struct rte_flow_item *item,
+ uint64_t item_flags __rte_unused)
{
const struct rte_flow_item_vxlan *spec = item->spec;
const struct rte_flow_item_vxlan *mask = item->mask;
@@ -657,9 +616,7 @@ flow_verbs_translate_item_vxlan(const struct rte_flow_item *item,
/* Remove unwanted bits from values. */
vxlan.val.tunnel_id &= vxlan.mask.tunnel_id;
}
- flow_verbs_spec_add(dev_flow, &vxlan, size);
- dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L2;
- *item_flags |= MLX5_FLOW_LAYER_VXLAN;
+ flow_verbs_spec_add(&dev_flow->verbs, &vxlan, size);
}
/**
@@ -667,17 +624,17 @@ flow_verbs_translate_item_vxlan(const struct rte_flow_item *item,
* the input is valid and that there is space to insert the requested item
* into the flow.
*
+ * @param[in, out] dev_flow
+ * Pointer to dev_flow structure.
* @param[in] item
* Item specification.
- * @param[in, out] item_flags
- * Bit mask that marks all detected items.
- * @param[in, out] dev_flow
- * Pointer to sepacific flow structure.
+ * @param[in] item_flags
+ * Parsed item flags.
*/
static void
-flow_verbs_translate_item_vxlan_gpe(const struct rte_flow_item *item,
- uint64_t *item_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_item_vxlan_gpe(struct mlx5_flow *dev_flow,
+ const struct rte_flow_item *item,
+ uint64_t item_flags __rte_unused)
{
const struct rte_flow_item_vxlan_gpe *spec = item->spec;
const struct rte_flow_item_vxlan_gpe *mask = item->mask;
@@ -701,9 +658,7 @@ flow_verbs_translate_item_vxlan_gpe(const struct rte_flow_item *item,
/* Remove unwanted bits from values. */
vxlan_gpe.val.tunnel_id &= vxlan_gpe.mask.tunnel_id;
}
- flow_verbs_spec_add(dev_flow, &vxlan_gpe, size);
- dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L2;
- *item_flags |= MLX5_FLOW_LAYER_VXLAN_GPE;
+ flow_verbs_spec_add(&dev_flow->verbs, &vxlan_gpe, size);
}
/**
@@ -763,17 +718,17 @@ flow_verbs_item_gre_ip_protocol_update(struct ibv_flow_attr *attr,
* the input is valid and that there is space to insert the requested item
* into the flow.
*
+ * @param[in, out] dev_flow
+ * Pointer to dev_flow structure.
* @param[in] item
* Item specification.
- * @param[in, out] item_flags
- * Bit mask that marks all detected items.
- * @param[in, out] dev_flow
- * Pointer to sepacific flow structure.
+ * @param[in] item_flags
+ * Parsed item flags.
*/
static void
-flow_verbs_translate_item_gre(const struct rte_flow_item *item __rte_unused,
- uint64_t *item_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_item_gre(struct mlx5_flow *dev_flow,
+ const struct rte_flow_item *item __rte_unused,
+ uint64_t item_flags)
{
struct mlx5_flow_verbs *verbs = &dev_flow->verbs;
#ifndef HAVE_IBV_DEVICE_MPLS_SUPPORT
@@ -804,7 +759,7 @@ flow_verbs_translate_item_gre(const struct rte_flow_item *item __rte_unused,
tunnel.val.key &= tunnel.mask.key;
}
#endif
- if (*item_flags & MLX5_FLOW_LAYER_OUTER_L3_IPV4)
+ if (item_flags & MLX5_FLOW_LAYER_OUTER_L3_IPV4)
flow_verbs_item_gre_ip_protocol_update(verbs->attr,
IBV_FLOW_SPEC_IPV4_EXT,
IPPROTO_GRE);
@@ -812,9 +767,7 @@ flow_verbs_translate_item_gre(const struct rte_flow_item *item __rte_unused,
flow_verbs_item_gre_ip_protocol_update(verbs->attr,
IBV_FLOW_SPEC_IPV6,
IPPROTO_GRE);
- flow_verbs_spec_add(dev_flow, &tunnel, size);
- verbs->attr->priority = MLX5_PRIORITY_MAP_L2;
- *item_flags |= MLX5_FLOW_LAYER_GRE;
+ flow_verbs_spec_add(verbs, &tunnel, size);
}
/**
@@ -822,17 +775,17 @@ flow_verbs_translate_item_gre(const struct rte_flow_item *item __rte_unused,
* the input is valid and that there is space to insert the requested action
* into the flow. This function also return the action that was added.
*
+ * @param[in, out] dev_flow
+ * Pointer to dev_flow structure.
* @param[in] item
* Item specification.
- * @param[in, out] item_flags
- * Bit mask that marks all detected items.
- * @param[in, out] dev_flow
- * Pointer to sepacific flow structure.
+ * @param[in] item_flags
+ * Parsed item flags.
*/
static void
-flow_verbs_translate_item_mpls(const struct rte_flow_item *item __rte_unused,
- uint64_t *action_flags __rte_unused,
- struct mlx5_flow *dev_flow __rte_unused)
+flow_verbs_translate_item_mpls(struct mlx5_flow *dev_flow __rte_unused,
+ const struct rte_flow_item *item __rte_unused,
+ uint64_t item_flags __rte_unused)
{
#ifdef HAVE_IBV_DEVICE_MPLS_SUPPORT
const struct rte_flow_item_mpls *spec = item->spec;
@@ -851,25 +804,24 @@ flow_verbs_translate_item_mpls(const struct rte_flow_item *item __rte_unused,
/* Remove unwanted bits from values. */
mpls.val.label &= mpls.mask.label;
}
- flow_verbs_spec_add(dev_flow, &mpls, size);
- dev_flow->verbs.attr->priority = MLX5_PRIORITY_MAP_L2;
- *action_flags |= MLX5_FLOW_LAYER_MPLS;
+ flow_verbs_spec_add(&dev_flow->verbs, &mpls, size);
#endif
}
/**
* Convert the @p action into a Verbs specification. This function assumes that
* the input is valid and that there is space to insert the requested action
- * into the flow. This function also return the action that was added.
+ * into the flow.
*
- * @param[in, out] action_flags
- * Pointer to the detected actions.
* @param[in] dev_flow
* Pointer to mlx5_flow.
+ * @param[in] action
+ * Action configuration.
*/
static void
-flow_verbs_translate_action_drop(uint64_t *action_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_action_drop
+ (struct mlx5_flow *dev_flow,
+ const struct rte_flow_action *action __rte_unused)
{
unsigned int size = sizeof(struct ibv_flow_spec_action_drop);
struct ibv_flow_spec_action_drop drop = {
@@ -877,26 +829,22 @@ flow_verbs_translate_action_drop(uint64_t *action_flags,
.size = size,
};
- flow_verbs_spec_add(dev_flow, &drop, size);
- *action_flags |= MLX5_FLOW_ACTION_DROP;
+ flow_verbs_spec_add(&dev_flow->verbs, &drop, size);
}
/**
* Convert the @p action into a Verbs specification. This function assumes that
* the input is valid and that there is space to insert the requested action
- * into the flow. This function also return the action that was added.
+ * into the flow.
*
- * @param[in] action
- * Action configuration.
- * @param[in, out] action_flags
- * Pointer to the detected actions.
* @param[in] dev_flow
* Pointer to mlx5_flow.
+ * @param[in] action
+ * Action configuration.
*/
static void
-flow_verbs_translate_action_queue(const struct rte_flow_action *action,
- uint64_t *action_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_action_queue(struct mlx5_flow *dev_flow,
+ const struct rte_flow_action *action)
{
const struct rte_flow_action_queue *queue = action->conf;
struct rte_flow *flow = dev_flow->flow;
@@ -904,13 +852,12 @@ flow_verbs_translate_action_queue(const struct rte_flow_action *action,
if (flow->queue)
(*flow->queue)[0] = queue->index;
flow->rss.queue_num = 1;
- *action_flags |= MLX5_FLOW_ACTION_QUEUE;
}
/**
* Convert the @p action into a Verbs specification. This function assumes that
* the input is valid and that there is space to insert the requested action
- * into the flow. This function also return the action that was added.
+ * into the flow.
*
* @param[in] action
* Action configuration.
@@ -920,9 +867,8 @@ flow_verbs_translate_action_queue(const struct rte_flow_action *action,
* Pointer to mlx5_flow.
*/
static void
-flow_verbs_translate_action_rss(const struct rte_flow_action *action,
- uint64_t *action_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_action_rss(struct mlx5_flow *dev_flow,
+ const struct rte_flow_action *action)
{
const struct rte_flow_action_rss *rss = action->conf;
const uint8_t *rss_key;
@@ -938,26 +884,22 @@ flow_verbs_translate_action_rss(const struct rte_flow_action *action,
/* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
flow->rss.types = !rss->types ? ETH_RSS_IP : rss->types;
flow->rss.level = rss->level;
- *action_flags |= MLX5_FLOW_ACTION_RSS;
}
/**
* Convert the @p action into a Verbs specification. This function assumes that
* the input is valid and that there is space to insert the requested action
- * into the flow. This function also return the action that was added.
+ * into the flow.
*
- * @param[in] action
- * Action configuration.
- * @param[in, out] action_flags
- * Pointer to the detected actions.
* @param[in] dev_flow
* Pointer to mlx5_flow.
+ * @param[in] action
+ * Action configuration.
*/
static void
flow_verbs_translate_action_flag
- (const struct rte_flow_action *action __rte_unused,
- uint64_t *action_flags,
- struct mlx5_flow *dev_flow)
+ (struct mlx5_flow *dev_flow,
+ const struct rte_flow_action *action __rte_unused)
{
unsigned int size = sizeof(struct ibv_flow_spec_action_tag);
struct ibv_flow_spec_action_tag tag = {
@@ -965,87 +907,44 @@ flow_verbs_translate_action_flag
.size = size,
.tag_id = mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT),
};
- *action_flags |= MLX5_FLOW_ACTION_MARK;
- flow_verbs_spec_add(dev_flow, &tag, size);
-}
-/**
- * Update verbs specification to modify the flag to mark.
- *
- * @param[in, out] verbs
- * Pointer to the mlx5_flow_verbs structure.
- * @param[in] mark_id
- * Mark identifier to replace the flag.
- */
-static void
-flow_verbs_mark_update(struct mlx5_flow_verbs *verbs, uint32_t mark_id)
-{
- struct ibv_spec_header *hdr;
- int i;
-
- if (!verbs)
- return;
- /* Update Verbs specification. */
- hdr = (struct ibv_spec_header *)verbs->specs;
- if (!hdr)
- return;
- for (i = 0; i != verbs->attr->num_of_specs; ++i) {
- if (hdr->type == IBV_FLOW_SPEC_ACTION_TAG) {
- struct ibv_flow_spec_action_tag *t =
- (struct ibv_flow_spec_action_tag *)hdr;
-
- t->tag_id = mlx5_flow_mark_set(mark_id);
- }
- hdr = (struct ibv_spec_header *)((uintptr_t)hdr + hdr->size);
- }
+ flow_verbs_spec_add(&dev_flow->verbs, &tag, size);
}
/**
* Convert the @p action into a Verbs specification. This function assumes that
* the input is valid and that there is space to insert the requested action
- * into the flow. This function also return the action that was added.
+ * into the flow.
*
- * @param[in] action
- * Action configuration.
- * @param[in, out] action_flags
- * Pointer to the detected actions.
* @param[in] dev_flow
* Pointer to mlx5_flow.
+ * @param[in] action
+ * Action configuration.
*/
static void
-flow_verbs_translate_action_mark(const struct rte_flow_action *action,
- uint64_t *action_flags,
- struct mlx5_flow *dev_flow)
+flow_verbs_translate_action_mark(struct mlx5_flow *dev_flow,
+ const struct rte_flow_action *action)
{
const struct rte_flow_action_mark *mark = action->conf;
unsigned int size = sizeof(struct ibv_flow_spec_action_tag);
struct ibv_flow_spec_action_tag tag = {
.type = IBV_FLOW_SPEC_ACTION_TAG,
.size = size,
+ .tag_id = mlx5_flow_mark_set(mark->id),
};
- struct mlx5_flow_verbs *verbs = &dev_flow->verbs;
- if (*action_flags & MLX5_FLOW_ACTION_FLAG) {
- flow_verbs_mark_update(verbs, mark->id);
- size = 0;
- } else {
- tag.tag_id = mlx5_flow_mark_set(mark->id);
- flow_verbs_spec_add(dev_flow, &tag, size);
- }
- *action_flags |= MLX5_FLOW_ACTION_MARK;
+ flow_verbs_spec_add(&dev_flow->verbs, &tag, size);
}
/**
* Convert the @p action into a Verbs specification. This function assumes that
* the input is valid and that there is space to insert the requested action
- * into the flow. This function also return the action that was added.
+ * into the flow.
*
* @param[in] dev
* Pointer to the Ethernet device structure.
* @param[in] action
* Action configuration.
- * @param[in, out] action_flags
- * Pointer to the detected actions.
* @param[in] dev_flow
* Pointer to mlx5_flow.
* @param[out] error
@@ -1055,10 +954,9 @@ flow_verbs_translate_action_mark(const struct rte_flow_action *action,
* 0 On success else a negative errno value is returned and rte_errno is set.
*/
static int
-flow_verbs_translate_action_count(struct rte_eth_dev *dev,
+flow_verbs_translate_action_count(struct mlx5_flow *dev_flow,
const struct rte_flow_action *action,
- uint64_t *action_flags,
- struct mlx5_flow *dev_flow,
+ struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
const struct rte_flow_action_count *count = action->conf;
@@ -1082,13 +980,12 @@ flow_verbs_translate_action_count(struct rte_eth_dev *dev,
"cannot get counter"
" context.");
}
- *action_flags |= MLX5_FLOW_ACTION_COUNT;
#if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42)
counter.counter_set_handle = flow->counter->cs->handle;
- flow_verbs_spec_add(dev_flow, &counter, size);
+ flow_verbs_spec_add(&dev_flow->verbs, &counter, size);
#elif defined(HAVE_IBV_DEVICE_COUNTERS_SET_V45)
counter.counters = flow->counter->cs;
- flow_verbs_spec_add(dev_flow, &counter, size);
+ flow_verbs_spec_add(&dev_flow->verbs, &counter, size);
#endif
return 0;
}
@@ -1120,7 +1017,6 @@ flow_verbs_validate(struct rte_eth_dev *dev,
int ret;
uint64_t action_flags = 0;
uint64_t item_flags = 0;
- int tunnel = 0;
uint8_t next_protocol = 0xff;
if (items == NULL)
@@ -1129,9 +1025,9 @@ flow_verbs_validate(struct rte_eth_dev *dev,
if (ret < 0)
return ret;
for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
+ int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
int ret = 0;
- tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
switch (items->type) {
case RTE_FLOW_ITEM_TYPE_VOID:
break;
@@ -1148,8 +1044,10 @@ flow_verbs_validate(struct rte_eth_dev *dev,
error);
if (ret < 0)
return ret;
- item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_VLAN :
- MLX5_FLOW_LAYER_OUTER_VLAN;
+ item_flags |= tunnel ? (MLX5_FLOW_LAYER_INNER_L2 |
+ MLX5_FLOW_LAYER_INNER_VLAN) :
+ (MLX5_FLOW_LAYER_OUTER_L2 |
+ MLX5_FLOW_LAYER_OUTER_VLAN);
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
ret = mlx5_flow_validate_item_ipv4(items, item_flags,
@@ -1399,8 +1297,11 @@ flow_verbs_get_items_and_size(const struct rte_flow_item items[],
break;
case RTE_FLOW_ITEM_TYPE_VLAN:
size += sizeof(struct ibv_flow_spec_eth);
- detected_items |= tunnel ? MLX5_FLOW_LAYER_INNER_VLAN :
- MLX5_FLOW_LAYER_OUTER_VLAN;
+ detected_items |=
+ tunnel ? (MLX5_FLOW_LAYER_INNER_L2 |
+ MLX5_FLOW_LAYER_INNER_VLAN) :
+ (MLX5_FLOW_LAYER_OUTER_L2 |
+ MLX5_FLOW_LAYER_OUTER_VLAN);
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
size += sizeof(struct ibv_flow_spec_ipv4_ext);
@@ -1532,50 +1433,48 @@ flow_verbs_translate(struct rte_eth_dev *dev,
const struct rte_flow_action actions[],
struct rte_flow_error *error)
{
- uint64_t action_flags = 0;
+ struct rte_flow *flow = dev_flow->flow;
uint64_t item_flags = 0;
+ uint64_t action_flags = 0;
uint64_t priority = attr->priority;
+ uint32_t subpriority = 0;
struct priv *priv = dev->data->dev_private;
if (priority == MLX5_FLOW_PRIO_RSVD)
priority = priv->config.flow_prio - 1;
for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
int ret;
+
switch (actions->type) {
case RTE_FLOW_ACTION_TYPE_VOID:
break;
case RTE_FLOW_ACTION_TYPE_FLAG:
- flow_verbs_translate_action_flag(actions,
- &action_flags,
- dev_flow);
+ flow_verbs_translate_action_flag(dev_flow, actions);
+ action_flags |= MLX5_FLOW_ACTION_FLAG;
break;
case RTE_FLOW_ACTION_TYPE_MARK:
- flow_verbs_translate_action_mark(actions,
- &action_flags,
- dev_flow);
+ flow_verbs_translate_action_mark(dev_flow, actions);
+ action_flags |= MLX5_FLOW_ACTION_MARK;
break;
case RTE_FLOW_ACTION_TYPE_DROP:
- flow_verbs_translate_action_drop(&action_flags,
- dev_flow);
+ flow_verbs_translate_action_drop(dev_flow, actions);
+ action_flags |= MLX5_FLOW_ACTION_DROP;
break;
case RTE_FLOW_ACTION_TYPE_QUEUE:
- flow_verbs_translate_action_queue(actions,
- &action_flags,
- dev_flow);
+ flow_verbs_translate_action_queue(dev_flow, actions);
+ action_flags |= MLX5_FLOW_ACTION_QUEUE;
break;
case RTE_FLOW_ACTION_TYPE_RSS:
- flow_verbs_translate_action_rss(actions,
- &action_flags,
- dev_flow);
+ flow_verbs_translate_action_rss(dev_flow, actions);
+ action_flags |= MLX5_FLOW_ACTION_RSS;
break;
case RTE_FLOW_ACTION_TYPE_COUNT:
- ret = flow_verbs_translate_action_count(dev,
+ ret = flow_verbs_translate_action_count(dev_flow,
actions,
- &action_flags,
- dev_flow,
- error);
+ dev, error);
if (ret < 0)
return ret;
+ action_flags |= MLX5_FLOW_ACTION_COUNT;
break;
default:
return rte_flow_error_set(error, ENOTSUP,
@@ -1584,51 +1483,100 @@ flow_verbs_translate(struct rte_eth_dev *dev,
"action not supported");
}
}
- /* Device flow should have action flags by flow_drv_prepare(). */
- assert(dev_flow->flow->actions == action_flags);
+ flow->actions = action_flags;
for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
+ int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
+
switch (items->type) {
case RTE_FLOW_ITEM_TYPE_VOID:
break;
case RTE_FLOW_ITEM_TYPE_ETH:
- flow_verbs_translate_item_eth(items, &item_flags,
- dev_flow);
+ flow_verbs_translate_item_eth(dev_flow, items,
+ item_flags);
+ subpriority = MLX5_PRIORITY_MAP_L2;
+ item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L2 :
+ MLX5_FLOW_LAYER_OUTER_L2;
break;
case RTE_FLOW_ITEM_TYPE_VLAN:
- flow_verbs_translate_item_vlan(items, &item_flags,
- dev_flow);
+ flow_verbs_translate_item_vlan(dev_flow, items,
+ item_flags);
+ subpriority = MLX5_PRIORITY_MAP_L2;
+ item_flags |= tunnel ? (MLX5_FLOW_LAYER_INNER_L2 |
+ MLX5_FLOW_LAYER_INNER_VLAN) :
+ (MLX5_FLOW_LAYER_OUTER_L2 |
+ MLX5_FLOW_LAYER_OUTER_VLAN);
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
- flow_verbs_translate_item_ipv4(items, &item_flags,
- dev_flow);
+ flow_verbs_translate_item_ipv4(dev_flow, items,
+ item_flags);
+ subpriority = MLX5_PRIORITY_MAP_L3;
+ dev_flow->verbs.hash_fields |=
+ mlx5_flow_hashfields_adjust
+ (dev_flow, tunnel,
+ MLX5_IPV4_LAYER_TYPES,
+ MLX5_IPV4_IBV_RX_HASH);
+ item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
+ MLX5_FLOW_LAYER_OUTER_L3_IPV4;
break;
case RTE_FLOW_ITEM_TYPE_IPV6:
- flow_verbs_translate_item_ipv6(items, &item_flags,
- dev_flow);
- break;
- case RTE_FLOW_ITEM_TYPE_UDP:
- flow_verbs_translate_item_udp(items, &item_flags,
- dev_flow);
+ flow_verbs_translate_item_ipv6(dev_flow, items,
+ item_flags);
+ subpriority = MLX5_PRIORITY_MAP_L3;
+ dev_flow->verbs.hash_fields |=
+ mlx5_flow_hashfields_adjust
+ (dev_flow, tunnel,
+ MLX5_IPV6_LAYER_TYPES,
+ MLX5_IPV6_IBV_RX_HASH);
+ item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 :
+ MLX5_FLOW_LAYER_OUTER_L3_IPV6;
break;
case RTE_FLOW_ITEM_TYPE_TCP:
- flow_verbs_translate_item_tcp(items, &item_flags,
- dev_flow);
+ flow_verbs_translate_item_tcp(dev_flow, items,
+ item_flags);
+ subpriority = MLX5_PRIORITY_MAP_L4;
+ dev_flow->verbs.hash_fields |=
+ mlx5_flow_hashfields_adjust
+ (dev_flow, tunnel, ETH_RSS_TCP,
+ (IBV_RX_HASH_SRC_PORT_TCP |
+ IBV_RX_HASH_DST_PORT_TCP));
+ item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP :
+ MLX5_FLOW_LAYER_OUTER_L4_TCP;
+ break;
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ flow_verbs_translate_item_udp(dev_flow, items,
+ item_flags);
+ subpriority = MLX5_PRIORITY_MAP_L4;
+ dev_flow->verbs.hash_fields |=
+ mlx5_flow_hashfields_adjust
+ (dev_flow, tunnel, ETH_RSS_UDP,
+ (IBV_RX_HASH_SRC_PORT_UDP |
+ IBV_RX_HASH_DST_PORT_UDP));
+ item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
+ MLX5_FLOW_LAYER_OUTER_L4_UDP;
break;
case RTE_FLOW_ITEM_TYPE_VXLAN:
- flow_verbs_translate_item_vxlan(items, &item_flags,
- dev_flow);
+ flow_verbs_translate_item_vxlan(dev_flow, items,
+ item_flags);
+ subpriority = MLX5_PRIORITY_MAP_L2;
+ item_flags |= MLX5_FLOW_LAYER_VXLAN;
break;
case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
- flow_verbs_translate_item_vxlan_gpe(items, &item_flags,
- dev_flow);
+ flow_verbs_translate_item_vxlan_gpe(dev_flow, items,
+ item_flags);
+ subpriority = MLX5_PRIORITY_MAP_L2;
+ item_flags |= MLX5_FLOW_LAYER_VXLAN_GPE;
break;
case RTE_FLOW_ITEM_TYPE_GRE:
- flow_verbs_translate_item_gre(items, &item_flags,
- dev_flow);
+ flow_verbs_translate_item_gre(dev_flow, items,
+ item_flags);
+ subpriority = MLX5_PRIORITY_MAP_L2;
+ item_flags |= MLX5_FLOW_LAYER_GRE;
break;
case RTE_FLOW_ITEM_TYPE_MPLS:
- flow_verbs_translate_item_mpls(items, &item_flags,
- dev_flow);
+ flow_verbs_translate_item_mpls(dev_flow, items,
+ item_flags);
+ subpriority = MLX5_PRIORITY_MAP_L2;
+ item_flags |= MLX5_FLOW_LAYER_MPLS;
break;
default:
return rte_flow_error_set(error, ENOTSUP,
@@ -1637,9 +1585,9 @@ flow_verbs_translate(struct rte_eth_dev *dev,
"item not supported");
}
}
+ dev_flow->layers = item_flags;
dev_flow->verbs.attr->priority =
- mlx5_flow_adjust_priority(dev, priority,
- dev_flow->verbs.attr->priority);
+ mlx5_flow_adjust_priority(dev, priority, subpriority);
return 0;
}
--
2.11.0
* [dpdk-dev] [PATCH v2 2/3] net/mlx5: fix Direct Verbs flow tunnel
2018-11-05 7:20 ` [dpdk-dev] [PATCH v2 0/3] net/mlx5: fix tunnel flow Yongseok Koh
2018-11-05 7:20 ` [dpdk-dev] [PATCH v2 1/3] net/mlx5: fix Verbs flow tunnel Yongseok Koh
@ 2018-11-05 7:20 ` Yongseok Koh
2018-11-05 7:31 ` Ori Kam
2018-11-05 7:20 ` [dpdk-dev] [PATCH v2 3/3] net/mlx5: remove flags setting from flow preparation Yongseok Koh
2018-11-05 8:09 ` [dpdk-dev] [PATCH v2 0/3] net/mlx5: fix tunnel flow Shahaf Shuler
3 siblings, 1 reply; 17+ messages in thread
From: Yongseok Koh @ 2018-11-05 7:20 UTC (permalink / raw)
To: Shahaf Shuler; +Cc: dev, Ori Kam, Yongseok Koh
1) Fix layer parsing
In translation of tunneled flows, dev_flow->layers must not be used to
check for a tunneled layer because it contains all the layers parsed by
flow_drv_prepare(). Checking the tunnel layer is needed to distinguish
outer from inner items, and this must be based on dynamic parsing. With
dev_flow->layers on a tunneled flow, items would always be interpreted
as inner because dev_flow->layers already holds all the items. Dynamic
parsing (item_flags) is added, as no such code existed.
2) Refactoring code
- flow_dv_create_item() and flow_dv_create_action() are merged into
flow_dv_translate() for consistency with Verbs and *_validate().
Fixes: 246636411536 ("net/mlx5: fix flow tunnel handling")
Fixes: d02cb0691299 ("net/mlx5: add Direct Verbs translate actions")
Fixes: fc2c498ccb94 ("net/mlx5: add Direct Verbs translate items")
Cc: orika@mellanox.com
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 494 +++++++++++++++++++---------------------
1 file changed, 237 insertions(+), 257 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 1d5b6bf60a..8b4d5956ba 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1602,252 +1602,6 @@ flow_dv_translate_item_meta(void *matcher, void *key,
}
}
-/**
- * Update the matcher and the value based the selected item.
- *
- * @param[in, out] matcher
- * Flow matcher.
- * @param[in, out] key
- * Flow matcher value.
- * @param[in] item
- * Flow pattern to translate.
- * @param[in, out] dev_flow
- * Pointer to the mlx5_flow.
- * @param[in] inner
- * Item is inner pattern.
- */
-static void
-flow_dv_create_item(void *matcher, void *key,
- const struct rte_flow_item *item,
- struct mlx5_flow *dev_flow,
- int inner)
-{
- struct mlx5_flow_dv_matcher *tmatcher = matcher;
-
- switch (item->type) {
- case RTE_FLOW_ITEM_TYPE_ETH:
- flow_dv_translate_item_eth(tmatcher->mask.buf, key, item,
- inner);
- tmatcher->priority = MLX5_PRIORITY_MAP_L2;
- break;
- case RTE_FLOW_ITEM_TYPE_VLAN:
- flow_dv_translate_item_vlan(tmatcher->mask.buf, key, item,
- inner);
- break;
- case RTE_FLOW_ITEM_TYPE_IPV4:
- flow_dv_translate_item_ipv4(tmatcher->mask.buf, key, item,
- inner);
- tmatcher->priority = MLX5_PRIORITY_MAP_L3;
- dev_flow->dv.hash_fields |=
- mlx5_flow_hashfields_adjust(dev_flow, inner,
- MLX5_IPV4_LAYER_TYPES,
- MLX5_IPV4_IBV_RX_HASH);
- break;
- case RTE_FLOW_ITEM_TYPE_IPV6:
- flow_dv_translate_item_ipv6(tmatcher->mask.buf, key, item,
- inner);
- tmatcher->priority = MLX5_PRIORITY_MAP_L3;
- dev_flow->dv.hash_fields |=
- mlx5_flow_hashfields_adjust(dev_flow, inner,
- MLX5_IPV6_LAYER_TYPES,
- MLX5_IPV6_IBV_RX_HASH);
- break;
- case RTE_FLOW_ITEM_TYPE_TCP:
- flow_dv_translate_item_tcp(tmatcher->mask.buf, key, item,
- inner);
- tmatcher->priority = MLX5_PRIORITY_MAP_L4;
- dev_flow->dv.hash_fields |=
- mlx5_flow_hashfields_adjust(dev_flow, inner,
- ETH_RSS_TCP,
- (IBV_RX_HASH_SRC_PORT_TCP |
- IBV_RX_HASH_DST_PORT_TCP));
- break;
- case RTE_FLOW_ITEM_TYPE_UDP:
- flow_dv_translate_item_udp(tmatcher->mask.buf, key, item,
- inner);
- tmatcher->priority = MLX5_PRIORITY_MAP_L4;
- dev_flow->verbs.hash_fields |=
- mlx5_flow_hashfields_adjust(dev_flow, inner,
- ETH_RSS_UDP,
- (IBV_RX_HASH_SRC_PORT_UDP |
- IBV_RX_HASH_DST_PORT_UDP));
- break;
- case RTE_FLOW_ITEM_TYPE_GRE:
- flow_dv_translate_item_gre(tmatcher->mask.buf, key, item,
- inner);
- break;
- case RTE_FLOW_ITEM_TYPE_NVGRE:
- flow_dv_translate_item_nvgre(tmatcher->mask.buf, key, item,
- inner);
- break;
- case RTE_FLOW_ITEM_TYPE_VXLAN:
- case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
- flow_dv_translate_item_vxlan(tmatcher->mask.buf, key, item,
- inner);
- break;
- case RTE_FLOW_ITEM_TYPE_META:
- flow_dv_translate_item_meta(tmatcher->mask.buf, key, item);
- break;
- default:
- break;
- }
-}
-
-/**
- * Store the requested actions in an array.
- *
- * @param[in] dev
- * Pointer to rte_eth_dev structure.
- * @param[in] action
- * Flow action to translate.
- * @param[in, out] dev_flow
- * Pointer to the mlx5_flow.
- * @param[in] attr
- * Pointer to the flow attributes.
- * @param[out] error
- * Pointer to the error structure.
- *
- * @return
- * 0 on success, a negative errno value otherwise and rte_errno is set.
- */
-static int
-flow_dv_create_action(struct rte_eth_dev *dev,
- const struct rte_flow_action *action,
- struct mlx5_flow *dev_flow,
- const struct rte_flow_attr *attr,
- struct rte_flow_error *error)
-{
- const struct rte_flow_action_queue *queue;
- const struct rte_flow_action_rss *rss;
- int actions_n = dev_flow->dv.actions_n;
- struct rte_flow *flow = dev_flow->flow;
- const struct rte_flow_action *action_ptr = action;
- const uint8_t *rss_key;
-
- switch (action->type) {
- case RTE_FLOW_ACTION_TYPE_VOID:
- break;
- case RTE_FLOW_ACTION_TYPE_FLAG:
- dev_flow->dv.actions[actions_n].type = MLX5DV_FLOW_ACTION_TAG;
- dev_flow->dv.actions[actions_n].tag_value =
- mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT);
- actions_n++;
- flow->actions |= MLX5_FLOW_ACTION_FLAG;
- break;
- case RTE_FLOW_ACTION_TYPE_MARK:
- dev_flow->dv.actions[actions_n].type = MLX5DV_FLOW_ACTION_TAG;
- dev_flow->dv.actions[actions_n].tag_value =
- mlx5_flow_mark_set
- (((const struct rte_flow_action_mark *)
- (action->conf))->id);
- flow->actions |= MLX5_FLOW_ACTION_MARK;
- actions_n++;
- break;
- case RTE_FLOW_ACTION_TYPE_DROP:
- dev_flow->dv.actions[actions_n].type = MLX5DV_FLOW_ACTION_DROP;
- flow->actions |= MLX5_FLOW_ACTION_DROP;
- break;
- case RTE_FLOW_ACTION_TYPE_QUEUE:
- queue = action->conf;
- flow->rss.queue_num = 1;
- (*flow->queue)[0] = queue->index;
- flow->actions |= MLX5_FLOW_ACTION_QUEUE;
- break;
- case RTE_FLOW_ACTION_TYPE_RSS:
- rss = action->conf;
- if (flow->queue)
- memcpy((*flow->queue), rss->queue,
- rss->queue_num * sizeof(uint16_t));
- flow->rss.queue_num = rss->queue_num;
- /* NULL RSS key indicates default RSS key. */
- rss_key = !rss->key ? rss_hash_default_key : rss->key;
- memcpy(flow->key, rss_key, MLX5_RSS_HASH_KEY_LEN);
- /* RSS type 0 indicates default RSS type ETH_RSS_IP. */
- flow->rss.types = !rss->types ? ETH_RSS_IP : rss->types;
- flow->rss.level = rss->level;
- /* Added to array only in apply since we need the QP */
- flow->actions |= MLX5_FLOW_ACTION_RSS;
- break;
- case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
- case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
- if (flow_dv_create_action_l2_encap(dev, action,
- dev_flow, error))
- return -rte_errno;
- dev_flow->dv.actions[actions_n].type =
- MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
- dev_flow->dv.actions[actions_n].action =
- dev_flow->dv.encap_decap->verbs_action;
- flow->actions |= action->type ==
- RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP ?
- MLX5_FLOW_ACTION_VXLAN_ENCAP :
- MLX5_FLOW_ACTION_NVGRE_ENCAP;
- actions_n++;
- break;
- case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
- case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP:
- if (flow_dv_create_action_l2_decap(dev, dev_flow, error))
- return -rte_errno;
- dev_flow->dv.actions[actions_n].type =
- MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
- dev_flow->dv.actions[actions_n].action =
- dev_flow->dv.encap_decap->verbs_action;
- flow->actions |= action->type ==
- RTE_FLOW_ACTION_TYPE_VXLAN_DECAP ?
- MLX5_FLOW_ACTION_VXLAN_DECAP :
- MLX5_FLOW_ACTION_NVGRE_DECAP;
- actions_n++;
- break;
- case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
- /* Handle encap action with preceding decap */
- if (flow->actions & MLX5_FLOW_ACTION_RAW_DECAP) {
- if (flow_dv_create_action_raw_encap(dev, action,
- dev_flow,
- attr, error))
- return -rte_errno;
- dev_flow->dv.actions[actions_n].type =
- MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
- dev_flow->dv.actions[actions_n].action =
- dev_flow->dv.encap_decap->verbs_action;
- } else {
- /* Handle encap action without preceding decap */
- if (flow_dv_create_action_l2_encap(dev, action,
- dev_flow, error))
- return -rte_errno;
- dev_flow->dv.actions[actions_n].type =
- MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
- dev_flow->dv.actions[actions_n].action =
- dev_flow->dv.encap_decap->verbs_action;
- }
- flow->actions |= MLX5_FLOW_ACTION_RAW_ENCAP;
- actions_n++;
- break;
- case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
- /* Check if this decap action is followed by encap. */
- for (; action_ptr->type != RTE_FLOW_ACTION_TYPE_END &&
- action_ptr->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP;
- action_ptr++) {
- }
- /* Handle decap action only if it isn't followed by encap */
- if (action_ptr->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP) {
- if (flow_dv_create_action_l2_decap(dev, dev_flow,
- error))
- return -rte_errno;
- dev_flow->dv.actions[actions_n].type =
- MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
- dev_flow->dv.actions[actions_n].action =
- dev_flow->dv.encap_decap->verbs_action;
- actions_n++;
- }
- /* If decap is followed by encap, handle it at encap case. */
- flow->actions |= MLX5_FLOW_ACTION_RAW_DECAP;
- break;
- default:
- break;
- }
- dev_flow->dv.actions_n = actions_n;
- return 0;
-}
-
static uint32_t matcher_zero[MLX5_ST_SZ_DW(fte_match_param)] = { 0 };
#define HEADER_IS_ZERO(match_criteria, headers) \
@@ -1989,34 +1743,260 @@ flow_dv_translate(struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
struct priv *priv = dev->data->dev_private;
+ struct rte_flow *flow = dev_flow->flow;
+ uint64_t item_flags = 0;
+ uint64_t action_flags = 0;
uint64_t priority = attr->priority;
struct mlx5_flow_dv_matcher matcher = {
.mask = {
.size = sizeof(matcher.mask.buf),
},
};
- void *match_value = dev_flow->dv.value.buf;
- int tunnel = 0;
+ int actions_n = 0;
if (priority == MLX5_FLOW_PRIO_RSVD)
priority = priv->config.flow_prio - 1;
for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
- tunnel = !!(dev_flow->layers & MLX5_FLOW_LAYER_TUNNEL);
- flow_dv_create_item(&matcher, match_value, items, dev_flow,
- tunnel);
+ int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
+ void *match_mask = matcher.mask.buf;
+ void *match_value = dev_flow->dv.value.buf;
+
+ switch (items->type) {
+ case RTE_FLOW_ITEM_TYPE_ETH:
+ flow_dv_translate_item_eth(match_mask, match_value,
+ items, tunnel);
+ matcher.priority = MLX5_PRIORITY_MAP_L2;
+ item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L2 :
+ MLX5_FLOW_LAYER_OUTER_L2;
+ break;
+ case RTE_FLOW_ITEM_TYPE_VLAN:
+ flow_dv_translate_item_vlan(match_mask, match_value,
+ items, tunnel);
+ matcher.priority = MLX5_PRIORITY_MAP_L2;
+ item_flags |= tunnel ? (MLX5_FLOW_LAYER_INNER_L2 |
+ MLX5_FLOW_LAYER_INNER_VLAN) :
+ (MLX5_FLOW_LAYER_OUTER_L2 |
+ MLX5_FLOW_LAYER_OUTER_VLAN);
+ break;
+ case RTE_FLOW_ITEM_TYPE_IPV4:
+ flow_dv_translate_item_ipv4(match_mask, match_value,
+ items, tunnel);
+ matcher.priority = MLX5_PRIORITY_MAP_L3;
+ dev_flow->dv.hash_fields |=
+ mlx5_flow_hashfields_adjust
+ (dev_flow, tunnel,
+ MLX5_IPV4_LAYER_TYPES,
+ MLX5_IPV4_IBV_RX_HASH);
+ item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
+ MLX5_FLOW_LAYER_OUTER_L3_IPV4;
+ break;
+ case RTE_FLOW_ITEM_TYPE_IPV6:
+ flow_dv_translate_item_ipv6(match_mask, match_value,
+ items, tunnel);
+ matcher.priority = MLX5_PRIORITY_MAP_L3;
+ dev_flow->dv.hash_fields |=
+ mlx5_flow_hashfields_adjust
+ (dev_flow, tunnel,
+ MLX5_IPV6_LAYER_TYPES,
+ MLX5_IPV6_IBV_RX_HASH);
+ item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 :
+ MLX5_FLOW_LAYER_OUTER_L3_IPV6;
+ break;
+ case RTE_FLOW_ITEM_TYPE_TCP:
+ flow_dv_translate_item_tcp(match_mask, match_value,
+ items, tunnel);
+ matcher.priority = MLX5_PRIORITY_MAP_L4;
+ dev_flow->dv.hash_fields |=
+ mlx5_flow_hashfields_adjust
+ (dev_flow, tunnel, ETH_RSS_TCP,
+ IBV_RX_HASH_SRC_PORT_TCP |
+ IBV_RX_HASH_DST_PORT_TCP);
+ item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP :
+ MLX5_FLOW_LAYER_OUTER_L4_TCP;
+ break;
+ case RTE_FLOW_ITEM_TYPE_UDP:
+ flow_dv_translate_item_udp(match_mask, match_value,
+ items, tunnel);
+ matcher.priority = MLX5_PRIORITY_MAP_L4;
+ dev_flow->verbs.hash_fields |=
+ mlx5_flow_hashfields_adjust
+ (dev_flow, tunnel, ETH_RSS_UDP,
+ IBV_RX_HASH_SRC_PORT_UDP |
+ IBV_RX_HASH_DST_PORT_UDP);
+ item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
+ MLX5_FLOW_LAYER_OUTER_L4_UDP;
+ break;
+ case RTE_FLOW_ITEM_TYPE_GRE:
+ flow_dv_translate_item_gre(match_mask, match_value,
+ items, tunnel);
+ item_flags |= MLX5_FLOW_LAYER_GRE;
+ break;
+ case RTE_FLOW_ITEM_TYPE_NVGRE:
+ flow_dv_translate_item_nvgre(match_mask, match_value,
+ items, tunnel);
+ item_flags |= MLX5_FLOW_LAYER_GRE;
+ break;
+ case RTE_FLOW_ITEM_TYPE_VXLAN:
+ flow_dv_translate_item_vxlan(match_mask, match_value,
+ items, tunnel);
+ item_flags |= MLX5_FLOW_LAYER_VXLAN;
+ break;
+ case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
+ flow_dv_translate_item_vxlan(match_mask, match_value,
+ items, tunnel);
+ item_flags |= MLX5_FLOW_LAYER_VXLAN_GPE;
+ break;
+ case RTE_FLOW_ITEM_TYPE_META:
+ flow_dv_translate_item_meta(match_mask, match_value,
+ items);
+ item_flags |= MLX5_FLOW_ITEM_METADATA;
+ break;
+ default:
+ break;
+ }
}
+ dev_flow->layers = item_flags;
+ /* Register matcher. */
matcher.crc = rte_raw_cksum((const void *)matcher.mask.buf,
- matcher.mask.size);
- if (priority == MLX5_FLOW_PRIO_RSVD)
- priority = priv->config.flow_prio - 1;
+ matcher.mask.size);
matcher.priority = mlx5_flow_adjust_priority(dev, priority,
matcher.priority);
matcher.egress = attr->egress;
if (flow_dv_matcher_register(dev, &matcher, dev_flow, error))
return -rte_errno;
- for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++)
- if (flow_dv_create_action(dev, actions, dev_flow, attr, error))
- return -rte_errno;
+ for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
+ const struct rte_flow_action_queue *queue;
+ const struct rte_flow_action_rss *rss;
+ const struct rte_flow_action *action = actions;
+ const uint8_t *rss_key;
+
+ switch (actions->type) {
+ case RTE_FLOW_ACTION_TYPE_VOID:
+ break;
+ case RTE_FLOW_ACTION_TYPE_FLAG:
+ dev_flow->dv.actions[actions_n].type =
+ MLX5DV_FLOW_ACTION_TAG;
+ dev_flow->dv.actions[actions_n].tag_value =
+ mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT);
+ actions_n++;
+ action_flags |= MLX5_FLOW_ACTION_FLAG;
+ break;
+ case RTE_FLOW_ACTION_TYPE_MARK:
+ dev_flow->dv.actions[actions_n].type =
+ MLX5DV_FLOW_ACTION_TAG;
+ dev_flow->dv.actions[actions_n].tag_value =
+ mlx5_flow_mark_set
+ (((const struct rte_flow_action_mark *)
+ (actions->conf))->id);
+ actions_n++;
+ action_flags |= MLX5_FLOW_ACTION_MARK;
+ break;
+ case RTE_FLOW_ACTION_TYPE_DROP:
+ dev_flow->dv.actions[actions_n].type =
+ MLX5DV_FLOW_ACTION_DROP;
+ action_flags |= MLX5_FLOW_ACTION_DROP;
+ break;
+ case RTE_FLOW_ACTION_TYPE_QUEUE:
+ queue = actions->conf;
+ flow->rss.queue_num = 1;
+ (*flow->queue)[0] = queue->index;
+ action_flags |= MLX5_FLOW_ACTION_QUEUE;
+ break;
+ case RTE_FLOW_ACTION_TYPE_RSS:
+ rss = actions->conf;
+ if (flow->queue)
+ memcpy((*flow->queue), rss->queue,
+ rss->queue_num * sizeof(uint16_t));
+ flow->rss.queue_num = rss->queue_num;
+ /* NULL RSS key indicates default RSS key. */
+ rss_key = !rss->key ? rss_hash_default_key : rss->key;
+ memcpy(flow->key, rss_key, MLX5_RSS_HASH_KEY_LEN);
+ /* RSS type 0 indicates default RSS type ETH_RSS_IP. */
+ flow->rss.types = !rss->types ? ETH_RSS_IP : rss->types;
+ flow->rss.level = rss->level;
+ action_flags |= MLX5_FLOW_ACTION_RSS;
+ break;
+ case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
+ case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
+ if (flow_dv_create_action_l2_encap(dev, actions,
+ dev_flow, error))
+ return -rte_errno;
+ dev_flow->dv.actions[actions_n].type =
+ MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
+ dev_flow->dv.actions[actions_n].action =
+ dev_flow->dv.encap_decap->verbs_action;
+ actions_n++;
+ action_flags |= actions->type ==
+ RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP ?
+ MLX5_FLOW_ACTION_VXLAN_ENCAP :
+ MLX5_FLOW_ACTION_NVGRE_ENCAP;
+ break;
+ case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
+ case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP:
+ if (flow_dv_create_action_l2_decap(dev, dev_flow,
+ error))
+ return -rte_errno;
+ dev_flow->dv.actions[actions_n].type =
+ MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
+ dev_flow->dv.actions[actions_n].action =
+ dev_flow->dv.encap_decap->verbs_action;
+ actions_n++;
+ action_flags |= actions->type ==
+ RTE_FLOW_ACTION_TYPE_VXLAN_DECAP ?
+ MLX5_FLOW_ACTION_VXLAN_DECAP :
+ MLX5_FLOW_ACTION_NVGRE_DECAP;
+ break;
+ case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+ /* Handle encap with preceding decap. */
+ if (action_flags & MLX5_FLOW_ACTION_RAW_DECAP) {
+ if (flow_dv_create_action_raw_encap
+ (dev, actions, dev_flow, attr, error))
+ return -rte_errno;
+ dev_flow->dv.actions[actions_n].type =
+ MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
+ dev_flow->dv.actions[actions_n].action =
+ dev_flow->dv.encap_decap->verbs_action;
+ } else {
+ /* Handle encap without preceding decap. */
+ if (flow_dv_create_action_l2_encap(dev, actions,
+ dev_flow,
+ error))
+ return -rte_errno;
+ dev_flow->dv.actions[actions_n].type =
+ MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
+ dev_flow->dv.actions[actions_n].action =
+ dev_flow->dv.encap_decap->verbs_action;
+ }
+ actions_n++;
+ action_flags |= MLX5_FLOW_ACTION_RAW_ENCAP;
+ break;
+ case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
+ /* Check if this decap is followed by encap. */
+ for (; action->type != RTE_FLOW_ACTION_TYPE_END &&
+ action->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP;
+ action++) {
+ }
+ /* Handle decap only if it isn't followed by encap. */
+ if (action->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP) {
+ if (flow_dv_create_action_l2_decap(dev,
+ dev_flow,
+ error))
+ return -rte_errno;
+ dev_flow->dv.actions[actions_n].type =
+ MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
+ dev_flow->dv.actions[actions_n].action =
+ dev_flow->dv.encap_decap->verbs_action;
+ actions_n++;
+ }
+ /* If decap is followed by encap, handle it at encap. */
+ action_flags |= MLX5_FLOW_ACTION_RAW_DECAP;
+ break;
+ default:
+ break;
+ }
+ }
+ dev_flow->dv.actions_n = actions_n;
+ flow->actions = action_flags;
return 0;
}
--
2.11.0
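The RAW_DECAP handling in the diff above scans ahead for a following RAW_ENCAP and translates the decap on its own only when none is found; otherwise the pair is handled later as a single raw-encap action. A simplified sketch of that lookahead (illustrative types, not the driver's code):

```c
#include <assert.h>

enum action_type { ACT_VOID, ACT_RAW_DECAP, ACT_RAW_ENCAP, ACT_END };

/* Lookahead used at RAW_DECAP: returns non-zero when the decap is
 * standalone, i.e. no RAW_ENCAP follows in the action list. */
static int
decap_is_standalone(const enum action_type *action)
{
	for (; *action != ACT_END && *action != ACT_RAW_ENCAP; action++)
		;
	return *action != ACT_RAW_ENCAP;
}
```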

* [dpdk-dev] [PATCH v2 3/3] net/mlx5: remove flags setting from flow preparation
2018-11-05 7:20 ` [dpdk-dev] [PATCH v2 0/3] net/mlx5: fix tunnel flow Yongseok Koh
2018-11-05 7:20 ` [dpdk-dev] [PATCH v2 1/3] net/mlx5: fix Verbs flow tunnel Yongseok Koh
2018-11-05 7:20 ` [dpdk-dev] [PATCH v2 2/3] net/mlx5: fix Direct " Yongseok Koh
@ 2018-11-05 7:20 ` Yongseok Koh
2018-11-05 7:32 ` Ori Kam
2018-11-05 8:09 ` [dpdk-dev] [PATCH v2 0/3] net/mlx5: fix tunnel flow Shahaf Shuler
3 siblings, 1 reply; 17+ messages in thread
From: Yongseok Koh @ 2018-11-05 7:20 UTC (permalink / raw)
To: Shahaf Shuler; +Cc: dev, Ori Kam, Yongseok Koh
Even though flow_drv_prepare() takes item_flags and action_flags to be
filled in, those are not used and will be overwritten by the parsing in
flow_drv_translate(). There's no reason to keep and fill the flags.
Appropriate notes are added to the documentation of flow_drv_prepare() and
flow_drv_translate().
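The resulting split of responsibilities can be sketched as follows (simplified item types and spec sizes, not the actual driver code): the prepare-side helpers only size the allocation, while translate owns the layer/action flags.

```c
#include <assert.h>
#include <stddef.h>

enum item_type { ITEM_ETH, ITEM_IPV4, ITEM_END };

/* Illustrative Verbs spec sizes. */
#define SPEC_ETH_SIZE	16
#define SPEC_IPV4_SIZE	40

/* Prepare-side helper: computes memory needs only; it no longer
 * reports detected layers, since translate re-parses the pattern. */
static size_t
get_items_size(const enum item_type *items)
{
	size_t size = 0;

	for (; *items != ITEM_END; items++) {
		switch (*items) {
		case ITEM_ETH:
			size += SPEC_ETH_SIZE;
			break;
		case ITEM_IPV4:
			size += SPEC_IPV4_SIZE;
			break;
		default:
			break;
		}
	}
	return size;
}
```

Dropping the out-parameters means prepare's signature shrinks to (attr, items, actions, error), matching the diff below.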
Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.c | 38 ++++++++++------------
drivers/net/mlx5/mlx5_flow.h | 3 +-
drivers/net/mlx5/mlx5_flow_dv.c | 6 ----
drivers/net/mlx5/mlx5_flow_tcf.c | 39 +++++++----------------
drivers/net/mlx5/mlx5_flow_verbs.c | 64 +++++---------------------------------
5 files changed, 37 insertions(+), 113 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index be2cc6b93f..3c2ac4b377 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1663,8 +1663,6 @@ static struct mlx5_flow *
flow_null_prepare(const struct rte_flow_attr *attr __rte_unused,
const struct rte_flow_item items[] __rte_unused,
const struct rte_flow_action actions[] __rte_unused,
- uint64_t *item_flags __rte_unused,
- uint64_t *action_flags __rte_unused,
struct rte_flow_error *error __rte_unused)
{
rte_errno = ENOTSUP;
@@ -1792,16 +1790,19 @@ flow_drv_validate(struct rte_eth_dev *dev,
* calculates the size of memory required for device flow, allocates the memory,
* initializes the device flow and returns the pointer.
*
+ * @note
+ * This function initializes device flow structure such as dv, tcf or verbs in
+ * struct mlx5_flow. However, it is caller's responsibility to initialize the
+ * rest. For example, adding returning device flow to flow->dev_flow list and
+ * setting backward reference to the flow should be done out of this function.
+ * layers field is not filled either.
+ *
* @param[in] attr
* Pointer to the flow attributes.
* @param[in] items
* Pointer to the list of items.
* @param[in] actions
* Pointer to the list of actions.
- * @param[out] item_flags
- * Pointer to bit mask of all items detected.
- * @param[out] action_flags
- * Pointer to bit mask of all actions detected.
* @param[out] error
* Pointer to the error structure.
*
@@ -1809,12 +1810,10 @@ flow_drv_validate(struct rte_eth_dev *dev,
* Pointer to device flow on success, otherwise NULL and rte_ernno is set.
*/
static inline struct mlx5_flow *
-flow_drv_prepare(struct rte_flow *flow,
+flow_drv_prepare(const struct rte_flow *flow,
const struct rte_flow_attr *attr,
const struct rte_flow_item items[],
const struct rte_flow_action actions[],
- uint64_t *item_flags,
- uint64_t *action_flags,
struct rte_flow_error *error)
{
const struct mlx5_flow_driver_ops *fops;
@@ -1822,8 +1821,7 @@ flow_drv_prepare(struct rte_flow *flow,
assert(type > MLX5_FLOW_TYPE_MIN && type < MLX5_FLOW_TYPE_MAX);
fops = flow_get_drv_ops(type);
- return fops->prepare(attr, items, actions, item_flags, action_flags,
- error);
+ return fops->prepare(attr, items, actions, error);
}
/**
@@ -1832,6 +1830,12 @@ flow_drv_prepare(struct rte_flow *flow,
* translates a generic flow into a driver flow. flow_drv_prepare() must
* precede.
*
+ * @note
+ * dev_flow->layers could be filled as a result of parsing during translation
+ * if needed by flow_drv_apply(). dev_flow->flow->actions can also be filled
+ * if necessary. As a flow can have multiple dev_flows by RSS flow expansion,
+ * flow->actions could be overwritten even though all the expanded dev_flows
+ * have the same actions.
*
* @param[in] dev
* Pointer to the rte dev structure.
@@ -1895,7 +1899,7 @@ flow_drv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
* Flow driver remove API. This abstracts calling driver specific functions.
* Parent flow (rte_flow) should have driver type (drv_type). It removes a flow
* on device. All the resources of the flow should be freed by calling
- * flow_dv_destroy().
+ * flow_drv_destroy().
*
* @param[in] dev
* Pointer to Ethernet device.
@@ -2026,8 +2030,6 @@ flow_list_create(struct rte_eth_dev *dev, struct mlx5_flows *list,
{
struct rte_flow *flow = NULL;
struct mlx5_flow *dev_flow;
- uint64_t action_flags = 0;
- uint64_t item_flags = 0;
const struct rte_flow_action_rss *rss;
union {
struct rte_flow_expand_rss buf;
@@ -2070,16 +2072,10 @@ flow_list_create(struct rte_eth_dev *dev, struct mlx5_flows *list,
}
for (i = 0; i < buf->entries; ++i) {
dev_flow = flow_drv_prepare(flow, attr, buf->entry[i].pattern,
- actions, &item_flags, &action_flags,
- error);
+ actions, error);
if (!dev_flow)
goto error;
dev_flow->flow = flow;
- dev_flow->layers = item_flags;
- /* Store actions once as expanded flows have same actions. */
- if (i == 0)
- flow->actions = action_flags;
- assert(flow->actions == action_flags);
LIST_INSERT_HEAD(&flow->dev_flows, dev_flow, next);
ret = flow_drv_translate(dev, dev_flow, attr,
buf->entry[i].pattern,
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 2a3ce44b0b..51ab47fe44 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -302,8 +302,7 @@ typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev,
struct rte_flow_error *error);
typedef struct mlx5_flow *(*mlx5_flow_prepare_t)
(const struct rte_flow_attr *attr, const struct rte_flow_item items[],
- const struct rte_flow_action actions[], uint64_t *item_flags,
- uint64_t *action_flags, struct rte_flow_error *error);
+ const struct rte_flow_action actions[], struct rte_flow_error *error);
typedef int (*mlx5_flow_translate_t)(struct rte_eth_dev *dev,
struct mlx5_flow *dev_flow,
const struct rte_flow_attr *attr,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 8b4d5956ba..7909615360 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1014,10 +1014,6 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
* Pointer to the list of items.
* @param[in] actions
* Pointer to the list of actions.
- * @param[out] item_flags
- * Pointer to bit mask of all items detected.
- * @param[out] action_flags
- * Pointer to bit mask of all actions detected.
* @param[out] error
* Pointer to the error structure.
*
@@ -1029,8 +1025,6 @@ static struct mlx5_flow *
flow_dv_prepare(const struct rte_flow_attr *attr __rte_unused,
const struct rte_flow_item items[] __rte_unused,
const struct rte_flow_action actions[] __rte_unused,
- uint64_t *item_flags __rte_unused,
- uint64_t *action_flags __rte_unused,
struct rte_flow_error *error)
{
uint32_t size = sizeof(struct mlx5_flow);
diff --git a/drivers/net/mlx5/mlx5_flow_tcf.c b/drivers/net/mlx5/mlx5_flow_tcf.c
index ee614b3f1d..fb817b2311 100644
--- a/drivers/net/mlx5/mlx5_flow_tcf.c
+++ b/drivers/net/mlx5/mlx5_flow_tcf.c
@@ -2370,24 +2370,21 @@ flow_tcf_validate(struct rte_eth_dev *dev,
}
/**
- * Calculate maximum size of memory for flow items of Linux TC flower and
- * extract specified items.
+ * Calculate maximum size of memory for flow items of Linux TC flower.
*
+ * @param[in] attr
+ * Pointer to the flow attributes.
* @param[in] items
* Pointer to the list of items.
- * @param[out] item_flags
- * Pointer to the detected items.
*
* @return
* Maximum size of memory for items.
*/
static int
-flow_tcf_get_items_and_size(const struct rte_flow_attr *attr,
- const struct rte_flow_item items[],
- uint64_t *item_flags)
+flow_tcf_get_items_size(const struct rte_flow_attr *attr,
+ const struct rte_flow_item items[])
{
int size = 0;
- uint64_t flags = 0;
size += SZ_NLATTR_STRZ_OF("flower") +
SZ_NLATTR_NEST + /* TCA_OPTIONS. */
@@ -2404,7 +2401,6 @@ flow_tcf_get_items_and_size(const struct rte_flow_attr *attr,
size += SZ_NLATTR_TYPE_OF(uint16_t) + /* Ether type. */
SZ_NLATTR_DATA_OF(ETHER_ADDR_LEN) * 4;
/* dst/src MAC addr and mask. */
- flags |= MLX5_FLOW_LAYER_OUTER_L2;
break;
case RTE_FLOW_ITEM_TYPE_VLAN:
size += SZ_NLATTR_TYPE_OF(uint16_t) + /* Ether type. */
@@ -2412,37 +2408,31 @@ flow_tcf_get_items_and_size(const struct rte_flow_attr *attr,
/* VLAN Ether type. */
SZ_NLATTR_TYPE_OF(uint8_t) + /* VLAN prio. */
SZ_NLATTR_TYPE_OF(uint16_t); /* VLAN ID. */
- flags |= MLX5_FLOW_LAYER_OUTER_VLAN;
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
size += SZ_NLATTR_TYPE_OF(uint16_t) + /* Ether type. */
SZ_NLATTR_TYPE_OF(uint8_t) + /* IP proto. */
SZ_NLATTR_TYPE_OF(uint32_t) * 4;
/* dst/src IP addr and mask. */
- flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV4;
break;
case RTE_FLOW_ITEM_TYPE_IPV6:
size += SZ_NLATTR_TYPE_OF(uint16_t) + /* Ether type. */
SZ_NLATTR_TYPE_OF(uint8_t) + /* IP proto. */
SZ_NLATTR_DATA_OF(IPV6_ADDR_LEN) * 4;
/* dst/src IP addr and mask. */
- flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV6;
break;
case RTE_FLOW_ITEM_TYPE_UDP:
size += SZ_NLATTR_TYPE_OF(uint8_t) + /* IP proto. */
SZ_NLATTR_TYPE_OF(uint16_t) * 4;
/* dst/src port and mask. */
- flags |= MLX5_FLOW_LAYER_OUTER_L4_UDP;
break;
case RTE_FLOW_ITEM_TYPE_TCP:
size += SZ_NLATTR_TYPE_OF(uint8_t) + /* IP proto. */
SZ_NLATTR_TYPE_OF(uint16_t) * 4;
/* dst/src port and mask. */
- flags |= MLX5_FLOW_LAYER_OUTER_L4_TCP;
break;
case RTE_FLOW_ITEM_TYPE_VXLAN:
size += SZ_NLATTR_TYPE_OF(uint32_t);
- flags |= MLX5_FLOW_LAYER_VXLAN;
break;
default:
DRV_LOG(WARNING,
@@ -2452,7 +2442,6 @@ flow_tcf_get_items_and_size(const struct rte_flow_attr *attr,
break;
}
}
- *item_flags = flags;
return size;
}
@@ -2668,10 +2657,6 @@ flow_tcf_nl_brand(struct nlmsghdr *nlh, uint32_t handle)
* Pointer to the list of items.
* @param[in] actions
* Pointer to the list of actions.
- * @param[out] item_flags
- * Pointer to bit mask of all items detected.
- * @param[out] action_flags
- * Pointer to bit mask of all actions detected.
* @param[out] error
* Pointer to the error structure.
*
@@ -2683,7 +2668,6 @@ static struct mlx5_flow *
flow_tcf_prepare(const struct rte_flow_attr *attr,
const struct rte_flow_item items[],
const struct rte_flow_action actions[],
- uint64_t *item_flags, uint64_t *action_flags,
struct rte_flow_error *error)
{
size_t size = RTE_ALIGN_CEIL
@@ -2692,12 +2676,13 @@ flow_tcf_prepare(const struct rte_flow_attr *attr,
MNL_ALIGN(sizeof(struct nlmsghdr)) +
MNL_ALIGN(sizeof(struct tcmsg));
struct mlx5_flow *dev_flow;
+ uint64_t action_flags = 0;
struct nlmsghdr *nlh;
struct tcmsg *tcm;
uint8_t *sp, *tun = NULL;
- size += flow_tcf_get_items_and_size(attr, items, item_flags);
- size += flow_tcf_get_actions_and_size(actions, action_flags);
+ size += flow_tcf_get_items_size(attr, items);
+ size += flow_tcf_get_actions_and_size(actions, &action_flags);
dev_flow = rte_zmalloc(__func__, size, MNL_ALIGNTO);
if (!dev_flow) {
rte_flow_error_set(error, ENOMEM,
@@ -2706,7 +2691,7 @@ flow_tcf_prepare(const struct rte_flow_attr *attr,
return NULL;
}
sp = (uint8_t *)(dev_flow + 1);
- if (*action_flags & MLX5_FLOW_ACTION_VXLAN_ENCAP) {
+ if (action_flags & MLX5_FLOW_ACTION_VXLAN_ENCAP) {
sp = RTE_PTR_ALIGN
(sp, alignof(struct flow_tcf_tunnel_hdr));
tun = sp;
@@ -2718,7 +2703,7 @@ flow_tcf_prepare(const struct rte_flow_attr *attr,
(sizeof(struct flow_tcf_vxlan_encap),
MNL_ALIGNTO);
#endif
- } else if (*action_flags & MLX5_FLOW_ACTION_VXLAN_DECAP) {
+ } else if (action_flags & MLX5_FLOW_ACTION_VXLAN_DECAP) {
sp = RTE_PTR_ALIGN
(sp, alignof(struct flow_tcf_tunnel_hdr));
tun = sp;
@@ -2747,9 +2732,9 @@ flow_tcf_prepare(const struct rte_flow_attr *attr,
.tcm = tcm,
},
};
- if (*action_flags & MLX5_FLOW_ACTION_VXLAN_DECAP)
+ if (action_flags & MLX5_FLOW_ACTION_VXLAN_DECAP)
dev_flow->tcf.tunnel->type = FLOW_TCF_TUNACT_VXLAN_DECAP;
- else if (*action_flags & MLX5_FLOW_ACTION_VXLAN_ENCAP)
+ else if (action_flags & MLX5_FLOW_ACTION_VXLAN_ENCAP)
dev_flow->tcf.tunnel->type = FLOW_TCF_TUNACT_VXLAN_ENCAP;
/*
* Generate a reasonably unique handle based on the address of the
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 43fcd0d29e..699cc88c8c 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1209,23 +1209,18 @@ flow_verbs_validate(struct rte_eth_dev *dev,
/**
* Calculate the required bytes that are needed for the action part of the verbs
- * flow, in addtion returns bit-fields with all the detected action, in order to
- * avoid another interation over the actions.
+ * flow.
*
* @param[in] actions
* Pointer to the list of actions.
- * @param[out] action_flags
- * Pointer to the detected actions.
*
* @return
* The size of the memory needed for all actions.
*/
static int
-flow_verbs_get_actions_and_size(const struct rte_flow_action actions[],
- uint64_t *action_flags)
+flow_verbs_get_actions_size(const struct rte_flow_action actions[])
{
int size = 0;
- uint64_t detected_actions = 0;
for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
switch (actions->type) {
@@ -1233,128 +1228,89 @@ flow_verbs_get_actions_and_size(const struct rte_flow_action actions[],
break;
case RTE_FLOW_ACTION_TYPE_FLAG:
size += sizeof(struct ibv_flow_spec_action_tag);
- detected_actions |= MLX5_FLOW_ACTION_FLAG;
break;
case RTE_FLOW_ACTION_TYPE_MARK:
size += sizeof(struct ibv_flow_spec_action_tag);
- detected_actions |= MLX5_FLOW_ACTION_MARK;
break;
case RTE_FLOW_ACTION_TYPE_DROP:
size += sizeof(struct ibv_flow_spec_action_drop);
- detected_actions |= MLX5_FLOW_ACTION_DROP;
break;
case RTE_FLOW_ACTION_TYPE_QUEUE:
- detected_actions |= MLX5_FLOW_ACTION_QUEUE;
break;
case RTE_FLOW_ACTION_TYPE_RSS:
- detected_actions |= MLX5_FLOW_ACTION_RSS;
break;
case RTE_FLOW_ACTION_TYPE_COUNT:
#if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42) || \
defined(HAVE_IBV_DEVICE_COUNTERS_SET_V45)
size += sizeof(struct ibv_flow_spec_counter_action);
#endif
- detected_actions |= MLX5_FLOW_ACTION_COUNT;
break;
default:
break;
}
}
- *action_flags = detected_actions;
return size;
}
/**
* Calculate the required bytes that are needed for the item part of the verbs
- * flow, in addtion returns bit-fields with all the detected action, in order to
- * avoid another interation over the actions.
+ * flow.
*
- * @param[in] actions
+ * @param[in] items
* Pointer to the list of items.
- * @param[in, out] item_flags
- * Pointer to the detected items.
*
* @return
* The size of the memory needed for all items.
*/
static int
-flow_verbs_get_items_and_size(const struct rte_flow_item items[],
- uint64_t *item_flags)
+flow_verbs_get_items_size(const struct rte_flow_item items[])
{
int size = 0;
- uint64_t detected_items = 0;
for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
- int tunnel = !!(detected_items & MLX5_FLOW_LAYER_TUNNEL);
-
switch (items->type) {
case RTE_FLOW_ITEM_TYPE_VOID:
break;
case RTE_FLOW_ITEM_TYPE_ETH:
size += sizeof(struct ibv_flow_spec_eth);
- detected_items |= tunnel ? MLX5_FLOW_LAYER_INNER_L2 :
- MLX5_FLOW_LAYER_OUTER_L2;
break;
case RTE_FLOW_ITEM_TYPE_VLAN:
size += sizeof(struct ibv_flow_spec_eth);
- detected_items |=
- tunnel ? (MLX5_FLOW_LAYER_INNER_L2 |
- MLX5_FLOW_LAYER_INNER_VLAN) :
- (MLX5_FLOW_LAYER_OUTER_L2 |
- MLX5_FLOW_LAYER_OUTER_VLAN);
break;
case RTE_FLOW_ITEM_TYPE_IPV4:
size += sizeof(struct ibv_flow_spec_ipv4_ext);
- detected_items |= tunnel ?
- MLX5_FLOW_LAYER_INNER_L3_IPV4 :
- MLX5_FLOW_LAYER_OUTER_L3_IPV4;
break;
case RTE_FLOW_ITEM_TYPE_IPV6:
size += sizeof(struct ibv_flow_spec_ipv6);
- detected_items |= tunnel ?
- MLX5_FLOW_LAYER_INNER_L3_IPV6 :
- MLX5_FLOW_LAYER_OUTER_L3_IPV6;
break;
case RTE_FLOW_ITEM_TYPE_UDP:
size += sizeof(struct ibv_flow_spec_tcp_udp);
- detected_items |= tunnel ?
- MLX5_FLOW_LAYER_INNER_L4_UDP :
- MLX5_FLOW_LAYER_OUTER_L4_UDP;
break;
case RTE_FLOW_ITEM_TYPE_TCP:
size += sizeof(struct ibv_flow_spec_tcp_udp);
- detected_items |= tunnel ?
- MLX5_FLOW_LAYER_INNER_L4_TCP :
- MLX5_FLOW_LAYER_OUTER_L4_TCP;
break;
case RTE_FLOW_ITEM_TYPE_VXLAN:
size += sizeof(struct ibv_flow_spec_tunnel);
- detected_items |= MLX5_FLOW_LAYER_VXLAN;
break;
case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
size += sizeof(struct ibv_flow_spec_tunnel);
- detected_items |= MLX5_FLOW_LAYER_VXLAN_GPE;
break;
#ifdef HAVE_IBV_DEVICE_MPLS_SUPPORT
case RTE_FLOW_ITEM_TYPE_GRE:
size += sizeof(struct ibv_flow_spec_gre);
- detected_items |= MLX5_FLOW_LAYER_GRE;
break;
case RTE_FLOW_ITEM_TYPE_MPLS:
size += sizeof(struct ibv_flow_spec_mpls);
- detected_items |= MLX5_FLOW_LAYER_MPLS;
break;
#else
case RTE_FLOW_ITEM_TYPE_GRE:
size += sizeof(struct ibv_flow_spec_tunnel);
- detected_items |= MLX5_FLOW_LAYER_TUNNEL;
break;
#endif
default:
break;
}
}
- *item_flags = detected_items;
return size;
}
@@ -1369,10 +1325,6 @@ flow_verbs_get_items_and_size(const struct rte_flow_item items[],
* Pointer to the list of items.
* @param[in] actions
* Pointer to the list of actions.
- * @param[out] item_flags
- * Pointer to bit mask of all items detected.
- * @param[out] action_flags
- * Pointer to bit mask of all actions detected.
* @param[out] error
* Pointer to the error structure.
*
@@ -1384,15 +1336,13 @@ static struct mlx5_flow *
flow_verbs_prepare(const struct rte_flow_attr *attr __rte_unused,
const struct rte_flow_item items[],
const struct rte_flow_action actions[],
- uint64_t *item_flags,
- uint64_t *action_flags,
struct rte_flow_error *error)
{
uint32_t size = sizeof(struct mlx5_flow) + sizeof(struct ibv_flow_attr);
struct mlx5_flow *flow;
- size += flow_verbs_get_actions_and_size(actions, action_flags);
- size += flow_verbs_get_items_and_size(items, item_flags);
+ size += flow_verbs_get_actions_size(actions);
+ size += flow_verbs_get_items_size(items);
flow = rte_calloc(__func__, 1, size, 0);
if (!flow) {
rte_flow_error_set(error, ENOMEM,
--
2.11.0
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [dpdk-dev] [PATCH v2 2/3] net/mlx5: fix Direct Verbs flow tunnel
2018-11-05 7:20 ` [dpdk-dev] [PATCH v2 2/3] net/mlx5: fix Direct " Yongseok Koh
@ 2018-11-05 7:31 ` Ori Kam
0 siblings, 0 replies; 17+ messages in thread
From: Ori Kam @ 2018-11-05 7:31 UTC (permalink / raw)
To: Yongseok Koh, Shahaf Shuler; +Cc: dev
> -----Original Message-----
> From: Yongseok Koh
> Sent: Monday, November 5, 2018 9:21 AM
> To: Shahaf Shuler <shahafs@mellanox.com>
> Cc: dev@dpdk.org; Ori Kam <orika@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>
> Subject: [PATCH v2 2/3] net/mlx5: fix Direct Verbs flow tunnel
>
> 1) Fix layer parsing
> In translation of tunneled flows, dev_flow->layers must not be used to
> check tunneled layer as it contains all the layers parsed from
> flow_drv_prepare(). Checking tunneled layer is needed to distinguish
> between outer and inner item. This should be based on dynamic parsing. With
> dev_flow->layers on a tunneled flow, items will always be interpreted as
> inner because dev_flow->layers already has all the items. Dynamic parsing
> (item_flags) is added as no such parsing existed before.
>
> 2) Refactoring code
> - flow_dv_create_item() and flow_dv_create_action() are merged into
> flow_dv_translate() for consistency with Verbs and *_validate().
>
> Fixes: 246636411536 ("net/mlx5: fix flow tunnel handling")
> Fixes: d02cb0691299 ("net/mlx5: add Direct Verbs translate actions")
> Fixes: fc2c498ccb94 ("net/mlx5: add Direct Verbs translate items")
> Cc: orika@mellanox.com
>
> Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
> ---
> drivers/net/mlx5/mlx5_flow_dv.c | 494 +++++++++++++++++++---------------------
> 1 file changed, 237 insertions(+), 257 deletions(-)
>
> diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
> index 1d5b6bf60a..8b4d5956ba 100644
> --- a/drivers/net/mlx5/mlx5_flow_dv.c
> +++ b/drivers/net/mlx5/mlx5_flow_dv.c
> @@ -1602,252 +1602,6 @@ flow_dv_translate_item_meta(void *matcher, void *key,
> }
> }
>
> -/**
> - * Update the matcher and the value based the selected item.
> - *
> - * @param[in, out] matcher
> - * Flow matcher.
> - * @param[in, out] key
> - * Flow matcher value.
> - * @param[in] item
> - * Flow pattern to translate.
> - * @param[in, out] dev_flow
> - * Pointer to the mlx5_flow.
> - * @param[in] inner
> - * Item is inner pattern.
> - */
> -static void
> -flow_dv_create_item(void *matcher, void *key,
> - const struct rte_flow_item *item,
> - struct mlx5_flow *dev_flow,
> - int inner)
> -{
> - struct mlx5_flow_dv_matcher *tmatcher = matcher;
> -
> - switch (item->type) {
> - case RTE_FLOW_ITEM_TYPE_ETH:
> - flow_dv_translate_item_eth(tmatcher->mask.buf, key, item,
> - inner);
> - tmatcher->priority = MLX5_PRIORITY_MAP_L2;
> - break;
> - case RTE_FLOW_ITEM_TYPE_VLAN:
> - flow_dv_translate_item_vlan(tmatcher->mask.buf, key, item,
> - inner);
> - break;
> - case RTE_FLOW_ITEM_TYPE_IPV4:
> - flow_dv_translate_item_ipv4(tmatcher->mask.buf, key, item,
> - inner);
> - tmatcher->priority = MLX5_PRIORITY_MAP_L3;
> - dev_flow->dv.hash_fields |=
> - mlx5_flow_hashfields_adjust(dev_flow, inner,
> - MLX5_IPV4_LAYER_TYPES,
> - MLX5_IPV4_IBV_RX_HASH);
> - break;
> - case RTE_FLOW_ITEM_TYPE_IPV6:
> - flow_dv_translate_item_ipv6(tmatcher->mask.buf, key, item,
> - inner);
> - tmatcher->priority = MLX5_PRIORITY_MAP_L3;
> - dev_flow->dv.hash_fields |=
> - mlx5_flow_hashfields_adjust(dev_flow, inner,
> - MLX5_IPV6_LAYER_TYPES,
> - MLX5_IPV6_IBV_RX_HASH);
> - break;
> - case RTE_FLOW_ITEM_TYPE_TCP:
> - flow_dv_translate_item_tcp(tmatcher->mask.buf, key, item,
> - inner);
> - tmatcher->priority = MLX5_PRIORITY_MAP_L4;
> - dev_flow->dv.hash_fields |=
> - mlx5_flow_hashfields_adjust(dev_flow, inner,
> - ETH_RSS_TCP,
> - (IBV_RX_HASH_SRC_PORT_TCP |
> - IBV_RX_HASH_DST_PORT_TCP));
> - break;
> - case RTE_FLOW_ITEM_TYPE_UDP:
> - flow_dv_translate_item_udp(tmatcher->mask.buf, key, item,
> - inner);
> - tmatcher->priority = MLX5_PRIORITY_MAP_L4;
> - dev_flow->verbs.hash_fields |=
> - mlx5_flow_hashfields_adjust(dev_flow, inner,
> - ETH_RSS_UDP,
> - (IBV_RX_HASH_SRC_PORT_UDP |
> - IBV_RX_HASH_DST_PORT_UDP));
> - break;
> - case RTE_FLOW_ITEM_TYPE_GRE:
> - flow_dv_translate_item_gre(tmatcher->mask.buf, key, item,
> - inner);
> - break;
> - case RTE_FLOW_ITEM_TYPE_NVGRE:
> - flow_dv_translate_item_nvgre(tmatcher->mask.buf, key, item,
> - inner);
> - break;
> - case RTE_FLOW_ITEM_TYPE_VXLAN:
> - case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
> - flow_dv_translate_item_vxlan(tmatcher->mask.buf, key, item,
> - inner);
> - break;
> - case RTE_FLOW_ITEM_TYPE_META:
> - flow_dv_translate_item_meta(tmatcher->mask.buf, key, item);
> - break;
> - default:
> - break;
> - }
> -}
> -
> -/**
> - * Store the requested actions in an array.
> - *
> - * @param[in] dev
> - * Pointer to rte_eth_dev structure.
> - * @param[in] action
> - * Flow action to translate.
> - * @param[in, out] dev_flow
> - * Pointer to the mlx5_flow.
> - * @param[in] attr
> - * Pointer to the flow attributes.
> - * @param[out] error
> - * Pointer to the error structure.
> - *
> - * @return
> - * 0 on success, a negative errno value otherwise and rte_errno is set.
> - */
> -static int
> -flow_dv_create_action(struct rte_eth_dev *dev,
> - const struct rte_flow_action *action,
> - struct mlx5_flow *dev_flow,
> - const struct rte_flow_attr *attr,
> - struct rte_flow_error *error)
> -{
> - const struct rte_flow_action_queue *queue;
> - const struct rte_flow_action_rss *rss;
> - int actions_n = dev_flow->dv.actions_n;
> - struct rte_flow *flow = dev_flow->flow;
> - const struct rte_flow_action *action_ptr = action;
> - const uint8_t *rss_key;
> -
> - switch (action->type) {
> - case RTE_FLOW_ACTION_TYPE_VOID:
> - break;
> - case RTE_FLOW_ACTION_TYPE_FLAG:
> - dev_flow->dv.actions[actions_n].type = MLX5DV_FLOW_ACTION_TAG;
> - dev_flow->dv.actions[actions_n].tag_value =
> - mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT);
> - actions_n++;
> - flow->actions |= MLX5_FLOW_ACTION_FLAG;
> - break;
> - case RTE_FLOW_ACTION_TYPE_MARK:
> - dev_flow->dv.actions[actions_n].type = MLX5DV_FLOW_ACTION_TAG;
> - dev_flow->dv.actions[actions_n].tag_value =
> - mlx5_flow_mark_set
> - (((const struct rte_flow_action_mark *)
> - (action->conf))->id);
> - flow->actions |= MLX5_FLOW_ACTION_MARK;
> - actions_n++;
> - break;
> - case RTE_FLOW_ACTION_TYPE_DROP:
> - dev_flow->dv.actions[actions_n].type = MLX5DV_FLOW_ACTION_DROP;
> - flow->actions |= MLX5_FLOW_ACTION_DROP;
> - break;
> - case RTE_FLOW_ACTION_TYPE_QUEUE:
> - queue = action->conf;
> - flow->rss.queue_num = 1;
> - (*flow->queue)[0] = queue->index;
> - flow->actions |= MLX5_FLOW_ACTION_QUEUE;
> - break;
> - case RTE_FLOW_ACTION_TYPE_RSS:
> - rss = action->conf;
> - if (flow->queue)
> - memcpy((*flow->queue), rss->queue,
> - rss->queue_num * sizeof(uint16_t));
> - flow->rss.queue_num = rss->queue_num;
> - /* NULL RSS key indicates default RSS key. */
> - rss_key = !rss->key ? rss_hash_default_key : rss->key;
> - memcpy(flow->key, rss_key, MLX5_RSS_HASH_KEY_LEN);
> - /* RSS type 0 indicates default RSS type ETH_RSS_IP. */
> - flow->rss.types = !rss->types ? ETH_RSS_IP : rss->types;
> - flow->rss.level = rss->level;
> - /* Added to array only in apply since we need the QP */
> - flow->actions |= MLX5_FLOW_ACTION_RSS;
> - break;
> - case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
> - case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
> - if (flow_dv_create_action_l2_encap(dev, action,
> - dev_flow, error))
> - return -rte_errno;
> - dev_flow->dv.actions[actions_n].type =
> - MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
> - dev_flow->dv.actions[actions_n].action =
> - dev_flow->dv.encap_decap->verbs_action;
> - flow->actions |= action->type ==
> - RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP ?
> - MLX5_FLOW_ACTION_VXLAN_ENCAP :
> - MLX5_FLOW_ACTION_NVGRE_ENCAP;
> - actions_n++;
> - break;
> - case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
> - case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP:
> - if (flow_dv_create_action_l2_decap(dev, dev_flow, error))
> - return -rte_errno;
> - dev_flow->dv.actions[actions_n].type =
> - MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
> - dev_flow->dv.actions[actions_n].action =
> - dev_flow->dv.encap_decap->verbs_action;
> - flow->actions |= action->type ==
> - RTE_FLOW_ACTION_TYPE_VXLAN_DECAP ?
> - MLX5_FLOW_ACTION_VXLAN_DECAP :
> - MLX5_FLOW_ACTION_NVGRE_DECAP;
> - actions_n++;
> - break;
> - case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
> - /* Handle encap action with preceding decap */
> - if (flow->actions & MLX5_FLOW_ACTION_RAW_DECAP) {
> - if (flow_dv_create_action_raw_encap(dev, action,
> - dev_flow,
> - attr, error))
> - return -rte_errno;
> - dev_flow->dv.actions[actions_n].type =
> - MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
> - dev_flow->dv.actions[actions_n].action =
> - dev_flow->dv.encap_decap->verbs_action;
> - } else {
> - /* Handle encap action without preceding decap */
> - if (flow_dv_create_action_l2_encap(dev, action,
> - dev_flow, error))
> - return -rte_errno;
> - dev_flow->dv.actions[actions_n].type =
> - MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
> - dev_flow->dv.actions[actions_n].action =
> - dev_flow->dv.encap_decap->verbs_action;
> - }
> - flow->actions |= MLX5_FLOW_ACTION_RAW_ENCAP;
> - actions_n++;
> - break;
> - case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
> - /* Check if this decap action is followed by encap. */
> - for (; action_ptr->type != RTE_FLOW_ACTION_TYPE_END &&
> - action_ptr->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP;
> - action_ptr++) {
> - }
> - /* Handle decap action only if it isn't followed by encap */
> - if (action_ptr->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP) {
> - if (flow_dv_create_action_l2_decap(dev, dev_flow,
> - error))
> - return -rte_errno;
> - dev_flow->dv.actions[actions_n].type =
> - MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
> - dev_flow->dv.actions[actions_n].action =
> - dev_flow->dv.encap_decap->verbs_action;
> - actions_n++;
> - }
> - /* If decap is followed by encap, handle it at encap case. */
> - flow->actions |= MLX5_FLOW_ACTION_RAW_DECAP;
> - break;
> - default:
> - break;
> - }
> - dev_flow->dv.actions_n = actions_n;
> - return 0;
> -}
> -
> static uint32_t matcher_zero[MLX5_ST_SZ_DW(fte_match_param)] = { 0 };
>
> #define HEADER_IS_ZERO(match_criteria, headers) \
> @@ -1989,34 +1743,260 @@ flow_dv_translate(struct rte_eth_dev *dev,
> struct rte_flow_error *error)
> {
> struct priv *priv = dev->data->dev_private;
> + struct rte_flow *flow = dev_flow->flow;
> + uint64_t item_flags = 0;
> + uint64_t action_flags = 0;
> uint64_t priority = attr->priority;
> struct mlx5_flow_dv_matcher matcher = {
> .mask = {
> .size = sizeof(matcher.mask.buf),
> },
> };
> - void *match_value = dev_flow->dv.value.buf;
> - int tunnel = 0;
> + int actions_n = 0;
>
> if (priority == MLX5_FLOW_PRIO_RSVD)
> priority = priv->config.flow_prio - 1;
> for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
> - tunnel = !!(dev_flow->layers & MLX5_FLOW_LAYER_TUNNEL);
> - flow_dv_create_item(&matcher, match_value, items, dev_flow,
> - tunnel);
> + int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
> + void *match_mask = matcher.mask.buf;
> + void *match_value = dev_flow->dv.value.buf;
> +
> + switch (items->type) {
> + case RTE_FLOW_ITEM_TYPE_ETH:
> + flow_dv_translate_item_eth(match_mask, match_value,
> + items, tunnel);
> + matcher.priority = MLX5_PRIORITY_MAP_L2;
> + item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L2 :
> + MLX5_FLOW_LAYER_OUTER_L2;
> + break;
> + case RTE_FLOW_ITEM_TYPE_VLAN:
> + flow_dv_translate_item_vlan(match_mask, match_value,
> + items, tunnel);
> + matcher.priority = MLX5_PRIORITY_MAP_L2;
> + item_flags |= tunnel ? (MLX5_FLOW_LAYER_INNER_L2 |
> + MLX5_FLOW_LAYER_INNER_VLAN) :
> + (MLX5_FLOW_LAYER_OUTER_L2 |
> + MLX5_FLOW_LAYER_OUTER_VLAN);
> + break;
> + case RTE_FLOW_ITEM_TYPE_IPV4:
> + flow_dv_translate_item_ipv4(match_mask, match_value,
> + items, tunnel);
> + matcher.priority = MLX5_PRIORITY_MAP_L3;
> + dev_flow->dv.hash_fields |=
> + mlx5_flow_hashfields_adjust
> + (dev_flow, tunnel,
> + MLX5_IPV4_LAYER_TYPES,
> + MLX5_IPV4_IBV_RX_HASH);
> + item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
> + MLX5_FLOW_LAYER_OUTER_L3_IPV4;
> + break;
> + case RTE_FLOW_ITEM_TYPE_IPV6:
> + flow_dv_translate_item_ipv6(match_mask, match_value,
> + items, tunnel);
> + matcher.priority = MLX5_PRIORITY_MAP_L3;
> + dev_flow->dv.hash_fields |=
> + mlx5_flow_hashfields_adjust
> + (dev_flow, tunnel,
> + MLX5_IPV6_LAYER_TYPES,
> + MLX5_IPV6_IBV_RX_HASH);
> + item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 :
> + MLX5_FLOW_LAYER_OUTER_L3_IPV6;
> + break;
> + case RTE_FLOW_ITEM_TYPE_TCP:
> + flow_dv_translate_item_tcp(match_mask, match_value,
> + items, tunnel);
> + matcher.priority = MLX5_PRIORITY_MAP_L4;
> + dev_flow->dv.hash_fields |=
> + mlx5_flow_hashfields_adjust
> + (dev_flow, tunnel, ETH_RSS_TCP,
> + IBV_RX_HASH_SRC_PORT_TCP |
> + IBV_RX_HASH_DST_PORT_TCP);
> + item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP :
> + MLX5_FLOW_LAYER_OUTER_L4_TCP;
> + break;
> + case RTE_FLOW_ITEM_TYPE_UDP:
> + flow_dv_translate_item_udp(match_mask, match_value,
> + items, tunnel);
> + matcher.priority = MLX5_PRIORITY_MAP_L4;
> + dev_flow->verbs.hash_fields |=
> + mlx5_flow_hashfields_adjust
> + (dev_flow, tunnel, ETH_RSS_UDP,
> + IBV_RX_HASH_SRC_PORT_UDP |
> + IBV_RX_HASH_DST_PORT_UDP);
> + item_flags |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
> + MLX5_FLOW_LAYER_OUTER_L4_UDP;
> + break;
> + case RTE_FLOW_ITEM_TYPE_GRE:
> + flow_dv_translate_item_gre(match_mask, match_value,
> + items, tunnel);
> + item_flags |= MLX5_FLOW_LAYER_GRE;
> + break;
> + case RTE_FLOW_ITEM_TYPE_NVGRE:
> + flow_dv_translate_item_nvgre(match_mask, match_value,
> + items, tunnel);
> + item_flags |= MLX5_FLOW_LAYER_GRE;
> + break;
> + case RTE_FLOW_ITEM_TYPE_VXLAN:
> + flow_dv_translate_item_vxlan(match_mask, match_value,
> + items, tunnel);
> + item_flags |= MLX5_FLOW_LAYER_VXLAN;
> + break;
> + case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
> + flow_dv_translate_item_vxlan(match_mask, match_value,
> + items, tunnel);
> + item_flags |= MLX5_FLOW_LAYER_VXLAN_GPE;
> + break;
> + case RTE_FLOW_ITEM_TYPE_META:
> + flow_dv_translate_item_meta(match_mask, match_value,
> + items);
> + item_flags |= MLX5_FLOW_ITEM_METADATA;
> + break;
> + default:
> + break;
> + }
> }
> + dev_flow->layers = item_flags;
> + /* Register matcher. */
> matcher.crc = rte_raw_cksum((const void *)matcher.mask.buf,
> - matcher.mask.size);
> - if (priority == MLX5_FLOW_PRIO_RSVD)
> - priority = priv->config.flow_prio - 1;
> + matcher.mask.size);
> matcher.priority = mlx5_flow_adjust_priority(dev, priority,
> matcher.priority);
> matcher.egress = attr->egress;
> if (flow_dv_matcher_register(dev, &matcher, dev_flow, error))
> return -rte_errno;
> - for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++)
> - if (flow_dv_create_action(dev, actions, dev_flow, attr, error))
> - return -rte_errno;
> + for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
> + const struct rte_flow_action_queue *queue;
> + const struct rte_flow_action_rss *rss;
> + const struct rte_flow_action *action = actions;
> + const uint8_t *rss_key;
> +
> + switch (actions->type) {
> + case RTE_FLOW_ACTION_TYPE_VOID:
> + break;
> + case RTE_FLOW_ACTION_TYPE_FLAG:
> + dev_flow->dv.actions[actions_n].type =
> + MLX5DV_FLOW_ACTION_TAG;
> + dev_flow->dv.actions[actions_n].tag_value =
> + mlx5_flow_mark_set(MLX5_FLOW_MARK_DEFAULT);
> + actions_n++;
> + action_flags |= MLX5_FLOW_ACTION_FLAG;
> + break;
> + case RTE_FLOW_ACTION_TYPE_MARK:
> + dev_flow->dv.actions[actions_n].type =
> + MLX5DV_FLOW_ACTION_TAG;
> + dev_flow->dv.actions[actions_n].tag_value =
> + mlx5_flow_mark_set
> + (((const struct rte_flow_action_mark *)
> + (actions->conf))->id);
> + actions_n++;
> + action_flags |= MLX5_FLOW_ACTION_MARK;
> + break;
> + case RTE_FLOW_ACTION_TYPE_DROP:
> + dev_flow->dv.actions[actions_n].type =
> + MLX5DV_FLOW_ACTION_DROP;
> + action_flags |= MLX5_FLOW_ACTION_DROP;
> + break;
> + case RTE_FLOW_ACTION_TYPE_QUEUE:
> + queue = actions->conf;
> + flow->rss.queue_num = 1;
> + (*flow->queue)[0] = queue->index;
> + action_flags |= MLX5_FLOW_ACTION_QUEUE;
> + break;
> + case RTE_FLOW_ACTION_TYPE_RSS:
> + rss = actions->conf;
> + if (flow->queue)
> + memcpy((*flow->queue), rss->queue,
> + rss->queue_num * sizeof(uint16_t));
> + flow->rss.queue_num = rss->queue_num;
> + /* NULL RSS key indicates default RSS key. */
> + rss_key = !rss->key ? rss_hash_default_key : rss->key;
> + memcpy(flow->key, rss_key, MLX5_RSS_HASH_KEY_LEN);
> + /* RSS type 0 indicates default RSS type ETH_RSS_IP. */
> + flow->rss.types = !rss->types ? ETH_RSS_IP : rss->types;
> + flow->rss.level = rss->level;
> + action_flags |= MLX5_FLOW_ACTION_RSS;
> + break;
> + case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
> + case RTE_FLOW_ACTION_TYPE_NVGRE_ENCAP:
> + if (flow_dv_create_action_l2_encap(dev, actions,
> + dev_flow, error))
> + return -rte_errno;
> + dev_flow->dv.actions[actions_n].type =
> + MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
> + dev_flow->dv.actions[actions_n].action =
> + dev_flow->dv.encap_decap->verbs_action;
> + actions_n++;
> + action_flags |= actions->type ==
> + RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP ?
> + MLX5_FLOW_ACTION_VXLAN_ENCAP :
> + MLX5_FLOW_ACTION_NVGRE_ENCAP;
> + break;
> + case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
> + case RTE_FLOW_ACTION_TYPE_NVGRE_DECAP:
> + if (flow_dv_create_action_l2_decap(dev, dev_flow,
> + error))
> + return -rte_errno;
> + dev_flow->dv.actions[actions_n].type =
> + MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
> + dev_flow->dv.actions[actions_n].action =
> + dev_flow->dv.encap_decap->verbs_action;
> + actions_n++;
> + action_flags |= actions->type ==
> + RTE_FLOW_ACTION_TYPE_VXLAN_DECAP ?
> + MLX5_FLOW_ACTION_VXLAN_DECAP :
> + MLX5_FLOW_ACTION_NVGRE_DECAP;
> + break;
> + case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
> + /* Handle encap with preceding decap. */
> + if (action_flags & MLX5_FLOW_ACTION_RAW_DECAP) {
> + if (flow_dv_create_action_raw_encap
> + (dev, actions, dev_flow, attr, error))
> + return -rte_errno;
> + dev_flow->dv.actions[actions_n].type =
> + MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
> + dev_flow->dv.actions[actions_n].action =
> + dev_flow->dv.encap_decap->verbs_action;
> + } else {
> + /* Handle encap without preceding decap. */
> + if (flow_dv_create_action_l2_encap(dev, actions,
> + dev_flow,
> + error))
> + return -rte_errno;
> + dev_flow->dv.actions[actions_n].type =
> + MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
> + dev_flow->dv.actions[actions_n].action =
> + dev_flow->dv.encap_decap->verbs_action;
> + }
> + actions_n++;
> + action_flags |= MLX5_FLOW_ACTION_RAW_ENCAP;
> + break;
> + case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
> + /* Check if this decap is followed by encap. */
> + for (; action->type != RTE_FLOW_ACTION_TYPE_END &&
> + action->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP;
> + action++) {
> + }
> + /* Handle decap only if it isn't followed by encap. */
> + if (action->type != RTE_FLOW_ACTION_TYPE_RAW_ENCAP) {
> + if (flow_dv_create_action_l2_decap(dev,
> + dev_flow,
> + error))
> + return -rte_errno;
> + dev_flow->dv.actions[actions_n].type =
> + MLX5DV_FLOW_ACTION_IBV_FLOW_ACTION;
> + dev_flow->dv.actions[actions_n].action =
> + dev_flow->dv.encap_decap->verbs_action;
> + actions_n++;
> + }
> + /* If decap is followed by encap, handle it at encap. */
> + action_flags |= MLX5_FLOW_ACTION_RAW_DECAP;
> + break;
> + default:
> + break;
> + }
> + }
> + dev_flow->dv.actions_n = actions_n;
> + flow->actions = action_flags;
> return 0;
> }
>
> --
> 2.11.0
Acked-by: Ori Kam <orika@mellanox.com>
Thanks,
Ori
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [dpdk-dev] [PATCH v2 3/3] net/mlx5: remove flags setting from flow preparation
2018-11-05 7:20 ` [dpdk-dev] [PATCH v2 3/3] net/mlx5: remove flags setting from flow preparation Yongseok Koh
@ 2018-11-05 7:32 ` Ori Kam
0 siblings, 0 replies; 17+ messages in thread
From: Ori Kam @ 2018-11-05 7:32 UTC (permalink / raw)
To: Yongseok Koh, Shahaf Shuler; +Cc: dev
> -----Original Message-----
> From: Yongseok Koh
> Sent: Monday, November 5, 2018 9:21 AM
> To: Shahaf Shuler <shahafs@mellanox.com>
> Cc: dev@dpdk.org; Ori Kam <orika@mellanox.com>; Yongseok Koh
> <yskoh@mellanox.com>
> Subject: [PATCH v2 3/3] net/mlx5: remove flags setting from flow preparation
>
> Even though flow_drv_prepare() takes item_flags and action_flags to be
> filled in, those are not used and will be overwritten by the parsing in
> flow_drv_translate(). There's no reason to keep the flags and fill them.
> Appropriate notes are added to the documentation of flow_drv_prepare() and
> flow_drv_translate().
>
> Signed-off-by: Yongseok Koh <yskoh@mellanox.com>
> ---
> drivers/net/mlx5/mlx5_flow.c | 38 ++++++++++------------
> drivers/net/mlx5/mlx5_flow.h | 3 +-
> drivers/net/mlx5/mlx5_flow_dv.c | 6 ----
> drivers/net/mlx5/mlx5_flow_tcf.c | 39 +++++++----------------
> drivers/net/mlx5/mlx5_flow_verbs.c | 64 +++++---------------------------------
> 5 files changed, 37 insertions(+), 113 deletions(-)
>
> diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
> index be2cc6b93f..3c2ac4b377 100644
> --- a/drivers/net/mlx5/mlx5_flow.c
> +++ b/drivers/net/mlx5/mlx5_flow.c
> @@ -1663,8 +1663,6 @@ static struct mlx5_flow *
> flow_null_prepare(const struct rte_flow_attr *attr __rte_unused,
> const struct rte_flow_item items[] __rte_unused,
> const struct rte_flow_action actions[] __rte_unused,
> - uint64_t *item_flags __rte_unused,
> - uint64_t *action_flags __rte_unused,
> struct rte_flow_error *error __rte_unused)
> {
> rte_errno = ENOTSUP;
> @@ -1792,16 +1790,19 @@ flow_drv_validate(struct rte_eth_dev *dev,
> * calculates the size of memory required for device flow, allocates the memory,
> * initializes the device flow and returns the pointer.
> *
> + * @note
> + * This function initializes a device flow structure such as dv, tcf or verbs
> + * in struct mlx5_flow. However, it is the caller's responsibility to
> + * initialize the rest. For example, adding the returned device flow to the
> + * flow->dev_flows list and setting the backward reference to the flow should
> + * be done outside of this function. The layers field is not filled either.
> + *
> * @param[in] attr
> * Pointer to the flow attributes.
> * @param[in] items
> * Pointer to the list of items.
> * @param[in] actions
> * Pointer to the list of actions.
> - * @param[out] item_flags
> - * Pointer to bit mask of all items detected.
> - * @param[out] action_flags
> - * Pointer to bit mask of all actions detected.
> * @param[out] error
> * Pointer to the error structure.
> *
> @@ -1809,12 +1810,10 @@ flow_drv_validate(struct rte_eth_dev *dev,
> * Pointer to device flow on success, otherwise NULL and rte_errno is set.
> */
> static inline struct mlx5_flow *
> -flow_drv_prepare(struct rte_flow *flow,
> +flow_drv_prepare(const struct rte_flow *flow,
> const struct rte_flow_attr *attr,
> const struct rte_flow_item items[],
> const struct rte_flow_action actions[],
> - uint64_t *item_flags,
> - uint64_t *action_flags,
> struct rte_flow_error *error)
> {
> const struct mlx5_flow_driver_ops *fops;
> @@ -1822,8 +1821,7 @@ flow_drv_prepare(struct rte_flow *flow,
>
> assert(type > MLX5_FLOW_TYPE_MIN && type < MLX5_FLOW_TYPE_MAX);
> fops = flow_get_drv_ops(type);
> - return fops->prepare(attr, items, actions, item_flags, action_flags,
> - error);
> + return fops->prepare(attr, items, actions, error);
> }
>
> /**
> @@ -1832,6 +1830,12 @@ flow_drv_prepare(struct rte_flow *flow,
> * translates a generic flow into a driver flow. flow_drv_prepare() must
> * precede.
> *
> + * @note
> + * dev_flow->layers could be filled as a result of parsing during translation
> + * if needed by flow_drv_apply(). dev_flow->flow->actions can also be filled
> + * if necessary. As a flow can have multiple dev_flows by RSS flow expansion,
> + * flow->actions could be overwritten even though all the expanded dev_flows
> + * have the same actions.
> *
> * @param[in] dev
> * Pointer to the rte dev structure.
> @@ -1895,7 +1899,7 @@ flow_drv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
> * Flow driver remove API. This abstracts calling driver specific functions.
> * Parent flow (rte_flow) should have driver type (drv_type). It removes a flow
> * on device. All the resources of the flow should be freed by calling
> - * flow_dv_destroy().
> + * flow_drv_destroy().
> *
> * @param[in] dev
> * Pointer to Ethernet device.
> @@ -2026,8 +2030,6 @@ flow_list_create(struct rte_eth_dev *dev, struct mlx5_flows *list,
> {
> struct rte_flow *flow = NULL;
> struct mlx5_flow *dev_flow;
> - uint64_t action_flags = 0;
> - uint64_t item_flags = 0;
> const struct rte_flow_action_rss *rss;
> union {
> struct rte_flow_expand_rss buf;
> @@ -2070,16 +2072,10 @@ flow_list_create(struct rte_eth_dev *dev, struct mlx5_flows *list,
> }
> for (i = 0; i < buf->entries; ++i) {
> dev_flow = flow_drv_prepare(flow, attr, buf->entry[i].pattern,
> - actions, &item_flags, &action_flags,
> - error);
> + actions, error);
> if (!dev_flow)
> goto error;
> dev_flow->flow = flow;
> - dev_flow->layers = item_flags;
> - /* Store actions once as expanded flows have same actions. */
> - if (i == 0)
> - flow->actions = action_flags;
> - assert(flow->actions == action_flags);
> LIST_INSERT_HEAD(&flow->dev_flows, dev_flow, next);
> ret = flow_drv_translate(dev, dev_flow, attr,
> buf->entry[i].pattern,
> diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
> index 2a3ce44b0b..51ab47fe44 100644
> --- a/drivers/net/mlx5/mlx5_flow.h
> +++ b/drivers/net/mlx5/mlx5_flow.h
> @@ -302,8 +302,7 @@ typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev,
> struct rte_flow_error *error);
> typedef struct mlx5_flow *(*mlx5_flow_prepare_t)
> (const struct rte_flow_attr *attr, const struct rte_flow_item items[],
> - const struct rte_flow_action actions[], uint64_t *item_flags,
> - uint64_t *action_flags, struct rte_flow_error *error);
> + const struct rte_flow_action actions[], struct rte_flow_error *error);
> typedef int (*mlx5_flow_translate_t)(struct rte_eth_dev *dev,
> struct mlx5_flow *dev_flow,
> const struct rte_flow_attr *attr,
> diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
> index 8b4d5956ba..7909615360 100644
> --- a/drivers/net/mlx5/mlx5_flow_dv.c
> +++ b/drivers/net/mlx5/mlx5_flow_dv.c
> @@ -1014,10 +1014,6 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
> * Pointer to the list of items.
> * @param[in] actions
> * Pointer to the list of actions.
> - * @param[out] item_flags
> - * Pointer to bit mask of all items detected.
> - * @param[out] action_flags
> - * Pointer to bit mask of all actions detected.
> * @param[out] error
> * Pointer to the error structure.
> *
> @@ -1029,8 +1025,6 @@ static struct mlx5_flow *
> flow_dv_prepare(const struct rte_flow_attr *attr __rte_unused,
> const struct rte_flow_item items[] __rte_unused,
> const struct rte_flow_action actions[] __rte_unused,
> - uint64_t *item_flags __rte_unused,
> - uint64_t *action_flags __rte_unused,
> struct rte_flow_error *error)
> {
> uint32_t size = sizeof(struct mlx5_flow);
> diff --git a/drivers/net/mlx5/mlx5_flow_tcf.c b/drivers/net/mlx5/mlx5_flow_tcf.c
> index ee614b3f1d..fb817b2311 100644
> --- a/drivers/net/mlx5/mlx5_flow_tcf.c
> +++ b/drivers/net/mlx5/mlx5_flow_tcf.c
> @@ -2370,24 +2370,21 @@ flow_tcf_validate(struct rte_eth_dev *dev,
> }
>
> /**
> - * Calculate maximum size of memory for flow items of Linux TC flower and
> - * extract specified items.
> + * Calculate maximum size of memory for flow items of Linux TC flower.
> *
> + * @param[in] attr
> + * Pointer to the flow attributes.
> * @param[in] items
> * Pointer to the list of items.
> - * @param[out] item_flags
> - * Pointer to the detected items.
> *
> * @return
> * Maximum size of memory for items.
> */
> static int
> -flow_tcf_get_items_and_size(const struct rte_flow_attr *attr,
> - const struct rte_flow_item items[],
> - uint64_t *item_flags)
> +flow_tcf_get_items_size(const struct rte_flow_attr *attr,
> + const struct rte_flow_item items[])
> {
> int size = 0;
> - uint64_t flags = 0;
>
> size += SZ_NLATTR_STRZ_OF("flower") +
> SZ_NLATTR_NEST + /* TCA_OPTIONS. */
> @@ -2404,7 +2401,6 @@ flow_tcf_get_items_and_size(const struct rte_flow_attr *attr,
> size += SZ_NLATTR_TYPE_OF(uint16_t) + /* Ether type. */
> SZ_NLATTR_DATA_OF(ETHER_ADDR_LEN) * 4;
> /* dst/src MAC addr and mask. */
> - flags |= MLX5_FLOW_LAYER_OUTER_L2;
> break;
> case RTE_FLOW_ITEM_TYPE_VLAN:
> size += SZ_NLATTR_TYPE_OF(uint16_t) + /* Ether type. */
> @@ -2412,37 +2408,31 @@ flow_tcf_get_items_and_size(const struct rte_flow_attr *attr,
> /* VLAN Ether type. */
> SZ_NLATTR_TYPE_OF(uint8_t) + /* VLAN prio. */
> SZ_NLATTR_TYPE_OF(uint16_t); /* VLAN ID. */
> - flags |= MLX5_FLOW_LAYER_OUTER_VLAN;
> break;
> case RTE_FLOW_ITEM_TYPE_IPV4:
> size += SZ_NLATTR_TYPE_OF(uint16_t) + /* Ether type. */
> SZ_NLATTR_TYPE_OF(uint8_t) + /* IP proto. */
> SZ_NLATTR_TYPE_OF(uint32_t) * 4;
> /* dst/src IP addr and mask. */
> - flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV4;
> break;
> case RTE_FLOW_ITEM_TYPE_IPV6:
> size += SZ_NLATTR_TYPE_OF(uint16_t) + /* Ether type. */
> SZ_NLATTR_TYPE_OF(uint8_t) + /* IP proto. */
> SZ_NLATTR_DATA_OF(IPV6_ADDR_LEN) * 4;
> /* dst/src IP addr and mask. */
> - flags |= MLX5_FLOW_LAYER_OUTER_L3_IPV6;
> break;
> case RTE_FLOW_ITEM_TYPE_UDP:
> size += SZ_NLATTR_TYPE_OF(uint8_t) + /* IP proto. */
> SZ_NLATTR_TYPE_OF(uint16_t) * 4;
> /* dst/src port and mask. */
> - flags |= MLX5_FLOW_LAYER_OUTER_L4_UDP;
> break;
> case RTE_FLOW_ITEM_TYPE_TCP:
> size += SZ_NLATTR_TYPE_OF(uint8_t) + /* IP proto. */
> SZ_NLATTR_TYPE_OF(uint16_t) * 4;
> /* dst/src port and mask. */
> - flags |= MLX5_FLOW_LAYER_OUTER_L4_TCP;
> break;
> case RTE_FLOW_ITEM_TYPE_VXLAN:
> size += SZ_NLATTR_TYPE_OF(uint32_t);
> - flags |= MLX5_FLOW_LAYER_VXLAN;
> break;
> default:
> DRV_LOG(WARNING,
> @@ -2452,7 +2442,6 @@ flow_tcf_get_items_and_size(const struct rte_flow_attr *attr,
> break;
> }
> }
> - *item_flags = flags;
> return size;
> }
>
> @@ -2668,10 +2657,6 @@ flow_tcf_nl_brand(struct nlmsghdr *nlh, uint32_t handle)
> * Pointer to the list of items.
> * @param[in] actions
> * Pointer to the list of actions.
> - * @param[out] item_flags
> - * Pointer to bit mask of all items detected.
> - * @param[out] action_flags
> - * Pointer to bit mask of all actions detected.
> * @param[out] error
> * Pointer to the error structure.
> *
> @@ -2683,7 +2668,6 @@ static struct mlx5_flow *
> flow_tcf_prepare(const struct rte_flow_attr *attr,
> const struct rte_flow_item items[],
> const struct rte_flow_action actions[],
> - uint64_t *item_flags, uint64_t *action_flags,
> struct rte_flow_error *error)
> {
> size_t size = RTE_ALIGN_CEIL
> @@ -2692,12 +2676,13 @@ flow_tcf_prepare(const struct rte_flow_attr *attr,
> MNL_ALIGN(sizeof(struct nlmsghdr)) +
> MNL_ALIGN(sizeof(struct tcmsg));
> struct mlx5_flow *dev_flow;
> + uint64_t action_flags = 0;
> struct nlmsghdr *nlh;
> struct tcmsg *tcm;
> uint8_t *sp, *tun = NULL;
>
> - size += flow_tcf_get_items_and_size(attr, items, item_flags);
> - size += flow_tcf_get_actions_and_size(actions, action_flags);
> + size += flow_tcf_get_items_size(attr, items);
> + size += flow_tcf_get_actions_and_size(actions, &action_flags);
> dev_flow = rte_zmalloc(__func__, size, MNL_ALIGNTO);
> if (!dev_flow) {
> rte_flow_error_set(error, ENOMEM,
> @@ -2706,7 +2691,7 @@ flow_tcf_prepare(const struct rte_flow_attr *attr,
> return NULL;
> }
> sp = (uint8_t *)(dev_flow + 1);
> - if (*action_flags & MLX5_FLOW_ACTION_VXLAN_ENCAP) {
> + if (action_flags & MLX5_FLOW_ACTION_VXLAN_ENCAP) {
> sp = RTE_PTR_ALIGN
> (sp, alignof(struct flow_tcf_tunnel_hdr));
> tun = sp;
> @@ -2718,7 +2703,7 @@ flow_tcf_prepare(const struct rte_flow_attr *attr,
> (sizeof(struct flow_tcf_vxlan_encap),
> MNL_ALIGNTO);
> #endif
> - } else if (*action_flags & MLX5_FLOW_ACTION_VXLAN_DECAP) {
> + } else if (action_flags & MLX5_FLOW_ACTION_VXLAN_DECAP) {
> sp = RTE_PTR_ALIGN
> (sp, alignof(struct flow_tcf_tunnel_hdr));
> tun = sp;
> @@ -2747,9 +2732,9 @@ flow_tcf_prepare(const struct rte_flow_attr *attr,
> .tcm = tcm,
> },
> };
> - if (*action_flags & MLX5_FLOW_ACTION_VXLAN_DECAP)
> + if (action_flags & MLX5_FLOW_ACTION_VXLAN_DECAP)
> dev_flow->tcf.tunnel->type =
> FLOW_TCF_TUNACT_VXLAN_DECAP;
> - else if (*action_flags & MLX5_FLOW_ACTION_VXLAN_ENCAP)
> + else if (action_flags & MLX5_FLOW_ACTION_VXLAN_ENCAP)
> dev_flow->tcf.tunnel->type =
> FLOW_TCF_TUNACT_VXLAN_ENCAP;
> /*
> * Generate a reasonably unique handle based on the address of the
> diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
> index 43fcd0d29e..699cc88c8c 100644
> --- a/drivers/net/mlx5/mlx5_flow_verbs.c
> +++ b/drivers/net/mlx5/mlx5_flow_verbs.c
> @@ -1209,23 +1209,18 @@ flow_verbs_validate(struct rte_eth_dev *dev,
>
> /**
> * Calculate the required bytes that are needed for the action part of the verbs
> - * flow, in addtion returns bit-fields with all the detected action, in order to
> - * avoid another interation over the actions.
> + * flow.
> *
> * @param[in] actions
> * Pointer to the list of actions.
> - * @param[out] action_flags
> - * Pointer to the detected actions.
> *
> * @return
> * The size of the memory needed for all actions.
> */
> static int
> -flow_verbs_get_actions_and_size(const struct rte_flow_action actions[],
> - uint64_t *action_flags)
> +flow_verbs_get_actions_size(const struct rte_flow_action actions[])
> {
> int size = 0;
> - uint64_t detected_actions = 0;
>
> for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
> switch (actions->type) {
> @@ -1233,128 +1228,89 @@ flow_verbs_get_actions_and_size(const struct rte_flow_action actions[],
> break;
> case RTE_FLOW_ACTION_TYPE_FLAG:
> size += sizeof(struct ibv_flow_spec_action_tag);
> - detected_actions |= MLX5_FLOW_ACTION_FLAG;
> break;
> case RTE_FLOW_ACTION_TYPE_MARK:
> size += sizeof(struct ibv_flow_spec_action_tag);
> - detected_actions |= MLX5_FLOW_ACTION_MARK;
> break;
> case RTE_FLOW_ACTION_TYPE_DROP:
> size += sizeof(struct ibv_flow_spec_action_drop);
> - detected_actions |= MLX5_FLOW_ACTION_DROP;
> break;
> case RTE_FLOW_ACTION_TYPE_QUEUE:
> - detected_actions |= MLX5_FLOW_ACTION_QUEUE;
> break;
> case RTE_FLOW_ACTION_TYPE_RSS:
> - detected_actions |= MLX5_FLOW_ACTION_RSS;
> break;
> case RTE_FLOW_ACTION_TYPE_COUNT:
> #if defined(HAVE_IBV_DEVICE_COUNTERS_SET_V42) || \
> defined(HAVE_IBV_DEVICE_COUNTERS_SET_V45)
> size += sizeof(struct ibv_flow_spec_counter_action);
> #endif
> - detected_actions |= MLX5_FLOW_ACTION_COUNT;
> break;
> default:
> break;
> }
> }
> - *action_flags = detected_actions;
> return size;
> }
>
> /**
> * Calculate the required bytes that are needed for the item part of the verbs
> - * flow, in addtion returns bit-fields with all the detected action, in order to
> - * avoid another interation over the actions.
> + * flow.
> *
> - * @param[in] actions
> + * @param[in] items
> * Pointer to the list of items.
> - * @param[in, out] item_flags
> - * Pointer to the detected items.
> *
> * @return
> * The size of the memory needed for all items.
> */
> static int
> -flow_verbs_get_items_and_size(const struct rte_flow_item items[],
> - uint64_t *item_flags)
> +flow_verbs_get_items_size(const struct rte_flow_item items[])
> {
> int size = 0;
> - uint64_t detected_items = 0;
>
> for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
> - int tunnel = !!(detected_items & MLX5_FLOW_LAYER_TUNNEL);
> -
> switch (items->type) {
> case RTE_FLOW_ITEM_TYPE_VOID:
> break;
> case RTE_FLOW_ITEM_TYPE_ETH:
> size += sizeof(struct ibv_flow_spec_eth);
> - detected_items |= tunnel ? MLX5_FLOW_LAYER_INNER_L2 :
> - MLX5_FLOW_LAYER_OUTER_L2;
> break;
> case RTE_FLOW_ITEM_TYPE_VLAN:
> size += sizeof(struct ibv_flow_spec_eth);
> - detected_items |=
> - tunnel ? (MLX5_FLOW_LAYER_INNER_L2 |
> - MLX5_FLOW_LAYER_INNER_VLAN) :
> - (MLX5_FLOW_LAYER_OUTER_L2 |
> - MLX5_FLOW_LAYER_OUTER_VLAN);
> break;
> case RTE_FLOW_ITEM_TYPE_IPV4:
> size += sizeof(struct ibv_flow_spec_ipv4_ext);
> - detected_items |= tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV4 :
> - MLX5_FLOW_LAYER_OUTER_L3_IPV4;
> break;
> case RTE_FLOW_ITEM_TYPE_IPV6:
> size += sizeof(struct ibv_flow_spec_ipv6);
> - detected_items |= tunnel ? MLX5_FLOW_LAYER_INNER_L3_IPV6 :
> - MLX5_FLOW_LAYER_OUTER_L3_IPV6;
> break;
> case RTE_FLOW_ITEM_TYPE_UDP:
> size += sizeof(struct ibv_flow_spec_tcp_udp);
> - detected_items |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_UDP :
> - MLX5_FLOW_LAYER_OUTER_L4_UDP;
> break;
> case RTE_FLOW_ITEM_TYPE_TCP:
> size += sizeof(struct ibv_flow_spec_tcp_udp);
> - detected_items |= tunnel ? MLX5_FLOW_LAYER_INNER_L4_TCP :
> - MLX5_FLOW_LAYER_OUTER_L4_TCP;
> break;
> case RTE_FLOW_ITEM_TYPE_VXLAN:
> size += sizeof(struct ibv_flow_spec_tunnel);
> - detected_items |= MLX5_FLOW_LAYER_VXLAN;
> break;
> case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
> size += sizeof(struct ibv_flow_spec_tunnel);
> - detected_items |= MLX5_FLOW_LAYER_VXLAN_GPE;
> break;
> #ifdef HAVE_IBV_DEVICE_MPLS_SUPPORT
> case RTE_FLOW_ITEM_TYPE_GRE:
> size += sizeof(struct ibv_flow_spec_gre);
> - detected_items |= MLX5_FLOW_LAYER_GRE;
> break;
> case RTE_FLOW_ITEM_TYPE_MPLS:
> size += sizeof(struct ibv_flow_spec_mpls);
> - detected_items |= MLX5_FLOW_LAYER_MPLS;
> break;
> #else
> case RTE_FLOW_ITEM_TYPE_GRE:
> size += sizeof(struct ibv_flow_spec_tunnel);
> - detected_items |= MLX5_FLOW_LAYER_TUNNEL;
> break;
> #endif
> default:
> break;
> }
> }
> - *item_flags = detected_items;
> return size;
> }
>
> @@ -1369,10 +1325,6 @@ flow_verbs_get_items_and_size(const struct rte_flow_item items[],
> * Pointer to the list of items.
> * @param[in] actions
> * Pointer to the list of actions.
> - * @param[out] item_flags
> - * Pointer to bit mask of all items detected.
> - * @param[out] action_flags
> - * Pointer to bit mask of all actions detected.
> * @param[out] error
> * Pointer to the error structure.
> *
> @@ -1384,15 +1336,13 @@ static struct mlx5_flow *
> flow_verbs_prepare(const struct rte_flow_attr *attr __rte_unused,
> const struct rte_flow_item items[],
> const struct rte_flow_action actions[],
> - uint64_t *item_flags,
> - uint64_t *action_flags,
> struct rte_flow_error *error)
> {
> uint32_t size = sizeof(struct mlx5_flow) + sizeof(struct ibv_flow_attr);
> struct mlx5_flow *flow;
>
> - size += flow_verbs_get_actions_and_size(actions, action_flags);
> - size += flow_verbs_get_items_and_size(items, item_flags);
> + size += flow_verbs_get_actions_size(actions);
> + size += flow_verbs_get_items_size(items);
> flow = rte_calloc(__func__, 1, size, 0);
> if (!flow) {
> rte_flow_error_set(error, ENOMEM,
> --
> 2.11.0
Acked-by: Ori Kam <orika@mellanox.com>
Thanks,
Ori
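[For readers following the first patch: the VERBS_SPEC_INNER() macro it introduces replaces the repeated tunnel checks during item translation. Once a tunnel item has been parsed into item_flags, every subsequent Verbs spec is tagged IBV_FLOW_SPEC_INNER. A minimal sketch of the pattern, using hypothetical stand-in flag values rather than the real mlx5/verbs definitions:]

```c
#include <stdint.h>

/* Stand-in constants for illustration; the real values come from
 * mlx5_flow.h and infiniband/verbs.h. */
#define MLX5_FLOW_LAYER_TUNNEL (1ull << 0)
#define IBV_FLOW_SPEC_ETH      0x20u
#define IBV_FLOW_SPEC_INNER    0x100u

/* Same shape as the macro added by the patch: any tunnel bit set in
 * item_flags turns every following spec into an inner spec. */
#define VERBS_SPEC_INNER(item_flags) \
	(!!((item_flags) & MLX5_FLOW_LAYER_TUNNEL) ? IBV_FLOW_SPEC_INNER : 0)

/* Example: choose the spec type for an ETH item from the layers parsed
 * so far, instead of consulting dev_flow->layers (which already holds
 * all layers and would always report "inner" for tunneled flows). */
static inline uint32_t eth_spec_type(uint64_t item_flags)
{
	return IBV_FLOW_SPEC_ETH | VERBS_SPEC_INNER(item_flags);
}
```

[With item_flags accumulated during translation, an ETH item seen before a VXLAN item yields the outer spec type, while the same item after the tunnel yields the inner variant.]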
^ permalink raw reply [flat|nested] 17+ messages in thread
* Re: [dpdk-dev] [PATCH v2 0/3] net/mlx5: fix tunnel flow
2018-11-05 7:20 ` [dpdk-dev] [PATCH v2 0/3] net/mlx5: fix tunnel flow Yongseok Koh
` (2 preceding siblings ...)
2018-11-05 7:20 ` [dpdk-dev] [PATCH v2 3/3] net/mlx5: remove flags setting from flow preparation Yongseok Koh
@ 2018-11-05 8:09 ` Shahaf Shuler
3 siblings, 0 replies; 17+ messages in thread
From: Shahaf Shuler @ 2018-11-05 8:09 UTC (permalink / raw)
To: Yongseok Koh; +Cc: dev, Ori Kam
Monday, November 5, 2018 9:21 AM, Yongseok Koh:
> Subject: [PATCH v2 0/3] net/mlx5: fix tunnel flow
>
> v2:
> * rebase on top of the latest branch tip
> * change function names appropriately
>
> Yongseok Koh (3):
> net/mlx5: fix Verbs flow tunnel
> net/mlx5: fix Direct Verbs flow tunnel
> net/mlx5: remove flags setting from flow preparation
>
> drivers/net/mlx5/mlx5_flow.c | 38 +--
> drivers/net/mlx5/mlx5_flow.h | 3 +-
> drivers/net/mlx5/mlx5_flow_dv.c | 500 ++++++++++++++---------------
> drivers/net/mlx5/mlx5_flow_tcf.c | 39 +--
> drivers/net/mlx5/mlx5_flow_verbs.c | 622 ++++++++++++++++---------------------
> 5 files changed, 527 insertions(+), 675 deletions(-)
Applied to next-net-mlx, thanks.
>
> --
> 2.11.0
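[For context on the third patch: the prepare() routines drop the item_flags/action_flags out-parameters, since translation re-detects the flags anyway; where sizing still needs them (as in flow_tcf_prepare), the flags become a local accumulator. A simplified stand-alone sketch, with stand-in types and sizes rather than the real Netlink/Verbs spec sizing:]

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical simplified action list; the driver actually iterates
 * struct rte_flow_action terminated by RTE_FLOW_ACTION_TYPE_END. */
enum action_type { ACT_END, ACT_QUEUE, ACT_VXLAN_ENCAP };

#define FLOW_ACTION_VXLAN_ENCAP (1ull << 0)
#define TUNNEL_HDR_SIZE 64u /* stand-in for the tunnel header sizing */

/* Sizing helper: still reports flags, but only to its direct caller,
 * not all the way up to the generic flow layer. */
static size_t get_actions_and_size(const enum action_type *actions,
				   uint64_t *action_flags)
{
	size_t size = 0;

	for (; *actions != ACT_END; actions++)
		if (*actions == ACT_VXLAN_ENCAP) {
			size += TUNNEL_HDR_SIZE;
			*action_flags |= FLOW_ACTION_VXLAN_ENCAP;
		}
	return size;
}

/* New-style prepare: action_flags is local rather than an out-parameter
 * of prepare itself, mirroring flow_tcf_prepare() after the patch. */
static size_t prepare_size(const enum action_type *actions)
{
	uint64_t action_flags = 0;
	size_t size = 16; /* stand-in for the base device-flow allocation */

	size += get_actions_and_size(actions, &action_flags);
	if (action_flags & FLOW_ACTION_VXLAN_ENCAP)
		size += 8; /* stand-in alignment slack for the tunnel header */
	return size;
}
```

[The design point is that the detected flags were redundant state: the translate step walks the same item/action arrays again, so exporting flags from prepare only created two sources of truth.]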
^ permalink raw reply [flat|nested] 17+ messages in thread
end of thread, other threads:[~2018-11-05 8:09 UTC | newest]
Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-11-02 21:08 [dpdk-dev] [PATCH 1/3] net/mlx5: fix Verbs flow tunnel Yongseok Koh
2018-11-02 21:08 ` [dpdk-dev] [PATCH 2/3] net/mlx5: fix Direct " Yongseok Koh
2018-11-04 8:22 ` Ori Kam
2018-11-05 5:37 ` Yongseok Koh
2018-11-05 6:08 ` Ori Kam
2018-11-05 6:43 ` Yongseok Koh
2018-11-02 21:08 ` [dpdk-dev] [PATCH 3/3] net/mlx5: remove flags setting from flow preparation Yongseok Koh
2018-11-04 8:29 ` Ori Kam
2018-11-05 5:39 ` Yongseok Koh
2018-11-04 8:17 ` [dpdk-dev] [PATCH 1/3] net/mlx5: fix Verbs flow tunnel Ori Kam
2018-11-05 7:20 ` [dpdk-dev] [PATCH v2 0/3] net/mlx5: fix tunnel flow Yongseok Koh
2018-11-05 7:20 ` [dpdk-dev] [PATCH v2 1/3] net/mlx5: fix Verbs flow tunnel Yongseok Koh
2018-11-05 7:20 ` [dpdk-dev] [PATCH v2 2/3] net/mlx5: fix Direct " Yongseok Koh
2018-11-05 7:31 ` Ori Kam
2018-11-05 7:20 ` [dpdk-dev] [PATCH v2 3/3] net/mlx5: remove flags setting from flow preparation Yongseok Koh
2018-11-05 7:32 ` Ori Kam
2018-11-05 8:09 ` [dpdk-dev] [PATCH v2 0/3] net/mlx5: fix tunnel flow Shahaf Shuler