* [dpdk-dev] [PATCH v2 01/13] net/enic: remove unused code
2019-03-02 10:42 [dpdk-dev] [PATCH v2 00/13] net/enic: 19.05 updates Hyong Youb Kim
@ 2019-03-02 10:42 ` Hyong Youb Kim
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 02/13] net/enic: fix flow director SCTP matching Hyong Youb Kim
` (12 subsequent siblings)
13 siblings, 0 replies; 18+ messages in thread
From: Hyong Youb Kim @ 2019-03-02 10:42 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev, John Daley, Hyong Youb Kim
Remove unused functions. Specifically, vnic_set_rss_key() is
obsolete. enic_{add,del}_vlan() has never been supported in the
firmware. Also remove vnic_rss.c altogether, as it becomes empty. These
issues were discovered by cppcheck.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
---
drivers/net/enic/Makefile | 1 -
drivers/net/enic/base/vnic_rss.c | 23 -----------------------
drivers/net/enic/base/vnic_rss.h | 5 -----
drivers/net/enic/enic_res.c | 26 --------------------------
drivers/net/enic/enic_res.h | 2 --
drivers/net/enic/meson.build | 1 -
6 files changed, 58 deletions(-)
delete mode 100644 drivers/net/enic/base/vnic_rss.c
diff --git a/drivers/net/enic/Makefile b/drivers/net/enic/Makefile
index e39e47631..04bae35e3 100644
--- a/drivers/net/enic/Makefile
+++ b/drivers/net/enic/Makefile
@@ -37,7 +37,6 @@ SRCS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += base/vnic_wq.c
SRCS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += base/vnic_dev.c
SRCS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += base/vnic_intr.c
SRCS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += base/vnic_rq.c
-SRCS-$(CONFIG_RTE_LIBRTE_ENIC_PMD) += base/vnic_rss.c
# The current implementation assumes 64-bit pointers
CC_AVX2_SUPPORT=0
diff --git a/drivers/net/enic/base/vnic_rss.c b/drivers/net/enic/base/vnic_rss.c
deleted file mode 100644
index f41b8660f..000000000
--- a/drivers/net/enic/base/vnic_rss.c
+++ /dev/null
@@ -1,23 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2008-2017 Cisco Systems, Inc. All rights reserved.
- * Copyright 2007 Nuova Systems, Inc. All rights reserved.
- */
-
-#include "enic_compat.h"
-#include "vnic_rss.h"
-
-void vnic_set_rss_key(union vnic_rss_key *rss_key, u8 *key)
-{
- u32 i;
- u32 *p;
- u16 *q;
-
- for (i = 0; i < 4; ++i) {
- p = (u32 *)(key + (10 * i));
- iowrite32(*p++, &rss_key->key[i].b[0]);
- iowrite32(*p++, &rss_key->key[i].b[4]);
- q = (u16 *)p;
- iowrite32(*q, &rss_key->key[i].b[8]);
- }
-}
-
diff --git a/drivers/net/enic/base/vnic_rss.h b/drivers/net/enic/base/vnic_rss.h
index abd7b9f13..039041ece 100644
--- a/drivers/net/enic/base/vnic_rss.h
+++ b/drivers/net/enic/base/vnic_rss.h
@@ -24,9 +24,4 @@ union vnic_rss_cpu {
u64 raw[32];
};
-void vnic_set_rss_key(union vnic_rss_key *rss_key, u8 *key);
-void vnic_set_rss_cpu(union vnic_rss_cpu *rss_cpu, u8 *cpu);
-void vnic_get_rss_key(union vnic_rss_key *rss_key, u8 *key);
-void vnic_get_rss_cpu(union vnic_rss_cpu *rss_cpu, u8 *cpu);
-
#endif /* _VNIC_RSS_H_ */
diff --git a/drivers/net/enic/enic_res.c b/drivers/net/enic/enic_res.c
index 24b2844f3..d289f3da8 100644
--- a/drivers/net/enic/enic_res.c
+++ b/drivers/net/enic/enic_res.c
@@ -212,32 +212,6 @@ int enic_get_vnic_config(struct enic *enic)
return 0;
}
-int enic_add_vlan(struct enic *enic, u16 vlanid)
-{
- u64 a0 = vlanid, a1 = 0;
- int wait = 1000;
- int err;
-
- err = vnic_dev_cmd(enic->vdev, CMD_VLAN_ADD, &a0, &a1, wait);
- if (err)
- dev_err(enic_get_dev(enic), "Can't add vlan id, %d\n", err);
-
- return err;
-}
-
-int enic_del_vlan(struct enic *enic, u16 vlanid)
-{
- u64 a0 = vlanid, a1 = 0;
- int wait = 1000;
- int err;
-
- err = vnic_dev_cmd(enic->vdev, CMD_VLAN_DEL, &a0, &a1, wait);
- if (err)
- dev_err(enic_get_dev(enic), "Can't delete vlan id, %d\n", err);
-
- return err;
-}
-
int enic_set_nic_cfg(struct enic *enic, u8 rss_default_cpu, u8 rss_hash_type,
u8 rss_hash_bits, u8 rss_base_cpu, u8 rss_enable, u8 tso_ipid_split_en,
u8 ig_vlan_strip_en)
diff --git a/drivers/net/enic/enic_res.h b/drivers/net/enic/enic_res.h
index 3786bc0e2..faaaad9bd 100644
--- a/drivers/net/enic/enic_res.h
+++ b/drivers/net/enic/enic_res.h
@@ -59,8 +59,6 @@
struct enic;
int enic_get_vnic_config(struct enic *);
-int enic_add_vlan(struct enic *enic, u16 vlanid);
-int enic_del_vlan(struct enic *enic, u16 vlanid);
int enic_set_nic_cfg(struct enic *enic, u8 rss_default_cpu, u8 rss_hash_type,
u8 rss_hash_bits, u8 rss_base_cpu, u8 rss_enable, u8 tso_ipid_split_en,
u8 ig_vlan_strip_en);
diff --git a/drivers/net/enic/meson.build b/drivers/net/enic/meson.build
index 064487118..c381f1496 100644
--- a/drivers/net/enic/meson.build
+++ b/drivers/net/enic/meson.build
@@ -6,7 +6,6 @@ sources = files(
'base/vnic_dev.c',
'base/vnic_intr.c',
'base/vnic_rq.c',
- 'base/vnic_rss.c',
'base/vnic_wq.c',
'enic_clsf.c',
'enic_ethdev.c',
--
2.16.2
* [dpdk-dev] [PATCH v2 02/13] net/enic: fix flow director SCTP matching
2019-03-02 10:42 [dpdk-dev] [PATCH v2 00/13] net/enic: 19.05 updates Hyong Youb Kim
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 01/13] net/enic: remove unused code Hyong Youb Kim
@ 2019-03-02 10:42 ` Hyong Youb Kim
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 03/13] net/enic: fix SCTP match for flow API Hyong Youb Kim
` (11 subsequent siblings)
13 siblings, 0 replies; 18+ messages in thread
From: Hyong Youb Kim @ 2019-03-02 10:42 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev, John Daley, Hyong Youb Kim, stable
The firmware filter API does not have flags indicating "match SCTP
packet". Instead, the driver needs to explicitly add an IP match and
set the protocol number (132 for SCTP) in the IP header.
The existing code (copy_fltr_v2) has two bugs.
1. It sets the protocol number (132) in the match value, but not the
mask. The mask remains 0, so the match becomes a wildcard match. The
NIC ends up matching all protocol numbers (i.e. thinks non-SCTP
packets are SCTP).
2. It modifies the input argument (rte_eth_fdir_input). The driver
tracks filters using rte_hash_{add,del}_key(input). So, adding
(RTE_ETH_FILTER_ADD) and deleting (RTE_ETH_FILTER_DELETE) must use the
same input argument for the same filter. But, overwriting the protocol
number while adding the filter breaks this assumption and causes the
delete operation to fail.
So, set the mask as well as protocol value. Do not modify the input
argument, and use const in function signatures to make the intention
clear. Also move a couple of function declarations to enic_clsf.c from
enic.h as they are strictly local.
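For illustration, the fix amounts to roughly the following (a sketch
using the names from the diff below; only the driver's local mask and
value copies are written, never the caller's input):

    if (input->flow_type == RTE_ETH_FLOW_NONFRAG_IPV4_SCTP) {
            /* Explicitly match the SCTP protocol number: mask and value */
            ip4_mask.next_proto_id = 0xff;
            ip4_val.next_proto_id = IPPROTO_SCTP; /* 132 */
    }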
Fixes: dfbd6a9cb504 ("net/enic: extend flow director support for 1300 series")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
---
drivers/net/enic/enic.h | 8 ++------
drivers/net/enic/enic_clsf.c | 38 ++++++++++++++++++++++++++------------
2 files changed, 28 insertions(+), 18 deletions(-)
diff --git a/drivers/net/enic/enic.h b/drivers/net/enic/enic.h
index 6c497e9a2..fa4d5590e 100644
--- a/drivers/net/enic/enic.h
+++ b/drivers/net/enic/enic.h
@@ -76,8 +76,8 @@ struct enic_fdir {
u32 modes;
u32 types_mask;
void (*copy_fltr_fn)(struct filter_v2 *filt,
- struct rte_eth_fdir_input *input,
- struct rte_eth_fdir_masks *masks);
+ const struct rte_eth_fdir_input *input,
+ const struct rte_eth_fdir_masks *masks);
};
struct enic_soft_stats {
@@ -342,9 +342,5 @@ int enic_link_update(struct enic *enic);
bool enic_use_vector_rx_handler(struct enic *enic);
void enic_fdir_info(struct enic *enic);
void enic_fdir_info_get(struct enic *enic, struct rte_eth_fdir_info *stats);
-void copy_fltr_v1(struct filter_v2 *fltr, struct rte_eth_fdir_input *input,
- struct rte_eth_fdir_masks *masks);
-void copy_fltr_v2(struct filter_v2 *fltr, struct rte_eth_fdir_input *input,
- struct rte_eth_fdir_masks *masks);
extern const struct rte_flow_ops enic_flow_ops;
#endif /* _ENIC_H_ */
diff --git a/drivers/net/enic/enic_clsf.c b/drivers/net/enic/enic_clsf.c
index 9e9e548c2..48c8e6264 100644
--- a/drivers/net/enic/enic_clsf.c
+++ b/drivers/net/enic/enic_clsf.c
@@ -36,6 +36,13 @@
#define ENICPMD_CLSF_HASH_ENTRIES ENICPMD_FDIR_MAX
+static void copy_fltr_v1(struct filter_v2 *fltr,
+ const struct rte_eth_fdir_input *input,
+ const struct rte_eth_fdir_masks *masks);
+static void copy_fltr_v2(struct filter_v2 *fltr,
+ const struct rte_eth_fdir_input *input,
+ const struct rte_eth_fdir_masks *masks);
+
void enic_fdir_stats_get(struct enic *enic, struct rte_eth_fdir_stats *stats)
{
*stats = enic->fdir.stats;
@@ -79,9 +86,9 @@ enic_set_layer(struct filter_generic_1 *gp, unsigned int flag,
/* Copy Flow Director filter to a VIC ipv4 filter (for Cisco VICs
* without advanced filter support.
*/
-void
-copy_fltr_v1(struct filter_v2 *fltr, struct rte_eth_fdir_input *input,
- __rte_unused struct rte_eth_fdir_masks *masks)
+static void
+copy_fltr_v1(struct filter_v2 *fltr, const struct rte_eth_fdir_input *input,
+ __rte_unused const struct rte_eth_fdir_masks *masks)
{
fltr->type = FILTER_IPV4_5TUPLE;
fltr->u.ipv4.src_addr = rte_be_to_cpu_32(
@@ -104,9 +111,9 @@ copy_fltr_v1(struct filter_v2 *fltr, struct rte_eth_fdir_input *input,
/* Copy Flow Director filter to a VIC generic filter (requires advanced
* filter support.
*/
-void
-copy_fltr_v2(struct filter_v2 *fltr, struct rte_eth_fdir_input *input,
- struct rte_eth_fdir_masks *masks)
+static void
+copy_fltr_v2(struct filter_v2 *fltr, const struct rte_eth_fdir_input *input,
+ const struct rte_eth_fdir_masks *masks)
{
struct filter_generic_1 *gp = &fltr->u.generic_1;
@@ -163,9 +170,11 @@ copy_fltr_v2(struct filter_v2 *fltr, struct rte_eth_fdir_input *input,
sctp_val.tag = input->flow.sctp4_flow.verify_tag;
}
- /* v4 proto should be 132, override ip4_flow.proto */
- input->flow.ip4_flow.proto = 132;
-
+ /*
+ * Unlike UDP/TCP (FILTER_GENERIC_1_{UDP,TCP}), the firmware
+ * has no "packet is SCTP" flag. Use flag=0 (generic L4) and
+ * manually set proto_id=sctp below.
+ */
enic_set_layer(gp, 0, FILTER_GENERIC_1_L4, &sctp_mask,
&sctp_val, sizeof(struct sctp_hdr));
}
@@ -189,6 +198,10 @@ copy_fltr_v2(struct filter_v2 *fltr, struct rte_eth_fdir_input *input,
if (input->flow.ip4_flow.proto) {
ip4_mask.next_proto_id = masks->ipv4_mask.proto;
ip4_val.next_proto_id = input->flow.ip4_flow.proto;
+ } else if (input->flow_type == RTE_ETH_FLOW_NONFRAG_IPV4_SCTP) {
+ /* Explicitly match the SCTP protocol number */
+ ip4_mask.next_proto_id = 0xff;
+ ip4_val.next_proto_id = IPPROTO_SCTP;
}
if (input->flow.ip4_flow.src_ip) {
ip4_mask.src_addr = masks->ipv4_mask.src_ip;
@@ -251,9 +264,6 @@ copy_fltr_v2(struct filter_v2 *fltr, struct rte_eth_fdir_input *input,
sctp_val.tag = input->flow.sctp6_flow.verify_tag;
}
- /* v4 proto should be 132, override ipv6_flow.proto */
- input->flow.ipv6_flow.proto = 132;
-
enic_set_layer(gp, 0, FILTER_GENERIC_1_L4, &sctp_mask,
&sctp_val, sizeof(struct sctp_hdr));
}
@@ -269,6 +279,10 @@ copy_fltr_v2(struct filter_v2 *fltr, struct rte_eth_fdir_input *input,
if (input->flow.ipv6_flow.proto) {
ipv6_mask.proto = masks->ipv6_mask.proto;
ipv6_val.proto = input->flow.ipv6_flow.proto;
+ } else if (input->flow_type == RTE_ETH_FLOW_NONFRAG_IPV6_SCTP) {
+ /* See comments for IPv4 SCTP above. */
+ ipv6_mask.proto = 0xff;
+ ipv6_val.proto = IPPROTO_SCTP;
}
memcpy(ipv6_mask.src_addr, masks->ipv6_mask.src_ip,
sizeof(ipv6_mask.src_addr));
--
2.16.2
* [dpdk-dev] [PATCH v2 03/13] net/enic: fix SCTP match for flow API
2019-03-02 10:42 [dpdk-dev] [PATCH v2 00/13] net/enic: 19.05 updates Hyong Youb Kim
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 01/13] net/enic: remove unused code Hyong Youb Kim
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 02/13] net/enic: fix flow director SCTP matching Hyong Youb Kim
@ 2019-03-02 10:42 ` Hyong Youb Kim
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 04/13] net/enic: allow flow mark ID 0 Hyong Youb Kim
` (10 subsequent siblings)
13 siblings, 0 replies; 18+ messages in thread
From: Hyong Youb Kim @ 2019-03-02 10:42 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev, John Daley, Hyong Youb Kim, stable
The driver needs to explicitly set the protocol number (132) in the IP
header pattern, as the current firmware filter API lacks a "match SCTP
packet" flag. Otherwise, the resulting NIC filter may lead to false
positives (i.e. NIC reporting non-SCTP packets as SCTP packets). The
flow director handler does the same (enic_clsf.c).
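For example, a hypothetical testpmd rule like the following would,
before this fix, also match non-SCTP IPv4 packets:

    flow create 0 ingress pattern eth / ipv4 / sctp / end actions queue index 0 / end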
Fixes: 6ced137607d0 ("net/enic: flow API for NICs with advanced filters enabled")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
---
drivers/net/enic/enic_flow.c | 28 ++++++++++++++++++++++++++--
1 file changed, 26 insertions(+), 2 deletions(-)
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index bb9ed037a..55d8d50a1 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -70,7 +70,6 @@ static enic_copy_item_fn enic_copy_item_ipv6_v2;
static enic_copy_item_fn enic_copy_item_udp_v2;
static enic_copy_item_fn enic_copy_item_tcp_v2;
static enic_copy_item_fn enic_copy_item_sctp_v2;
-static enic_copy_item_fn enic_copy_item_sctp_v2;
static enic_copy_item_fn enic_copy_item_vxlan_v2;
static copy_action_fn enic_copy_action_v1;
static copy_action_fn enic_copy_action_v2;
@@ -237,7 +236,7 @@ static const struct enic_items enic_items_v3[] = {
},
[RTE_FLOW_ITEM_TYPE_SCTP] = {
.copy_item = enic_copy_item_sctp_v2,
- .valid_start_item = 1,
+ .valid_start_item = 0,
.prev_items = (const enum rte_flow_item_type[]) {
RTE_FLOW_ITEM_TYPE_IPV4,
RTE_FLOW_ITEM_TYPE_IPV6,
@@ -819,12 +818,37 @@ enic_copy_item_sctp_v2(const struct rte_flow_item *item,
const struct rte_flow_item_sctp *spec = item->spec;
const struct rte_flow_item_sctp *mask = item->mask;
struct filter_generic_1 *gp = &enic_filter->u.generic_1;
+ uint8_t *ip_proto_mask = NULL;
+ uint8_t *ip_proto = NULL;
FLOW_TRACE();
if (*inner_ofst)
return ENOTSUP;
+ /*
+ * The NIC filter API has no flags for "match sctp", so explicitly set
+ * the protocol number in the IP pattern.
+ */
+ if (gp->val_flags & FILTER_GENERIC_1_IPV4) {
+ struct ipv4_hdr *ip;
+ ip = (struct ipv4_hdr *)gp->layer[FILTER_GENERIC_1_L3].mask;
+ ip_proto_mask = &ip->next_proto_id;
+ ip = (struct ipv4_hdr *)gp->layer[FILTER_GENERIC_1_L3].val;
+ ip_proto = &ip->next_proto_id;
+ } else if (gp->val_flags & FILTER_GENERIC_1_IPV6) {
+ struct ipv6_hdr *ip;
+ ip = (struct ipv6_hdr *)gp->layer[FILTER_GENERIC_1_L3].mask;
+ ip_proto_mask = &ip->proto;
+ ip = (struct ipv6_hdr *)gp->layer[FILTER_GENERIC_1_L3].val;
+ ip_proto = &ip->proto;
+ } else {
+ /* Need IPv4/IPv6 pattern first */
+ return EINVAL;
+ }
+ *ip_proto = IPPROTO_SCTP;
+ *ip_proto_mask = 0xff;
+
/* Match all if no spec */
if (!spec)
return 0;
--
2.16.2
* [dpdk-dev] [PATCH v2 04/13] net/enic: allow flow mark ID 0
2019-03-02 10:42 [dpdk-dev] [PATCH v2 00/13] net/enic: 19.05 updates Hyong Youb Kim
` (2 preceding siblings ...)
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 03/13] net/enic: fix SCTP match for flow API Hyong Youb Kim
@ 2019-03-02 10:42 ` Hyong Youb Kim
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 05/13] net/enic: check for unsupported flow item types Hyong Youb Kim
` (9 subsequent siblings)
13 siblings, 0 replies; 18+ messages in thread
From: Hyong Youb Kim @ 2019-03-02 10:42 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev, John Daley, Hyong Youb Kim, stable
The driver currently accepts mark ID 0 but does not report it in
the matching packet's mbuf. For example, the following testpmd command
succeeds. But, the mbuf of a matching IPv4 UDP packet does not have
PKT_RX_FDIR_ID set.
flow create 0 ingress pattern ... actions mark id 0 / queue index 0 / end
The problem has to do with mapping mark IDs (32-bit) to NIC filter
IDs. Filter ID is currently 16-bit, so values greater than 0xffff are
rejected. The firmware reserves filter ID 0 for filters that do not
mark (e.g. steer w/o mark). And, the driver reserves 0xffff for the
flag action. This leaves 1...0xfffe for app use.
It is possible to simply reject mark ID 0 as unsupported. But, 0 is
commonly used (e.g. OVS-DPDK and VPP). So, when adding a filter, set
filter ID = mark ID + 1 to support mark ID 0. The receive handler
subtracts 1 from filter ID to get back the original mark ID.
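The mapping is, in effect (a sketch; variable names are illustrative):

    /* add path: mark IDs 0..0xfffd map to filter IDs 1..0xfffe */
    filter_id = mark_id + 1;
    /* receive path: recover the mark ID reported in the mbuf */
    mbuf->hash.fdir.hi = filter_id - 1;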
Fixes: dfbd6a9cb504 ("net/enic: extend flow director support for 1300 series")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
---
doc/guides/nics/enic.rst | 1 +
drivers/net/enic/enic_flow.c | 15 +++++++++++----
drivers/net/enic/enic_rxtx_common.h | 3 ++-
3 files changed, 14 insertions(+), 5 deletions(-)
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index bc38f51aa..e456e6c2d 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -450,6 +450,7 @@ PKT_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the
1000 for 1300 series VICs). Filters are checked for matching in the order they
were added. Since there currently is no grouping or priority support,
'catch-all' filters should be added last.
+ - The supported range of IDs for the 'MARK' action is 0 - 0xFFFD.
- **Statistics**
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index 55d8d50a1..e12a6ec73 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -1081,12 +1081,18 @@ enic_copy_action_v2(const struct rte_flow_action actions[],
if (overlap & MARK)
return ENOTSUP;
overlap |= MARK;
- /* ENIC_MAGIC_FILTER_ID is reserved and is the highest
- * in the range of allows mark ids.
+ /*
+ * Map mark ID (32-bit) to filter ID (16-bit):
+ * - Reject values > 16 bits
+ * - Filter ID 0 is reserved for filters that steer
+ * but not mark. So add 1 to the mark ID to avoid
+ * using 0.
+ * - Filter ID (ENIC_MAGIC_FILTER_ID = 0xffff) is
+ * reserved for the "flag" action below.
*/
- if (mark->id >= ENIC_MAGIC_FILTER_ID)
+ if (mark->id >= ENIC_MAGIC_FILTER_ID - 1)
return EINVAL;
- enic_action->filter_id = mark->id;
+ enic_action->filter_id = mark->id + 1;
enic_action->flags |= FILTER_ACTION_FILTER_ID_FLAG;
break;
}
@@ -1094,6 +1100,7 @@ enic_copy_action_v2(const struct rte_flow_action actions[],
if (overlap & MARK)
return ENOTSUP;
overlap |= MARK;
+ /* ENIC_MAGIC_FILTER_ID is reserved for flagging */
enic_action->filter_id = ENIC_MAGIC_FILTER_ID;
enic_action->flags |= FILTER_ACTION_FILTER_ID_FLAG;
break;
diff --git a/drivers/net/enic/enic_rxtx_common.h b/drivers/net/enic/enic_rxtx_common.h
index bfbb4909e..66f631dfe 100644
--- a/drivers/net/enic/enic_rxtx_common.h
+++ b/drivers/net/enic/enic_rxtx_common.h
@@ -226,7 +226,8 @@ enic_cq_rx_to_pkt_flags(struct cq_desc *cqd, struct rte_mbuf *mbuf)
if (filter_id) {
pkt_flags |= PKT_RX_FDIR;
if (filter_id != ENIC_MAGIC_FILTER_ID) {
- mbuf->hash.fdir.hi = clsf_cqd->filter_id;
+ /* filter_id = mark id + 1, so subtract 1 */
+ mbuf->hash.fdir.hi = filter_id - 1;
pkt_flags |= PKT_RX_FDIR_ID;
}
}
--
2.16.2
* [dpdk-dev] [PATCH v2 05/13] net/enic: check for unsupported flow item types
2019-03-02 10:42 [dpdk-dev] [PATCH v2 00/13] net/enic: 19.05 updates Hyong Youb Kim
` (3 preceding siblings ...)
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 04/13] net/enic: allow flow mark ID 0 Hyong Youb Kim
@ 2019-03-02 10:42 ` Hyong Youb Kim
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 06/13] net/enic: enable limited RSS flow action Hyong Youb Kim
` (8 subsequent siblings)
13 siblings, 0 replies; 18+ messages in thread
From: Hyong Youb Kim @ 2019-03-02 10:42 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev, John Daley, Hyong Youb Kim, stable
Currently, a pattern with an unsupported item type causes a segfault,
because the flow handler uses the type as an array index without
checking bounds. Add an explicit check for unsupported item types to
avoid out-of-bounds accesses.
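The added guard boils down to the following (a sketch; the exact code
is in the diff below):

    if (item->type > cap->max_item_type ||
        cap->item_info[item->type].copy_item == NULL) {
            rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
                               NULL, "Unsupported item.");
            return -rte_errno;
    }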
Fixes: 6ced137607d0 ("net/enic: flow API for NICs with advanced filters enabled")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
---
drivers/net/enic/enic_flow.c | 18 +++++++++++++++---
1 file changed, 15 insertions(+), 3 deletions(-)
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index e12a6ec73..c60476c8c 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -40,6 +40,8 @@ struct enic_items {
struct enic_filter_cap {
/** list of valid items and their handlers and attributes. */
const struct enic_items *item_info;
+ /* Max type in the above list, used to detect unsupported types */
+ enum rte_flow_item_type max_item_type;
};
/* functions for copying flow actions into enic actions */
@@ -257,12 +259,15 @@ static const struct enic_items enic_items_v3[] = {
static const struct enic_filter_cap enic_filter_cap[] = {
[FILTER_IPV4_5TUPLE] = {
.item_info = enic_items_v1,
+ .max_item_type = RTE_FLOW_ITEM_TYPE_TCP,
},
[FILTER_USNIC_IP] = {
.item_info = enic_items_v2,
+ .max_item_type = RTE_FLOW_ITEM_TYPE_VXLAN,
},
[FILTER_DPDK_1] = {
.item_info = enic_items_v3,
+ .max_item_type = RTE_FLOW_ITEM_TYPE_VXLAN,
},
};
@@ -946,7 +951,7 @@ item_stacking_valid(enum rte_flow_item_type prev_item,
*/
static int
enic_copy_filter(const struct rte_flow_item pattern[],
- const struct enic_items *items_info,
+ const struct enic_filter_cap *cap,
struct filter_v2 *enic_filter,
struct rte_flow_error *error)
{
@@ -969,7 +974,14 @@ enic_copy_filter(const struct rte_flow_item pattern[],
if (item->type == RTE_FLOW_ITEM_TYPE_VOID)
continue;
- item_info = &items_info[item->type];
+ item_info = &cap->item_info[item->type];
+ if (item->type > cap->max_item_type ||
+ item_info->copy_item == NULL) {
+ rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ NULL, "Unsupported item.");
+ return -rte_errno;
+ }
/* check to see if item stacking is valid */
if (!item_stacking_valid(prev_item, item_info, is_first_item))
@@ -1423,7 +1435,7 @@ enic_flow_parse(struct rte_eth_dev *dev,
return -rte_errno;
}
enic_filter->type = enic->flow_filter_mode;
- ret = enic_copy_filter(pattern, enic_filter_cap->item_info,
+ ret = enic_copy_filter(pattern, enic_filter_cap,
enic_filter, error);
return ret;
}
--
2.16.2
* [dpdk-dev] [PATCH v2 06/13] net/enic: enable limited RSS flow action
2019-03-02 10:42 [dpdk-dev] [PATCH v2 00/13] net/enic: 19.05 updates Hyong Youb Kim
` (4 preceding siblings ...)
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 05/13] net/enic: check for unsupported flow item types Hyong Youb Kim
@ 2019-03-02 10:42 ` Hyong Youb Kim
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 07/13] net/enic: enable limited PASSTHRU " Hyong Youb Kim
` (7 subsequent siblings)
13 siblings, 0 replies; 18+ messages in thread
From: Hyong Youb Kim @ 2019-03-02 10:42 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev, John Daley, Hyong Youb Kim
Some apps like OVS-DPDK use MARK+RSS flow rules in order to offload
packet matching to the NIC. The RSS action in such flow rules simply
indicates "receive packet normally", without trying to override the
port-wide RSS. The action is included in the flow rules simply to terminate
them, as MARK is not a fate-deciding action. And, the RSS action uses the
most basic config: default hash, level, types, null key, and identity
queue mapping.
Recent VIC adapters can support these "mark and receive" flow
rules. So, enable support for RSS action for this limited use case.
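A hypothetical testpmd rule of this "mark and receive normally" shape:

    flow create 0 ingress pattern eth / ipv4 / udp / end actions mark id 1 / rss / end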
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
---
doc/guides/nics/enic.rst | 2 +-
doc/guides/rel_notes/release_19_05.rst | 3 +++
drivers/net/enic/enic_flow.c | 48 +++++++++++++++++++++++++++++-----
3 files changed, 46 insertions(+), 7 deletions(-)
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index e456e6c2d..ed093ef44 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -256,7 +256,7 @@ Generic Flow API is supported. The baseline support is:
- Attributes: ingress
- Items: eth, ipv4, ipv6, udp, tcp, vxlan, inner eth, ipv4, ipv6, udp, tcp
- - Actions: queue, mark, drop, flag and void
+ - Actions: queue, mark, drop, flag, rss, and void
- Selectors: 'is', 'spec' and 'mask'. 'last' is not supported
- In total, up to 64 bytes of mask is allowed across all headers
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 4a3e2a7f3..73403c79e 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -77,6 +77,9 @@ New Features
which includes the directory name, lib name, filenames, makefile, docs,
macros, functions, structs and any other strings in the code.
+* **Updated the enic driver.**
+
+ * Added limited support for RSS.
Removed Items
-------------
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index c60476c8c..0f6b6b930 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -45,7 +45,8 @@ struct enic_filter_cap {
};
/* functions for copying flow actions into enic actions */
-typedef int (copy_action_fn)(const struct rte_flow_action actions[],
+typedef int (copy_action_fn)(struct enic *enic,
+ const struct rte_flow_action actions[],
struct filter_action_v2 *enic_action);
/* functions for copying items into enic filters */
@@ -57,8 +58,7 @@ struct enic_action_cap {
/** list of valid actions */
const enum rte_flow_action_type *actions;
/** copy function for a particular NIC */
- int (*copy_fn)(const struct rte_flow_action actions[],
- struct filter_action_v2 *enic_action);
+ copy_action_fn *copy_fn;
};
/* Forward declarations */
@@ -282,6 +282,7 @@ static const enum rte_flow_action_type enic_supported_actions_v2_id[] = {
RTE_FLOW_ACTION_TYPE_QUEUE,
RTE_FLOW_ACTION_TYPE_MARK,
RTE_FLOW_ACTION_TYPE_FLAG,
+ RTE_FLOW_ACTION_TYPE_RSS,
RTE_FLOW_ACTION_TYPE_END,
};
@@ -290,6 +291,7 @@ static const enum rte_flow_action_type enic_supported_actions_v2_drop[] = {
RTE_FLOW_ACTION_TYPE_MARK,
RTE_FLOW_ACTION_TYPE_FLAG,
RTE_FLOW_ACTION_TYPE_DROP,
+ RTE_FLOW_ACTION_TYPE_RSS,
RTE_FLOW_ACTION_TYPE_END,
};
@@ -299,6 +301,7 @@ static const enum rte_flow_action_type enic_supported_actions_v2_count[] = {
RTE_FLOW_ACTION_TYPE_FLAG,
RTE_FLOW_ACTION_TYPE_DROP,
RTE_FLOW_ACTION_TYPE_COUNT,
+ RTE_FLOW_ACTION_TYPE_RSS,
RTE_FLOW_ACTION_TYPE_END,
};
@@ -1016,7 +1019,8 @@ enic_copy_filter(const struct rte_flow_item pattern[],
* @param error[out]
*/
static int
-enic_copy_action_v1(const struct rte_flow_action actions[],
+enic_copy_action_v1(__rte_unused struct enic *enic,
+ const struct rte_flow_action actions[],
struct filter_action_v2 *enic_action)
{
enum { FATE = 1, };
@@ -1062,7 +1066,8 @@ enic_copy_action_v1(const struct rte_flow_action actions[],
* @param error[out]
*/
static int
-enic_copy_action_v2(const struct rte_flow_action actions[],
+enic_copy_action_v2(struct enic *enic,
+ const struct rte_flow_action actions[],
struct filter_action_v2 *enic_action)
{
enum { FATE = 1, MARK = 2, };
@@ -1128,6 +1133,37 @@ enic_copy_action_v2(const struct rte_flow_action actions[],
enic_action->flags |= FILTER_ACTION_COUNTER_FLAG;
break;
}
+ case RTE_FLOW_ACTION_TYPE_RSS: {
+ const struct rte_flow_action_rss *rss =
+ (const struct rte_flow_action_rss *)
+ actions->conf;
+ bool allow;
+ uint16_t i;
+
+ /*
+ * Hardware does not support general RSS actions, but
+ * we can still support the dummy one that is used to
+ * "receive normally".
+ */
+ allow = rss->func == RTE_ETH_HASH_FUNCTION_DEFAULT &&
+ rss->level == 0 &&
+ (rss->types == 0 ||
+ rss->types == enic->rss_hf) &&
+ rss->queue_num == enic->rq_count &&
+ rss->key_len == 0;
+ /* Identity queue map is ok */
+ for (i = 0; i < rss->queue_num; i++)
+ allow = allow && (i == rss->queue[i]);
+ if (!allow)
+ return ENOTSUP;
+ if (overlap & FATE)
+ return ENOTSUP;
+ /* Need MARK or FLAG */
+ if (!(overlap & MARK))
+ return ENOTSUP;
+ overlap |= FATE;
+ break;
+ }
case RTE_FLOW_ACTION_TYPE_VOID:
continue;
default:
@@ -1418,7 +1454,7 @@ enic_flow_parse(struct rte_eth_dev *dev,
action, "Invalid action.");
return -rte_errno;
}
- ret = enic_action_cap->copy_fn(actions, enic_action);
+ ret = enic_action_cap->copy_fn(enic, actions, enic_action);
if (ret) {
rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_HANDLE,
NULL, "Unsupported action.");
--
2.16.2
* [dpdk-dev] [PATCH v2 07/13] net/enic: enable limited PASSTHRU flow action
2019-03-02 10:42 [dpdk-dev] [PATCH v2 00/13] net/enic: 19.05 updates Hyong Youb Kim
` (5 preceding siblings ...)
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 06/13] net/enic: enable limited RSS flow action Hyong Youb Kim
@ 2019-03-02 10:42 ` Hyong Youb Kim
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 08/13] net/enic: move arguments into struct Hyong Youb Kim
` (6 subsequent siblings)
13 siblings, 0 replies; 18+ messages in thread
From: Hyong Youb Kim @ 2019-03-02 10:42 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev, John Daley, Hyong Youb Kim
Some apps like VPP use PASSTHRU+MARK flow rules to offload packet
matching to the NIC. Just like MARK+RSS used by OVS-DPDK and others,
PASSTHRU+MARK is used to "mark and then receive normally". Recent VIC
adapters support such flow rules, so enable PASSTHRU for this limited
use case.
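A hypothetical testpmd rule using this combination:

    flow create 0 ingress pattern eth / ipv4 / udp / end actions passthru / mark id 1 / end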
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
---
doc/guides/nics/enic.rst | 6 +++++-
doc/guides/rel_notes/release_19_05.rst | 1 +
drivers/net/enic/enic_flow.c | 20 ++++++++++++++++++++
3 files changed, 26 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index ed093ef44..526e58ce9 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -256,7 +256,7 @@ Generic Flow API is supported. The baseline support is:
- Attributes: ingress
- Items: eth, ipv4, ipv6, udp, tcp, vxlan, inner eth, ipv4, ipv6, udp, tcp
- - Actions: queue, mark, drop, flag, rss, and void
+ - Actions: queue, mark, drop, flag, rss, passthru, and void
- Selectors: 'is', 'spec' and 'mask'. 'last' is not supported
- In total, up to 64 bytes of mask is allowed across all headers
@@ -451,6 +451,10 @@ PKT_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the
were added. Since there currently is no grouping or priority support,
'catch-all' filters should be added last.
- The supported range of IDs for the 'MARK' action is 0 - 0xFFFD.
+ - RSS and PASSTHRU actions only support "receive normally". They are limited
+ to supporting MARK + RSS and PASSTHRU + MARK to allow the application to mark
+ packets and then receive them normally. These require 1400 series VIC adapters
+ and latest firmware.
- **Statistics**
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index 73403c79e..c8c12c47a 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -80,6 +80,7 @@ New Features
* **Updated the enic driver.**
* Added limited support for RSS.
+ * Added limited support for PASSTHRU.
Removed Items
-------------
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index 0f6b6b930..c6ed9e1b9 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -283,6 +283,7 @@ static const enum rte_flow_action_type enic_supported_actions_v2_id[] = {
RTE_FLOW_ACTION_TYPE_MARK,
RTE_FLOW_ACTION_TYPE_FLAG,
RTE_FLOW_ACTION_TYPE_RSS,
+ RTE_FLOW_ACTION_TYPE_PASSTHRU,
RTE_FLOW_ACTION_TYPE_END,
};
@@ -292,6 +293,7 @@ static const enum rte_flow_action_type enic_supported_actions_v2_drop[] = {
RTE_FLOW_ACTION_TYPE_FLAG,
RTE_FLOW_ACTION_TYPE_DROP,
RTE_FLOW_ACTION_TYPE_RSS,
+ RTE_FLOW_ACTION_TYPE_PASSTHRU,
RTE_FLOW_ACTION_TYPE_END,
};
@@ -302,6 +304,7 @@ static const enum rte_flow_action_type enic_supported_actions_v2_count[] = {
RTE_FLOW_ACTION_TYPE_DROP,
RTE_FLOW_ACTION_TYPE_COUNT,
RTE_FLOW_ACTION_TYPE_RSS,
+ RTE_FLOW_ACTION_TYPE_PASSTHRU,
RTE_FLOW_ACTION_TYPE_END,
};
@@ -1072,6 +1075,7 @@ enic_copy_action_v2(struct enic *enic,
{
enum { FATE = 1, MARK = 2, };
uint32_t overlap = 0;
+ bool passthru = false;
FLOW_TRACE();
@@ -1164,6 +1168,19 @@ enic_copy_action_v2(struct enic *enic,
overlap |= FATE;
break;
}
+ case RTE_FLOW_ACTION_TYPE_PASSTHRU: {
+ /*
+ * Like RSS above, PASSTHRU + MARK may be used to
+ * "mark and then receive normally". MARK usually comes
+ * after PASSTHRU, so remember we have seen passthru
+ * and check for mark later.
+ */
+ if (overlap & FATE)
+ return ENOTSUP;
+ overlap |= FATE;
+ passthru = true;
+ break;
+ }
case RTE_FLOW_ACTION_TYPE_VOID:
continue;
default:
@@ -1171,6 +1188,9 @@ enic_copy_action_v2(struct enic *enic,
break;
}
}
+ /* Only PASSTHRU + MARK is allowed */
+ if (passthru && !(overlap & MARK))
+ return ENOTSUP;
if (!(overlap & FATE))
return ENOTSUP;
enic_action->type = FILTER_ACTION_V2;
--
2.16.2
* [dpdk-dev] [PATCH v2 08/13] net/enic: move arguments into struct
2019-03-02 10:42 [dpdk-dev] [PATCH v2 00/13] net/enic: 19.05 updates Hyong Youb Kim
` (6 preceding siblings ...)
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 07/13] net/enic: enable limited PASSTHRU " Hyong Youb Kim
@ 2019-03-02 10:42 ` Hyong Youb Kim
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 09/13] net/enic: enable limited support for RAW flow item Hyong Youb Kim
` (5 subsequent siblings)
13 siblings, 0 replies; 18+ messages in thread
From: Hyong Youb Kim @ 2019-03-02 10:42 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev, John Daley, Hyong Youb Kim
There are many copy_item functions, all with the same arguments, which
makes it difficult to add/change arguments. Move the arguments into a
struct to help subsequent commits that will add/fix features. Also
remove self-explanatory verbose comments for these local functions.
These changes are purely mechanical and have no impact on
functionality.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
---
drivers/net/enic/enic_flow.c | 209 ++++++++++++++-----------------------------
1 file changed, 67 insertions(+), 142 deletions(-)
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index c6ed9e1b9..fda641b6f 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -23,11 +23,27 @@
rte_log(RTE_LOG_ ## level, enicpmd_logtype_flow, \
fmt "\n", ##args)
+/*
+ * Common arguments passed to copy_item functions. Use this structure
+ * so we can easily add new arguments.
+ * item: Item specification.
+ * filter: Partially filled in NIC filter structure.
+ * inner_ofst: If zero, this is an outer header. If non-zero, this is
+ * the offset into L5 where the header begins.
+ */
+struct copy_item_args {
+ const struct rte_flow_item *item;
+ struct filter_v2 *filter;
+ uint8_t *inner_ofst;
+};
+
+/* functions for copying items into enic filters */
+typedef int (enic_copy_item_fn)(struct copy_item_args *arg);
+
/** Info about how to copy items into enic filters. */
struct enic_items {
/** Function for copying and validating an item. */
- int (*copy_item)(const struct rte_flow_item *item,
- struct filter_v2 *enic_filter, u8 *inner_ofst);
+ enic_copy_item_fn *copy_item;
/** List of valid previous items. */
const enum rte_flow_item_type * const prev_items;
/** True if it's OK for this item to be the first item. For some NIC
@@ -49,10 +65,6 @@ typedef int (copy_action_fn)(struct enic *enic,
const struct rte_flow_action actions[],
struct filter_action_v2 *enic_action);
-/* functions for copying items into enic filters */
-typedef int(enic_copy_item_fn)(const struct rte_flow_item *item,
- struct filter_v2 *enic_filter, u8 *inner_ofst);
-
/** Action capabilities for various NICs. */
struct enic_action_cap {
/** list of valid actions */
@@ -340,20 +352,12 @@ mask_exact_match(const u8 *supported, const u8 *supplied,
return 1;
}
-/**
- * Copy IPv4 item into version 1 NIC filter.
- *
- * @param item[in]
- * Item specification.
- * @param enic_filter[out]
- * Partially filled in NIC filter structure.
- * @param inner_ofst[in]
- * Should always be 0 for version 1.
- */
static int
-enic_copy_item_ipv4_v1(const struct rte_flow_item *item,
- struct filter_v2 *enic_filter, u8 *inner_ofst)
+enic_copy_item_ipv4_v1(struct copy_item_args *arg)
{
+ const struct rte_flow_item *item = arg->item;
+ struct filter_v2 *enic_filter = arg->filter;
+ uint8_t *inner_ofst = arg->inner_ofst;
const struct rte_flow_item_ipv4 *spec = item->spec;
const struct rte_flow_item_ipv4 *mask = item->mask;
struct filter_ipv4_5tuple *enic_5tup = &enic_filter->u.ipv4;
@@ -390,20 +394,12 @@ enic_copy_item_ipv4_v1(const struct rte_flow_item *item,
return 0;
}
-/**
- * Copy UDP item into version 1 NIC filter.
- *
- * @param item[in]
- * Item specification.
- * @param enic_filter[out]
- * Partially filled in NIC filter structure.
- * @param inner_ofst[in]
- * Should always be 0 for version 1.
- */
static int
-enic_copy_item_udp_v1(const struct rte_flow_item *item,
- struct filter_v2 *enic_filter, u8 *inner_ofst)
+enic_copy_item_udp_v1(struct copy_item_args *arg)
{
+ const struct rte_flow_item *item = arg->item;
+ struct filter_v2 *enic_filter = arg->filter;
+ uint8_t *inner_ofst = arg->inner_ofst;
const struct rte_flow_item_udp *spec = item->spec;
const struct rte_flow_item_udp *mask = item->mask;
struct filter_ipv4_5tuple *enic_5tup = &enic_filter->u.ipv4;
@@ -441,20 +437,12 @@ enic_copy_item_udp_v1(const struct rte_flow_item *item,
return 0;
}
-/**
- * Copy TCP item into version 1 NIC filter.
- *
- * @param item[in]
- * Item specification.
- * @param enic_filter[out]
- * Partially filled in NIC filter structure.
- * @param inner_ofst[in]
- * Should always be 0 for version 1.
- */
static int
-enic_copy_item_tcp_v1(const struct rte_flow_item *item,
- struct filter_v2 *enic_filter, u8 *inner_ofst)
+enic_copy_item_tcp_v1(struct copy_item_args *arg)
{
+ const struct rte_flow_item *item = arg->item;
+ struct filter_v2 *enic_filter = arg->filter;
+ uint8_t *inner_ofst = arg->inner_ofst;
const struct rte_flow_item_tcp *spec = item->spec;
const struct rte_flow_item_tcp *mask = item->mask;
struct filter_ipv4_5tuple *enic_5tup = &enic_filter->u.ipv4;
@@ -492,21 +480,12 @@ enic_copy_item_tcp_v1(const struct rte_flow_item *item,
return 0;
}
-/**
- * Copy ETH item into version 2 NIC filter.
- *
- * @param item[in]
- * Item specification.
- * @param enic_filter[out]
- * Partially filled in NIC filter structure.
- * @param inner_ofst[in]
- * If zero, this is an outer header. If non-zero, this is the offset into L5
- * where the header begins.
- */
static int
-enic_copy_item_eth_v2(const struct rte_flow_item *item,
- struct filter_v2 *enic_filter, u8 *inner_ofst)
+enic_copy_item_eth_v2(struct copy_item_args *arg)
{
+ const struct rte_flow_item *item = arg->item;
+ struct filter_v2 *enic_filter = arg->filter;
+ uint8_t *inner_ofst = arg->inner_ofst;
struct ether_hdr enic_spec;
struct ether_hdr enic_mask;
const struct rte_flow_item_eth *spec = item->spec;
@@ -555,21 +534,12 @@ enic_copy_item_eth_v2(const struct rte_flow_item *item,
return 0;
}
-/**
- * Copy VLAN item into version 2 NIC filter.
- *
- * @param item[in]
- * Item specification.
- * @param enic_filter[out]
- * Partially filled in NIC filter structure.
- * @param inner_ofst[in]
- * If zero, this is an outer header. If non-zero, this is the offset into L5
- * where the header begins.
- */
static int
-enic_copy_item_vlan_v2(const struct rte_flow_item *item,
- struct filter_v2 *enic_filter, u8 *inner_ofst)
+enic_copy_item_vlan_v2(struct copy_item_args *arg)
{
+ const struct rte_flow_item *item = arg->item;
+ struct filter_v2 *enic_filter = arg->filter;
+ uint8_t *inner_ofst = arg->inner_ofst;
const struct rte_flow_item_vlan *spec = item->spec;
const struct rte_flow_item_vlan *mask = item->mask;
struct filter_generic_1 *gp = &enic_filter->u.generic_1;
@@ -612,20 +582,12 @@ enic_copy_item_vlan_v2(const struct rte_flow_item *item,
return 0;
}
-/**
- * Copy IPv4 item into version 2 NIC filter.
- *
- * @param item[in]
- * Item specification.
- * @param enic_filter[out]
- * Partially filled in NIC filter structure.
- * @param inner_ofst[in]
- * Must be 0. Don't support inner IPv4 filtering.
- */
static int
-enic_copy_item_ipv4_v2(const struct rte_flow_item *item,
- struct filter_v2 *enic_filter, u8 *inner_ofst)
+enic_copy_item_ipv4_v2(struct copy_item_args *arg)
{
+ const struct rte_flow_item *item = arg->item;
+ struct filter_v2 *enic_filter = arg->filter;
+ uint8_t *inner_ofst = arg->inner_ofst;
const struct rte_flow_item_ipv4 *spec = item->spec;
const struct rte_flow_item_ipv4 *mask = item->mask;
struct filter_generic_1 *gp = &enic_filter->u.generic_1;
@@ -662,20 +624,12 @@ enic_copy_item_ipv4_v2(const struct rte_flow_item *item,
return 0;
}
-/**
- * Copy IPv6 item into version 2 NIC filter.
- *
- * @param item[in]
- * Item specification.
- * @param enic_filter[out]
- * Partially filled in NIC filter structure.
- * @param inner_ofst[in]
- * Must be 0. Don't support inner IPv6 filtering.
- */
static int
-enic_copy_item_ipv6_v2(const struct rte_flow_item *item,
- struct filter_v2 *enic_filter, u8 *inner_ofst)
+enic_copy_item_ipv6_v2(struct copy_item_args *arg)
{
+ const struct rte_flow_item *item = arg->item;
+ struct filter_v2 *enic_filter = arg->filter;
+ uint8_t *inner_ofst = arg->inner_ofst;
const struct rte_flow_item_ipv6 *spec = item->spec;
const struct rte_flow_item_ipv6 *mask = item->mask;
struct filter_generic_1 *gp = &enic_filter->u.generic_1;
@@ -712,20 +666,12 @@ enic_copy_item_ipv6_v2(const struct rte_flow_item *item,
return 0;
}
-/**
- * Copy UDP item into version 2 NIC filter.
- *
- * @param item[in]
- * Item specification.
- * @param enic_filter[out]
- * Partially filled in NIC filter structure.
- * @param inner_ofst[in]
- * Must be 0. Don't support inner UDP filtering.
- */
static int
-enic_copy_item_udp_v2(const struct rte_flow_item *item,
- struct filter_v2 *enic_filter, u8 *inner_ofst)
+enic_copy_item_udp_v2(struct copy_item_args *arg)
{
+ const struct rte_flow_item *item = arg->item;
+ struct filter_v2 *enic_filter = arg->filter;
+ uint8_t *inner_ofst = arg->inner_ofst;
const struct rte_flow_item_udp *spec = item->spec;
const struct rte_flow_item_udp *mask = item->mask;
struct filter_generic_1 *gp = &enic_filter->u.generic_1;
@@ -762,20 +708,12 @@ enic_copy_item_udp_v2(const struct rte_flow_item *item,
return 0;
}
-/**
- * Copy TCP item into version 2 NIC filter.
- *
- * @param item[in]
- * Item specification.
- * @param enic_filter[out]
- * Partially filled in NIC filter structure.
- * @param inner_ofst[in]
- * Must be 0. Don't support inner TCP filtering.
- */
static int
-enic_copy_item_tcp_v2(const struct rte_flow_item *item,
- struct filter_v2 *enic_filter, u8 *inner_ofst)
+enic_copy_item_tcp_v2(struct copy_item_args *arg)
{
+ const struct rte_flow_item *item = arg->item;
+ struct filter_v2 *enic_filter = arg->filter;
+ uint8_t *inner_ofst = arg->inner_ofst;
const struct rte_flow_item_tcp *spec = item->spec;
const struct rte_flow_item_tcp *mask = item->mask;
struct filter_generic_1 *gp = &enic_filter->u.generic_1;
@@ -812,20 +750,12 @@ enic_copy_item_tcp_v2(const struct rte_flow_item *item,
return 0;
}
-/**
- * Copy SCTP item into version 2 NIC filter.
- *
- * @param item[in]
- * Item specification.
- * @param enic_filter[out]
- * Partially filled in NIC filter structure.
- * @param inner_ofst[in]
- * Must be 0. Don't support inner SCTP filtering.
- */
static int
-enic_copy_item_sctp_v2(const struct rte_flow_item *item,
- struct filter_v2 *enic_filter, u8 *inner_ofst)
+enic_copy_item_sctp_v2(struct copy_item_args *arg)
{
+ const struct rte_flow_item *item = arg->item;
+ struct filter_v2 *enic_filter = arg->filter;
+ uint8_t *inner_ofst = arg->inner_ofst;
const struct rte_flow_item_sctp *spec = item->spec;
const struct rte_flow_item_sctp *mask = item->mask;
struct filter_generic_1 *gp = &enic_filter->u.generic_1;
@@ -874,20 +804,12 @@ enic_copy_item_sctp_v2(const struct rte_flow_item *item,
return 0;
}
-/**
- * Copy UDP item into version 2 NIC filter.
- *
- * @param item[in]
- * Item specification.
- * @param enic_filter[out]
- * Partially filled in NIC filter structure.
- * @param inner_ofst[in]
- * Must be 0. VxLAN headers always start at the beginning of L5.
- */
static int
-enic_copy_item_vxlan_v2(const struct rte_flow_item *item,
- struct filter_v2 *enic_filter, u8 *inner_ofst)
+enic_copy_item_vxlan_v2(struct copy_item_args *arg)
{
+ const struct rte_flow_item *item = arg->item;
+ struct filter_v2 *enic_filter = arg->filter;
+ uint8_t *inner_ofst = arg->inner_ofst;
const struct rte_flow_item_vxlan *spec = item->spec;
const struct rte_flow_item_vxlan *mask = item->mask;
struct filter_generic_1 *gp = &enic_filter->u.generic_1;
@@ -966,13 +888,15 @@ enic_copy_filter(const struct rte_flow_item pattern[],
u8 inner_ofst = 0; /* If encapsulated, ofst into L5 */
enum rte_flow_item_type prev_item;
const struct enic_items *item_info;
-
+ struct copy_item_args args;
u8 is_first_item = 1;
FLOW_TRACE();
prev_item = 0;
+ args.filter = enic_filter;
+ args.inner_ofst = &inner_ofst;
for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
/* Get info about how to validate and copy the item. If NULL
* is returned the nic does not support the item.
@@ -993,7 +917,8 @@ enic_copy_filter(const struct rte_flow_item pattern[],
if (!item_stacking_valid(prev_item, item_info, is_first_item))
goto stacking_error;
- ret = item_info->copy_item(item, enic_filter, &inner_ofst);
+ args.item = item;
+ ret = item_info->copy_item(&args);
if (ret)
goto item_not_supported;
prev_item = item->type;
--
2.16.2
* [dpdk-dev] [PATCH v2 09/13] net/enic: enable limited support for RAW flow item
2019-03-02 10:42 [dpdk-dev] [PATCH v2 00/13] net/enic: 19.05 updates Hyong Youb Kim
` (7 preceding siblings ...)
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 08/13] net/enic: move arguments into struct Hyong Youb Kim
@ 2019-03-02 10:42 ` Hyong Youb Kim
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 10/13] net/enic: reset VXLAN port regardless of overlay offload Hyong Youb Kim
` (4 subsequent siblings)
13 siblings, 0 replies; 18+ messages in thread
From: Hyong Youb Kim @ 2019-03-02 10:42 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev, John Daley, Hyong Youb Kim
Some apps like VPP use a raw item to match UDP tunnel headers like
VXLAN or GENEVE. The NIC hardware supports such usage via L5 match,
which performs a pattern match on packet data immediately following the
outer L4 header. Accept raw items for these limited use cases.
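For illustration, a hypothetical raw item matching a VXLAN header
(VNI 100) right after the outer UDP header could be set up as follows
(a sketch, not part of the patch; uses standard rte_flow definitions):

    /* VXLAN header layout: flags(1), reserved(3), VNI(3), reserved(1) */
    static const uint8_t vxlan_spec[8] = { 0x08, 0, 0, 0, 0x00, 0x00, 0x64, 0 };
    static const uint8_t vxlan_mask[8] = { 0xff, 0, 0, 0, 0xff, 0xff, 0xff, 0 };
    struct rte_flow_item_raw raw_spec = {
            .relative = 1,  /* pattern starts right after the previous (UDP) item */
            .offset = 0,
            .length = sizeof(vxlan_spec),
            .pattern = vxlan_spec,
    };
    struct rte_flow_item_raw raw_mask = {
            .relative = 1,
            .length = sizeof(vxlan_mask),
            .pattern = vxlan_mask,
    };
    struct rte_flow_item raw_item = {
            .type = RTE_FLOW_ITEM_TYPE_RAW,
            .spec = &raw_spec,
            .mask = &raw_mask,
    };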
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
---
doc/guides/nics/enic.rst | 3 +-
doc/guides/rel_notes/release_19_05.rst | 1 +
drivers/net/enic/enic_flow.c | 65 ++++++++++++++++++++++++++++++++++
3 files changed, 68 insertions(+), 1 deletion(-)
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index 526e58ce9..c1415dc0d 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -255,7 +255,7 @@ Generic Flow API is supported. The baseline support is:
- **1300 and later series VICS with advanced filters enabled**
- Attributes: ingress
- - Items: eth, ipv4, ipv6, udp, tcp, vxlan, inner eth, ipv4, ipv6, udp, tcp
+ - Items: eth, ipv4, ipv6, udp, tcp, vxlan, raw, inner eth, ipv4, ipv6, udp, tcp
- Actions: queue, mark, drop, flag, rss, passthru, and void
- Selectors: 'is', 'spec' and 'mask'. 'last' is not supported
- In total, up to 64 bytes of mask is allowed across all headers
@@ -455,6 +455,7 @@ PKT_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the
to supporting MARK + RSS and PASSTHRU + MARK to allow the application to mark
packets and then receive them normally. These require 1400 series VIC adapters
and latest firmware.
+ - RAW items are limited to matching UDP tunnel headers like VXLAN.
- **Statistics**
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index c8c12c47a..e5f5c0e25 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -79,6 +79,7 @@ New Features
* **Updated the enic driver.**
+ * Added limited support for RAW.
* Added limited support for RSS.
* Added limited support for PASSTHRU.
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index fda641b6f..ffc6ce1da 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -77,6 +77,7 @@ struct enic_action_cap {
static enic_copy_item_fn enic_copy_item_ipv4_v1;
static enic_copy_item_fn enic_copy_item_udp_v1;
static enic_copy_item_fn enic_copy_item_tcp_v1;
+static enic_copy_item_fn enic_copy_item_raw_v2;
static enic_copy_item_fn enic_copy_item_eth_v2;
static enic_copy_item_fn enic_copy_item_vlan_v2;
static enic_copy_item_fn enic_copy_item_ipv4_v2;
@@ -123,6 +124,14 @@ static const struct enic_items enic_items_v1[] = {
* that layer 3 must be specified.
*/
static const struct enic_items enic_items_v2[] = {
+ [RTE_FLOW_ITEM_TYPE_RAW] = {
+ .copy_item = enic_copy_item_raw_v2,
+ .valid_start_item = 0,
+ .prev_items = (const enum rte_flow_item_type[]) {
+ RTE_FLOW_ITEM_TYPE_UDP,
+ RTE_FLOW_ITEM_TYPE_END,
+ },
+ },
[RTE_FLOW_ITEM_TYPE_ETH] = {
.copy_item = enic_copy_item_eth_v2,
.valid_start_item = 1,
@@ -196,6 +205,14 @@ static const struct enic_items enic_items_v2[] = {
/** NICs with Advanced filters enabled */
static const struct enic_items enic_items_v3[] = {
+ [RTE_FLOW_ITEM_TYPE_RAW] = {
+ .copy_item = enic_copy_item_raw_v2,
+ .valid_start_item = 0,
+ .prev_items = (const enum rte_flow_item_type[]) {
+ RTE_FLOW_ITEM_TYPE_UDP,
+ RTE_FLOW_ITEM_TYPE_END,
+ },
+ },
[RTE_FLOW_ITEM_TYPE_ETH] = {
.copy_item = enic_copy_item_eth_v2,
.valid_start_item = 1,
@@ -835,6 +852,54 @@ enic_copy_item_vxlan_v2(struct copy_item_args *arg)
return 0;
}
+/*
+ * Copy raw item into version 2 NIC filter. Currently, raw pattern match is
+ * very limited. It is intended for matching UDP tunnel header (e.g. vxlan
+ * or geneve).
+ */
+static int
+enic_copy_item_raw_v2(struct copy_item_args *arg)
+{
+ const struct rte_flow_item *item = arg->item;
+ struct filter_v2 *enic_filter = arg->filter;
+ uint8_t *inner_ofst = arg->inner_ofst;
+ const struct rte_flow_item_raw *spec = item->spec;
+ const struct rte_flow_item_raw *mask = item->mask;
+ struct filter_generic_1 *gp = &enic_filter->u.generic_1;
+
+ FLOW_TRACE();
+
+ /* Cannot be used for inner packet */
+ if (*inner_ofst)
+ return EINVAL;
+ /* Need both spec and mask */
+ if (!spec || !mask)
+ return EINVAL;
+ /* Only supports relative with offset 0 */
+ if (!spec->relative || spec->offset != 0 || spec->search || spec->limit)
+ return EINVAL;
+ /* Need non-null pattern that fits within the NIC's filter pattern */
+ if (spec->length == 0 || spec->length > FILTER_GENERIC_1_KEY_LEN ||
+ !spec->pattern || !mask->pattern)
+ return EINVAL;
+ /*
+ * Mask fields, including length, are often set to zero. Assume that
+ * means "same as spec" to avoid breaking existing apps. If length
+ * is not zero, then it should be >= spec length.
+ *
+ * No more pattern follows this, so append to the L4 layer instead of
+ * L5 to work with both recent and older VICs.
+ */
+ if (mask->length != 0 && mask->length < spec->length)
+ return EINVAL;
+ memcpy(gp->layer[FILTER_GENERIC_1_L4].mask + sizeof(struct udp_hdr),
+ mask->pattern, spec->length);
+ memcpy(gp->layer[FILTER_GENERIC_1_L4].val + sizeof(struct udp_hdr),
+ spec->pattern, spec->length);
+
+ return 0;
+}
+
/**
* Return 1 if current item is valid on top of the previous one.
*
--
2.16.2
* [dpdk-dev] [PATCH v2 10/13] net/enic: reset VXLAN port regardless of overlay offload
2019-03-02 10:42 [dpdk-dev] [PATCH v2 00/13] net/enic: 19.05 updates Hyong Youb Kim
` (8 preceding siblings ...)
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 09/13] net/enic: enable limited support for RAW flow item Hyong Youb Kim
@ 2019-03-02 10:42 ` Hyong Youb Kim
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 11/13] net/enic: fix a couple issues with VXLAN match Hyong Youb Kim
` (3 subsequent siblings)
13 siblings, 0 replies; 18+ messages in thread
From: Hyong Youb Kim @ 2019-03-02 10:42 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev, John Daley, Hyong Youb Kim, stable
Currently, the driver resets the vxlan port register only if overlay
offload is enabled. But, the register is actually tied to hardware
vxlan parsing, which is an independent feature and is always enabled
even if overlay offload is disabled. If left uninitialized, it can
affect flow rules that match vxlan. So always reset the port number
when HW vxlan parsing is available.
Fixes: 8a4efd17410c ("net/enic: add handlers to add/delete vxlan port number")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
---
drivers/net/enic/enic_main.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/drivers/net/enic/enic_main.c b/drivers/net/enic/enic_main.c
index 2652949a2..ea9eb2edf 100644
--- a/drivers/net/enic/enic_main.c
+++ b/drivers/net/enic/enic_main.c
@@ -1714,8 +1714,15 @@ static int enic_dev_init(struct enic *enic)
PKT_TX_OUTER_IP_CKSUM |
PKT_TX_TUNNEL_MASK;
enic->overlay_offload = true;
- enic->vxlan_port = ENIC_DEFAULT_VXLAN_PORT;
dev_info(enic, "Overlay offload is enabled\n");
+ }
+ /*
+ * Reset the vxlan port if HW vxlan parsing is available. It
+ * is always enabled regardless of overlay offload
+ * enable/disable.
+ */
+ if (enic->vxlan) {
+ enic->vxlan_port = ENIC_DEFAULT_VXLAN_PORT;
/*
* Reset the vxlan port to the default, as the NIC firmware
* does not reset it automatically and keeps the old setting.
--
2.16.2
* [dpdk-dev] [PATCH v2 11/13] net/enic: fix a couple issues with VXLAN match
2019-03-02 10:42 [dpdk-dev] [PATCH v2 00/13] net/enic: 19.05 updates Hyong Youb Kim
` (9 preceding siblings ...)
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 10/13] net/enic: reset VXLAN port regardless of overlay offload Hyong Youb Kim
@ 2019-03-02 10:42 ` Hyong Youb Kim
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 12/13] net/enic: fix an endian bug in VLAN match Hyong Youb Kim
` (2 subsequent siblings)
13 siblings, 0 replies; 18+ messages in thread
From: Hyong Youb Kim @ 2019-03-02 10:42 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev, John Daley, Hyong Youb Kim, stable
The filter API does not have flags for "match VXLAN". Explicitly set
the UDP destination port and mask in the L4 pattern. Otherwise, UDP
packets with non-VXLAN ports may be falsely reported as VXLAN.
1400 series VIC adapters have hardware VXLAN parsing. The L5 buffer on
the NIC starts with the inner Ethernet header, and the VXLAN header is
now in the L4 buffer following the UDP header. So the VXLAN spec/mask
needs to be in the L4 pattern, not L5. Older models still expect the
VXLAN spec/mask in the L5 pattern. Fix up the L4/L5 patterns
accordingly.
Fixes: 6ced137607d0 ("net/enic: flow API for NICs with advanced filters enabled")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
---
drivers/net/enic/enic_flow.c | 46 +++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 45 insertions(+), 1 deletion(-)
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index ffc6ce1da..da43b31dc 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -830,12 +830,23 @@ enic_copy_item_vxlan_v2(struct copy_item_args *arg)
const struct rte_flow_item_vxlan *spec = item->spec;
const struct rte_flow_item_vxlan *mask = item->mask;
struct filter_generic_1 *gp = &enic_filter->u.generic_1;
+ struct udp_hdr *udp;
FLOW_TRACE();
if (*inner_ofst)
return EINVAL;
+ /*
+ * The NIC filter API has no flags for "match vxlan". Set UDP port to
+ * avoid false positives.
+ */
+ gp->mask_flags |= FILTER_GENERIC_1_UDP;
+ gp->val_flags |= FILTER_GENERIC_1_UDP;
+ udp = (struct udp_hdr *)gp->layer[FILTER_GENERIC_1_L4].mask;
+ udp->dst_port = 0xffff;
+ udp = (struct udp_hdr *)gp->layer[FILTER_GENERIC_1_L4].val;
+ udp->dst_port = RTE_BE16(4789);
/* Match all if no spec */
if (!spec)
return 0;
@@ -931,6 +942,36 @@ item_stacking_valid(enum rte_flow_item_type prev_item,
return 0;
}
+/*
+ * Fix up the L5 layer.. HW vxlan parsing removes vxlan header from L5.
+ * Instead it is in L4 following the UDP header. Append the vxlan
+ * pattern to L4 (udp) and shift any inner packet pattern in L5.
+ */
+static void
+fixup_l5_layer(struct enic *enic, struct filter_generic_1 *gp,
+ uint8_t inner_ofst)
+{
+ uint8_t layer[FILTER_GENERIC_1_KEY_LEN];
+ uint8_t inner;
+ uint8_t vxlan;
+
+ if (!(inner_ofst > 0 && enic->vxlan))
+ return;
+ FLOW_TRACE();
+ vxlan = sizeof(struct vxlan_hdr);
+ memcpy(gp->layer[FILTER_GENERIC_1_L4].mask + sizeof(struct udp_hdr),
+ gp->layer[FILTER_GENERIC_1_L5].mask, vxlan);
+ memcpy(gp->layer[FILTER_GENERIC_1_L4].val + sizeof(struct udp_hdr),
+ gp->layer[FILTER_GENERIC_1_L5].val, vxlan);
+ inner = inner_ofst - vxlan;
+ memset(layer, 0, sizeof(layer));
+ memcpy(layer, gp->layer[FILTER_GENERIC_1_L5].mask + vxlan, inner);
+ memcpy(gp->layer[FILTER_GENERIC_1_L5].mask, layer, sizeof(layer));
+ memset(layer, 0, sizeof(layer));
+ memcpy(layer, gp->layer[FILTER_GENERIC_1_L5].val + vxlan, inner);
+ memcpy(gp->layer[FILTER_GENERIC_1_L5].val, layer, sizeof(layer));
+}
+
/**
* Build the internal enic filter structure from the provided pattern. The
* pattern is validated as the items are copied.
@@ -945,6 +986,7 @@ item_stacking_valid(enum rte_flow_item_type prev_item,
static int
enic_copy_filter(const struct rte_flow_item pattern[],
const struct enic_filter_cap *cap,
+ struct enic *enic,
struct filter_v2 *enic_filter,
struct rte_flow_error *error)
{
@@ -989,6 +1031,8 @@ enic_copy_filter(const struct rte_flow_item pattern[],
prev_item = item->type;
is_first_item = 0;
}
+ fixup_l5_layer(enic, &enic_filter->u.generic_1, inner_ofst);
+
return 0;
item_not_supported:
@@ -1481,7 +1525,7 @@ enic_flow_parse(struct rte_eth_dev *dev,
return -rte_errno;
}
enic_filter->type = enic->flow_filter_mode;
- ret = enic_copy_filter(pattern, enic_filter_cap,
+ ret = enic_copy_filter(pattern, enic_filter_cap, enic,
enic_filter, error);
return ret;
}
--
2.16.2
* [dpdk-dev] [PATCH v2 12/13] net/enic: fix an endian bug in VLAN match
2019-03-02 10:42 [dpdk-dev] [PATCH v2 00/13] net/enic: 19.05 updates Hyong Youb Kim
` (10 preceding siblings ...)
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 11/13] net/enic: fix a couple issues with VXLAN match Hyong Youb Kim
@ 2019-03-02 10:42 ` Hyong Youb Kim
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 13/13] net/enic: fix several issues with inner packet matching Hyong Youb Kim
2019-03-04 16:56 ` [dpdk-dev] [PATCH v2 00/13] net/enic: 19.05 updates Ferruh Yigit
13 siblings, 0 replies; 18+ messages in thread
From: Hyong Youb Kim @ 2019-03-02 10:42 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev, John Daley, Hyong Youb Kim, stable
The VLAN fields in the NIC filter use little endian. The VLAN item is
in big endian, so swap bytes.
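As an illustrative sketch (not the actual driver code), the conversion now
applied when copying the TCI from the flow item into the filter's
mask_vlan/val_vlan fields:

#include <rte_byteorder.h>

/*
 * Sketch only: the VLAN item's tci field is big-endian (rte_be16_t),
 * while the filter fields are host-order, so the value is byte-swapped
 * on copy (a no-op on big-endian CPUs).
 */
static inline uint16_t
vlan_tci_to_filter(rte_be16_t item_tci)
{
	return rte_be_to_cpu_16(item_tci);
}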
Fixes: 6ced137607d0 ("net/enic: flow API for NICs with advanced filters enabled")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
---
doc/guides/nics/enic.rst | 10 ++++++++--
drivers/net/enic/enic_flow.c | 12 ++++++++----
2 files changed, 16 insertions(+), 6 deletions(-)
diff --git a/doc/guides/nics/enic.rst b/doc/guides/nics/enic.rst
index c1415dc0d..d4241ef45 100644
--- a/doc/guides/nics/enic.rst
+++ b/doc/guides/nics/enic.rst
@@ -247,7 +247,7 @@ Generic Flow API is supported. The baseline support is:
in the pattern.
- Attributes: ingress
- - Items: eth, ipv4, ipv6, udp, tcp, vxlan, inner eth, ipv4, ipv6, udp, tcp
+ - Items: eth, vlan, ipv4, ipv6, udp, tcp, vxlan, inner eth, vlan, ipv4, ipv6, udp, tcp
- Actions: queue and void
- Selectors: 'is', 'spec' and 'mask'. 'last' is not supported
- In total, up to 64 bytes of mask is allowed across all headers
@@ -255,7 +255,7 @@ Generic Flow API is supported. The baseline support is:
- **1300 and later series VICS with advanced filters enabled**
- Attributes: ingress
- - Items: eth, ipv4, ipv6, udp, tcp, vxlan, raw, inner eth, ipv4, ipv6, udp, tcp
+ - Items: eth, vlan, ipv4, ipv6, udp, tcp, vxlan, raw, inner eth, vlan, ipv4, ipv6, udp, tcp
- Actions: queue, mark, drop, flag, rss, passthru, and void
- Selectors: 'is', 'spec' and 'mask'. 'last' is not supported
- In total, up to 64 bytes of mask is allowed across all headers
@@ -266,6 +266,12 @@ Generic Flow API is supported. The baseline support is:
- Action: count
+The VIC performs packet matching after applying VLAN strip. If VLAN
+stripping is enabled, EtherType in the ETH item corresponds to the
+stripped VLAN header's EtherType. Stripping does not affect the VLAN
+item. TCI and EtherType in the VLAN item are matched against those in
+the (stripped) VLAN header whether stripping is enabled or disabled.
+
More features may be added in future firmware and new versions of the VIC.
Please refer to the release notes.
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index da43b31dc..b3172e7be 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -579,12 +579,16 @@ enic_copy_item_vlan_v2(struct copy_item_args *arg)
/* Outer TPID cannot be matched */
if (eth_mask->ether_type)
return ENOTSUP;
+ /*
+ * When packet matching, the VIC always compares vlan-stripped
+ * L2, regardless of vlan stripping settings. So, the inner type
+ * from vlan becomes the ether type of the eth header.
+ */
eth_mask->ether_type = mask->inner_type;
eth_val->ether_type = spec->inner_type;
-
- /* Outer header. Use the vlan mask/val fields */
- gp->mask_vlan = mask->tci;
- gp->val_vlan = spec->tci;
+ /* For TCI, use the vlan mask/val fields (little endian). */
+ gp->mask_vlan = rte_be_to_cpu_16(mask->tci);
+ gp->val_vlan = rte_be_to_cpu_16(spec->tci);
} else {
/* Inner header. Mask/Val start at *inner_ofst into L5 */
if ((*inner_ofst + sizeof(struct vlan_hdr)) >
--
2.16.2
* [dpdk-dev] [PATCH v2 13/13] net/enic: fix several issues with inner packet matching
2019-03-02 10:42 [dpdk-dev] [PATCH v2 00/13] net/enic: 19.05 updates Hyong Youb Kim
` (11 preceding siblings ...)
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 12/13] net/enic: fix an endian bug in VLAN match Hyong Youb Kim
@ 2019-03-02 10:42 ` Hyong Youb Kim
2019-03-04 16:58 ` Ferruh Yigit
2019-03-04 16:56 ` [dpdk-dev] [PATCH v2 00/13] net/enic: 19.05 updates Ferruh Yigit
13 siblings, 1 reply; 18+ messages in thread
From: Hyong Youb Kim @ 2019-03-02 10:42 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev, John Daley, Hyong Youb Kim
Inner packet matching is currently buggy in many cases.
1. Mishandling null spec ("match any").
The copy_item functions do nothing if spec is null. This is incorrect,
as all patterns should be appended to the L5 pattern buffer even for
null spec (treated as all zeros).
2. Accessing null spec causing segfault.
3. Not setting protocol fields.
The NIC filter API currently has no flags for "match inner IPv4, IPv6,
UDP, TCP, and so on". So, the driver needs to explicitly set EtherType
and IP protocol fields in the L5 pattern buffer to avoid false
positives (e.g. reporting IPv6 as IPv4).
Instead of adding more "if inner, do something differently" cases to
the existing copy_item functions, introduce separate functions for
inner packet patterns and address the above issues in those
functions. The changes to the previous outer-packet copy_item
functions are mechanical, due to reduced indentation.
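For illustration only (not part of the patch), a minimal sketch of a pattern
that exercises the inner-item path; the port id and mark id are arbitrary
values. The inner ETH/IPV4/UDP items carry no spec ("match any"), the case
that previously was skipped or could dereference a null pointer; they must
still be appended to the L5 key, with the inner EtherType and IP protocol
forced by the driver.

#include <rte_flow.h>

/* Sketch only: match any VXLAN-encapsulated inner IPv4/UDP packet. */
static int
validate_inner_udp_flow(uint16_t port_id, struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_UDP },
		{ .type = RTE_FLOW_ITEM_TYPE_VXLAN },
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },	/* inner, no spec */
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },	/* inner, no spec */
		{ .type = RTE_FLOW_ITEM_TYPE_UDP },	/* inner, no spec */
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_mark mark = { .id = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_validate(port_id, &attr, pattern, actions, err);
}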
Fixes: 6ced137607d0 ("net/enic: flow API for NICs with advanced filters enabled")
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
---
doc/guides/rel_notes/release_19_05.rst | 2 +
drivers/net/enic/enic_flow.c | 371 ++++++++++++++++++++-------------
2 files changed, 226 insertions(+), 147 deletions(-)
diff --git a/doc/guides/rel_notes/release_19_05.rst b/doc/guides/rel_notes/release_19_05.rst
index e5f5c0e25..2fc6ad4a9 100644
--- a/doc/guides/rel_notes/release_19_05.rst
+++ b/doc/guides/rel_notes/release_19_05.rst
@@ -79,6 +79,8 @@ New Features
* **Updated the enic driver.**
+ * Fixed several flow (director) bugs related to MARK, SCTP, VLAN, VXLAN, and
+ inner packet matching.
* Added limited support for RAW.
* Added limited support for RSS.
* Added limited support for PASSTHRU.
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index b3172e7be..5924a01e3 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -30,11 +30,15 @@
* filter: Partially filled in NIC filter structure.
* inner_ofst: If zero, this is an outer header. If non-zero, this is
* the offset into L5 where the header begins.
+ * l2_proto_off: offset to the EtherType field in the eth or vlan header.
+ * l3_proto_off: offset to the next protocol field in the IPv4/IPv6 header.
*/
struct copy_item_args {
const struct rte_flow_item *item;
struct filter_v2 *filter;
uint8_t *inner_ofst;
+ uint8_t l2_proto_off;
+ uint8_t l3_proto_off;
};
/* functions for copying items into enic filters */
@@ -50,6 +54,8 @@ struct enic_items {
* versions, it's invalid to start the stack above layer 3.
*/
const u8 valid_start_item;
+ /* Inner packet version of copy_item. */
+ enic_copy_item_fn *inner_copy_item;
};
/** Filtering capabilities for various NIC and firmware versions. */
@@ -86,6 +92,12 @@ static enic_copy_item_fn enic_copy_item_udp_v2;
static enic_copy_item_fn enic_copy_item_tcp_v2;
static enic_copy_item_fn enic_copy_item_sctp_v2;
static enic_copy_item_fn enic_copy_item_vxlan_v2;
+static enic_copy_item_fn enic_copy_item_inner_eth_v2;
+static enic_copy_item_fn enic_copy_item_inner_vlan_v2;
+static enic_copy_item_fn enic_copy_item_inner_ipv4_v2;
+static enic_copy_item_fn enic_copy_item_inner_ipv6_v2;
+static enic_copy_item_fn enic_copy_item_inner_udp_v2;
+static enic_copy_item_fn enic_copy_item_inner_tcp_v2;
static copy_action_fn enic_copy_action_v1;
static copy_action_fn enic_copy_action_v2;
@@ -100,6 +112,7 @@ static const struct enic_items enic_items_v1[] = {
.prev_items = (const enum rte_flow_item_type[]) {
RTE_FLOW_ITEM_TYPE_END,
},
+ .inner_copy_item = NULL,
},
[RTE_FLOW_ITEM_TYPE_UDP] = {
.copy_item = enic_copy_item_udp_v1,
@@ -108,6 +121,7 @@ static const struct enic_items enic_items_v1[] = {
RTE_FLOW_ITEM_TYPE_IPV4,
RTE_FLOW_ITEM_TYPE_END,
},
+ .inner_copy_item = NULL,
},
[RTE_FLOW_ITEM_TYPE_TCP] = {
.copy_item = enic_copy_item_tcp_v1,
@@ -116,6 +130,7 @@ static const struct enic_items enic_items_v1[] = {
RTE_FLOW_ITEM_TYPE_IPV4,
RTE_FLOW_ITEM_TYPE_END,
},
+ .inner_copy_item = NULL,
},
};
@@ -131,6 +146,7 @@ static const struct enic_items enic_items_v2[] = {
RTE_FLOW_ITEM_TYPE_UDP,
RTE_FLOW_ITEM_TYPE_END,
},
+ .inner_copy_item = NULL,
},
[RTE_FLOW_ITEM_TYPE_ETH] = {
.copy_item = enic_copy_item_eth_v2,
@@ -139,6 +155,7 @@ static const struct enic_items enic_items_v2[] = {
RTE_FLOW_ITEM_TYPE_VXLAN,
RTE_FLOW_ITEM_TYPE_END,
},
+ .inner_copy_item = enic_copy_item_inner_eth_v2,
},
[RTE_FLOW_ITEM_TYPE_VLAN] = {
.copy_item = enic_copy_item_vlan_v2,
@@ -147,6 +164,7 @@ static const struct enic_items enic_items_v2[] = {
RTE_FLOW_ITEM_TYPE_ETH,
RTE_FLOW_ITEM_TYPE_END,
},
+ .inner_copy_item = enic_copy_item_inner_vlan_v2,
},
[RTE_FLOW_ITEM_TYPE_IPV4] = {
.copy_item = enic_copy_item_ipv4_v2,
@@ -156,6 +174,7 @@ static const struct enic_items enic_items_v2[] = {
RTE_FLOW_ITEM_TYPE_VLAN,
RTE_FLOW_ITEM_TYPE_END,
},
+ .inner_copy_item = enic_copy_item_inner_ipv4_v2,
},
[RTE_FLOW_ITEM_TYPE_IPV6] = {
.copy_item = enic_copy_item_ipv6_v2,
@@ -165,6 +184,7 @@ static const struct enic_items enic_items_v2[] = {
RTE_FLOW_ITEM_TYPE_VLAN,
RTE_FLOW_ITEM_TYPE_END,
},
+ .inner_copy_item = enic_copy_item_inner_ipv6_v2,
},
[RTE_FLOW_ITEM_TYPE_UDP] = {
.copy_item = enic_copy_item_udp_v2,
@@ -174,6 +194,7 @@ static const struct enic_items enic_items_v2[] = {
RTE_FLOW_ITEM_TYPE_IPV6,
RTE_FLOW_ITEM_TYPE_END,
},
+ .inner_copy_item = enic_copy_item_inner_udp_v2,
},
[RTE_FLOW_ITEM_TYPE_TCP] = {
.copy_item = enic_copy_item_tcp_v2,
@@ -183,6 +204,7 @@ static const struct enic_items enic_items_v2[] = {
RTE_FLOW_ITEM_TYPE_IPV6,
RTE_FLOW_ITEM_TYPE_END,
},
+ .inner_copy_item = enic_copy_item_inner_tcp_v2,
},
[RTE_FLOW_ITEM_TYPE_SCTP] = {
.copy_item = enic_copy_item_sctp_v2,
@@ -192,6 +214,7 @@ static const struct enic_items enic_items_v2[] = {
RTE_FLOW_ITEM_TYPE_IPV6,
RTE_FLOW_ITEM_TYPE_END,
},
+ .inner_copy_item = NULL,
},
[RTE_FLOW_ITEM_TYPE_VXLAN] = {
.copy_item = enic_copy_item_vxlan_v2,
@@ -200,6 +223,7 @@ static const struct enic_items enic_items_v2[] = {
RTE_FLOW_ITEM_TYPE_UDP,
RTE_FLOW_ITEM_TYPE_END,
},
+ .inner_copy_item = NULL,
},
};
@@ -212,6 +236,7 @@ static const struct enic_items enic_items_v3[] = {
RTE_FLOW_ITEM_TYPE_UDP,
RTE_FLOW_ITEM_TYPE_END,
},
+ .inner_copy_item = NULL,
},
[RTE_FLOW_ITEM_TYPE_ETH] = {
.copy_item = enic_copy_item_eth_v2,
@@ -220,6 +245,7 @@ static const struct enic_items enic_items_v3[] = {
RTE_FLOW_ITEM_TYPE_VXLAN,
RTE_FLOW_ITEM_TYPE_END,
},
+ .inner_copy_item = enic_copy_item_inner_eth_v2,
},
[RTE_FLOW_ITEM_TYPE_VLAN] = {
.copy_item = enic_copy_item_vlan_v2,
@@ -228,6 +254,7 @@ static const struct enic_items enic_items_v3[] = {
RTE_FLOW_ITEM_TYPE_ETH,
RTE_FLOW_ITEM_TYPE_END,
},
+ .inner_copy_item = enic_copy_item_inner_vlan_v2,
},
[RTE_FLOW_ITEM_TYPE_IPV4] = {
.copy_item = enic_copy_item_ipv4_v2,
@@ -237,6 +264,7 @@ static const struct enic_items enic_items_v3[] = {
RTE_FLOW_ITEM_TYPE_VLAN,
RTE_FLOW_ITEM_TYPE_END,
},
+ .inner_copy_item = enic_copy_item_inner_ipv4_v2,
},
[RTE_FLOW_ITEM_TYPE_IPV6] = {
.copy_item = enic_copy_item_ipv6_v2,
@@ -246,6 +274,7 @@ static const struct enic_items enic_items_v3[] = {
RTE_FLOW_ITEM_TYPE_VLAN,
RTE_FLOW_ITEM_TYPE_END,
},
+ .inner_copy_item = enic_copy_item_inner_ipv6_v2,
},
[RTE_FLOW_ITEM_TYPE_UDP] = {
.copy_item = enic_copy_item_udp_v2,
@@ -255,6 +284,7 @@ static const struct enic_items enic_items_v3[] = {
RTE_FLOW_ITEM_TYPE_IPV6,
RTE_FLOW_ITEM_TYPE_END,
},
+ .inner_copy_item = enic_copy_item_inner_udp_v2,
},
[RTE_FLOW_ITEM_TYPE_TCP] = {
.copy_item = enic_copy_item_tcp_v2,
@@ -264,6 +294,7 @@ static const struct enic_items enic_items_v3[] = {
RTE_FLOW_ITEM_TYPE_IPV6,
RTE_FLOW_ITEM_TYPE_END,
},
+ .inner_copy_item = enic_copy_item_inner_tcp_v2,
},
[RTE_FLOW_ITEM_TYPE_SCTP] = {
.copy_item = enic_copy_item_sctp_v2,
@@ -273,6 +304,7 @@ static const struct enic_items enic_items_v3[] = {
RTE_FLOW_ITEM_TYPE_IPV6,
RTE_FLOW_ITEM_TYPE_END,
},
+ .inner_copy_item = NULL,
},
[RTE_FLOW_ITEM_TYPE_VXLAN] = {
.copy_item = enic_copy_item_vxlan_v2,
@@ -281,6 +313,7 @@ static const struct enic_items enic_items_v3[] = {
RTE_FLOW_ITEM_TYPE_UDP,
RTE_FLOW_ITEM_TYPE_END,
},
+ .inner_copy_item = NULL,
},
};
@@ -374,7 +407,6 @@ enic_copy_item_ipv4_v1(struct copy_item_args *arg)
{
const struct rte_flow_item *item = arg->item;
struct filter_v2 *enic_filter = arg->filter;
- uint8_t *inner_ofst = arg->inner_ofst;
const struct rte_flow_item_ipv4 *spec = item->spec;
const struct rte_flow_item_ipv4 *mask = item->mask;
struct filter_ipv4_5tuple *enic_5tup = &enic_filter->u.ipv4;
@@ -385,9 +417,6 @@ enic_copy_item_ipv4_v1(struct copy_item_args *arg)
FLOW_TRACE();
- if (*inner_ofst)
- return ENOTSUP;
-
if (!mask)
mask = &rte_flow_item_ipv4_mask;
@@ -416,7 +445,6 @@ enic_copy_item_udp_v1(struct copy_item_args *arg)
{
const struct rte_flow_item *item = arg->item;
struct filter_v2 *enic_filter = arg->filter;
- uint8_t *inner_ofst = arg->inner_ofst;
const struct rte_flow_item_udp *spec = item->spec;
const struct rte_flow_item_udp *mask = item->mask;
struct filter_ipv4_5tuple *enic_5tup = &enic_filter->u.ipv4;
@@ -427,9 +455,6 @@ enic_copy_item_udp_v1(struct copy_item_args *arg)
FLOW_TRACE();
- if (*inner_ofst)
- return ENOTSUP;
-
if (!mask)
mask = &rte_flow_item_udp_mask;
@@ -459,7 +484,6 @@ enic_copy_item_tcp_v1(struct copy_item_args *arg)
{
const struct rte_flow_item *item = arg->item;
struct filter_v2 *enic_filter = arg->filter;
- uint8_t *inner_ofst = arg->inner_ofst;
const struct rte_flow_item_tcp *spec = item->spec;
const struct rte_flow_item_tcp *mask = item->mask;
struct filter_ipv4_5tuple *enic_5tup = &enic_filter->u.ipv4;
@@ -470,9 +494,6 @@ enic_copy_item_tcp_v1(struct copy_item_args *arg)
FLOW_TRACE();
- if (*inner_ofst)
- return ENOTSUP;
-
if (!mask)
mask = &rte_flow_item_tcp_mask;
@@ -497,12 +518,150 @@ enic_copy_item_tcp_v1(struct copy_item_args *arg)
return 0;
}
+/*
+ * The common 'copy' function for all inner packet patterns. Patterns are
+ * first appended to the L5 pattern buffer. Then, since the NIC filter
+ * API has no special support for inner packet matching at the moment,
+ * we set EtherType and IP proto as necessary.
+ */
+static int
+copy_inner_common(struct filter_generic_1 *gp, uint8_t *inner_ofst,
+ const void *val, const void *mask, uint8_t val_size,
+ uint8_t proto_off, uint16_t proto_val, uint8_t proto_size)
+{
+ uint8_t *l5_mask, *l5_val;
+ uint8_t start_off;
+
+ /* No space left in the L5 pattern buffer. */
+ start_off = *inner_ofst;
+ if ((start_off + val_size) > FILTER_GENERIC_1_KEY_LEN)
+ return ENOTSUP;
+ l5_mask = gp->layer[FILTER_GENERIC_1_L5].mask;
+ l5_val = gp->layer[FILTER_GENERIC_1_L5].val;
+ /* Copy the pattern into the L5 buffer. */
+ if (val) {
+ memcpy(l5_mask + start_off, mask, val_size);
+ memcpy(l5_val + start_off, val, val_size);
+ }
+ /* Set the protocol field in the previous header. */
+ if (proto_off) {
+ void *m, *v;
+
+ m = l5_mask + proto_off;
+ v = l5_val + proto_off;
+ if (proto_size == 1) {
+ *(uint8_t *)m = 0xff;
+ *(uint8_t *)v = (uint8_t)proto_val;
+ } else if (proto_size == 2) {
+ *(uint16_t *)m = 0xffff;
+ *(uint16_t *)v = proto_val;
+ }
+ }
+ /* All inner headers land in L5 buffer even if their spec is null. */
+ *inner_ofst += val_size;
+ return 0;
+}
+
+static int
+enic_copy_item_inner_eth_v2(struct copy_item_args *arg)
+{
+ const void *mask = arg->item->mask;
+ uint8_t *off = arg->inner_ofst;
+
+ FLOW_TRACE();
+ if (!mask)
+ mask = &rte_flow_item_eth_mask;
+ arg->l2_proto_off = *off + offsetof(struct ether_hdr, ether_type);
+ return copy_inner_common(&arg->filter->u.generic_1, off,
+ arg->item->spec, mask, sizeof(struct ether_hdr),
+ 0 /* no previous protocol */, 0, 0);
+}
+
+static int
+enic_copy_item_inner_vlan_v2(struct copy_item_args *arg)
+{
+ const void *mask = arg->item->mask;
+ uint8_t *off = arg->inner_ofst;
+ uint8_t eth_type_off;
+
+ FLOW_TRACE();
+ if (!mask)
+ mask = &rte_flow_item_vlan_mask;
+ /* Append vlan header to L5 and set ether type = TPID */
+ eth_type_off = arg->l2_proto_off;
+ arg->l2_proto_off = *off + offsetof(struct vlan_hdr, eth_proto);
+ return copy_inner_common(&arg->filter->u.generic_1, off,
+ arg->item->spec, mask, sizeof(struct vlan_hdr),
+ eth_type_off, rte_cpu_to_be_16(ETHER_TYPE_VLAN), 2);
+}
+
+static int
+enic_copy_item_inner_ipv4_v2(struct copy_item_args *arg)
+{
+ const void *mask = arg->item->mask;
+ uint8_t *off = arg->inner_ofst;
+
+ FLOW_TRACE();
+ if (!mask)
+ mask = &rte_flow_item_ipv4_mask;
+ /* Append ipv4 header to L5 and set ether type = ipv4 */
+ arg->l3_proto_off = *off + offsetof(struct ipv4_hdr, next_proto_id);
+ return copy_inner_common(&arg->filter->u.generic_1, off,
+ arg->item->spec, mask, sizeof(struct ipv4_hdr),
+ arg->l2_proto_off, rte_cpu_to_be_16(ETHER_TYPE_IPv4), 2);
+}
+
+static int
+enic_copy_item_inner_ipv6_v2(struct copy_item_args *arg)
+{
+ const void *mask = arg->item->mask;
+ uint8_t *off = arg->inner_ofst;
+
+ FLOW_TRACE();
+ if (!mask)
+ mask = &rte_flow_item_ipv6_mask;
+ /* Append ipv6 header to L5 and set ether type = ipv6 */
+ arg->l3_proto_off = *off + offsetof(struct ipv6_hdr, proto);
+ return copy_inner_common(&arg->filter->u.generic_1, off,
+ arg->item->spec, mask, sizeof(struct ipv6_hdr),
+ arg->l2_proto_off, rte_cpu_to_be_16(ETHER_TYPE_IPv6), 2);
+}
+
+static int
+enic_copy_item_inner_udp_v2(struct copy_item_args *arg)
+{
+ const void *mask = arg->item->mask;
+ uint8_t *off = arg->inner_ofst;
+
+ FLOW_TRACE();
+ if (!mask)
+ mask = &rte_flow_item_udp_mask;
+ /* Append udp header to L5 and set ip proto = udp */
+ return copy_inner_common(&arg->filter->u.generic_1, off,
+ arg->item->spec, mask, sizeof(struct udp_hdr),
+ arg->l3_proto_off, IPPROTO_UDP, 1);
+}
+
+static int
+enic_copy_item_inner_tcp_v2(struct copy_item_args *arg)
+{
+ const void *mask = arg->item->mask;
+ uint8_t *off = arg->inner_ofst;
+
+ FLOW_TRACE();
+ if (!mask)
+ mask = &rte_flow_item_tcp_mask;
+ /* Append tcp header to L5 and set ip proto = tcp */
+ return copy_inner_common(&arg->filter->u.generic_1, off,
+ arg->item->spec, mask, sizeof(struct tcp_hdr),
+ arg->l3_proto_off, IPPROTO_TCP, 1);
+}
+
static int
enic_copy_item_eth_v2(struct copy_item_args *arg)
{
const struct rte_flow_item *item = arg->item;
struct filter_v2 *enic_filter = arg->filter;
- uint8_t *inner_ofst = arg->inner_ofst;
struct ether_hdr enic_spec;
struct ether_hdr enic_mask;
const struct rte_flow_item_eth *spec = item->spec;
@@ -530,24 +689,11 @@ enic_copy_item_eth_v2(struct copy_item_args *arg)
enic_spec.ether_type = spec->type;
enic_mask.ether_type = mask->type;
- if (*inner_ofst == 0) {
- /* outer header */
- memcpy(gp->layer[FILTER_GENERIC_1_L2].mask, &enic_mask,
- sizeof(struct ether_hdr));
- memcpy(gp->layer[FILTER_GENERIC_1_L2].val, &enic_spec,
- sizeof(struct ether_hdr));
- } else {
- /* inner header */
- if ((*inner_ofst + sizeof(struct ether_hdr)) >
- FILTER_GENERIC_1_KEY_LEN)
- return ENOTSUP;
- /* Offset into L5 where inner Ethernet header goes */
- memcpy(&gp->layer[FILTER_GENERIC_1_L5].mask[*inner_ofst],
- &enic_mask, sizeof(struct ether_hdr));
- memcpy(&gp->layer[FILTER_GENERIC_1_L5].val[*inner_ofst],
- &enic_spec, sizeof(struct ether_hdr));
- *inner_ofst += sizeof(struct ether_hdr);
- }
+ /* outer header */
+ memcpy(gp->layer[FILTER_GENERIC_1_L2].mask, &enic_mask,
+ sizeof(struct ether_hdr));
+ memcpy(gp->layer[FILTER_GENERIC_1_L2].val, &enic_spec,
+ sizeof(struct ether_hdr));
return 0;
}
@@ -556,10 +702,11 @@ enic_copy_item_vlan_v2(struct copy_item_args *arg)
{
const struct rte_flow_item *item = arg->item;
struct filter_v2 *enic_filter = arg->filter;
- uint8_t *inner_ofst = arg->inner_ofst;
const struct rte_flow_item_vlan *spec = item->spec;
const struct rte_flow_item_vlan *mask = item->mask;
struct filter_generic_1 *gp = &enic_filter->u.generic_1;
+ struct ether_hdr *eth_mask;
+ struct ether_hdr *eth_val;
FLOW_TRACE();
@@ -570,36 +717,21 @@ enic_copy_item_vlan_v2(struct copy_item_args *arg)
if (!mask)
mask = &rte_flow_item_vlan_mask;
- if (*inner_ofst == 0) {
- struct ether_hdr *eth_mask =
- (void *)gp->layer[FILTER_GENERIC_1_L2].mask;
- struct ether_hdr *eth_val =
- (void *)gp->layer[FILTER_GENERIC_1_L2].val;
-
- /* Outer TPID cannot be matched */
- if (eth_mask->ether_type)
- return ENOTSUP;
- /*
- * When packet matching, the VIC always compares vlan-stripped
- * L2, regardless of vlan stripping settings. So, the inner type
- * from vlan becomes the ether type of the eth header.
- */
- eth_mask->ether_type = mask->inner_type;
- eth_val->ether_type = spec->inner_type;
- /* For TCI, use the vlan mask/val fields (little endian). */
- gp->mask_vlan = rte_be_to_cpu_16(mask->tci);
- gp->val_vlan = rte_be_to_cpu_16(spec->tci);
- } else {
- /* Inner header. Mask/Val start at *inner_ofst into L5 */
- if ((*inner_ofst + sizeof(struct vlan_hdr)) >
- FILTER_GENERIC_1_KEY_LEN)
- return ENOTSUP;
- memcpy(&gp->layer[FILTER_GENERIC_1_L5].mask[*inner_ofst],
- mask, sizeof(struct vlan_hdr));
- memcpy(&gp->layer[FILTER_GENERIC_1_L5].val[*inner_ofst],
- spec, sizeof(struct vlan_hdr));
- *inner_ofst += sizeof(struct vlan_hdr);
- }
+ eth_mask = (void *)gp->layer[FILTER_GENERIC_1_L2].mask;
+ eth_val = (void *)gp->layer[FILTER_GENERIC_1_L2].val;
+ /* Outer TPID cannot be matched */
+ if (eth_mask->ether_type)
+ return ENOTSUP;
+ /*
+ * When packet matching, the VIC always compares vlan-stripped
+ * L2, regardless of vlan stripping settings. So, the inner type
+ * from vlan becomes the ether type of the eth header.
+ */
+ eth_mask->ether_type = mask->inner_type;
+ eth_val->ether_type = spec->inner_type;
+ /* For TCI, use the vlan mask/val fields (little endian). */
+ gp->mask_vlan = rte_be_to_cpu_16(mask->tci);
+ gp->val_vlan = rte_be_to_cpu_16(spec->tci);
return 0;
}
@@ -608,40 +740,27 @@ enic_copy_item_ipv4_v2(struct copy_item_args *arg)
{
const struct rte_flow_item *item = arg->item;
struct filter_v2 *enic_filter = arg->filter;
- uint8_t *inner_ofst = arg->inner_ofst;
const struct rte_flow_item_ipv4 *spec = item->spec;
const struct rte_flow_item_ipv4 *mask = item->mask;
struct filter_generic_1 *gp = &enic_filter->u.generic_1;
FLOW_TRACE();
- if (*inner_ofst == 0) {
- /* Match IPv4 */
- gp->mask_flags |= FILTER_GENERIC_1_IPV4;
- gp->val_flags |= FILTER_GENERIC_1_IPV4;
+ /* Match IPv4 */
+ gp->mask_flags |= FILTER_GENERIC_1_IPV4;
+ gp->val_flags |= FILTER_GENERIC_1_IPV4;
- /* Match all if no spec */
- if (!spec)
- return 0;
+ /* Match all if no spec */
+ if (!spec)
+ return 0;
- if (!mask)
- mask = &rte_flow_item_ipv4_mask;
+ if (!mask)
+ mask = &rte_flow_item_ipv4_mask;
- memcpy(gp->layer[FILTER_GENERIC_1_L3].mask, &mask->hdr,
- sizeof(struct ipv4_hdr));
- memcpy(gp->layer[FILTER_GENERIC_1_L3].val, &spec->hdr,
- sizeof(struct ipv4_hdr));
- } else {
- /* Inner IPv4 header. Mask/Val start at *inner_ofst into L5 */
- if ((*inner_ofst + sizeof(struct ipv4_hdr)) >
- FILTER_GENERIC_1_KEY_LEN)
- return ENOTSUP;
- memcpy(&gp->layer[FILTER_GENERIC_1_L5].mask[*inner_ofst],
- mask, sizeof(struct ipv4_hdr));
- memcpy(&gp->layer[FILTER_GENERIC_1_L5].val[*inner_ofst],
- spec, sizeof(struct ipv4_hdr));
- *inner_ofst += sizeof(struct ipv4_hdr);
- }
+ memcpy(gp->layer[FILTER_GENERIC_1_L3].mask, &mask->hdr,
+ sizeof(struct ipv4_hdr));
+ memcpy(gp->layer[FILTER_GENERIC_1_L3].val, &spec->hdr,
+ sizeof(struct ipv4_hdr));
return 0;
}
@@ -650,7 +769,6 @@ enic_copy_item_ipv6_v2(struct copy_item_args *arg)
{
const struct rte_flow_item *item = arg->item;
struct filter_v2 *enic_filter = arg->filter;
- uint8_t *inner_ofst = arg->inner_ofst;
const struct rte_flow_item_ipv6 *spec = item->spec;
const struct rte_flow_item_ipv6 *mask = item->mask;
struct filter_generic_1 *gp = &enic_filter->u.generic_1;
@@ -668,22 +786,10 @@ enic_copy_item_ipv6_v2(struct copy_item_args *arg)
if (!mask)
mask = &rte_flow_item_ipv6_mask;
- if (*inner_ofst == 0) {
- memcpy(gp->layer[FILTER_GENERIC_1_L3].mask, &mask->hdr,
- sizeof(struct ipv6_hdr));
- memcpy(gp->layer[FILTER_GENERIC_1_L3].val, &spec->hdr,
- sizeof(struct ipv6_hdr));
- } else {
- /* Inner IPv6 header. Mask/Val start at *inner_ofst into L5 */
- if ((*inner_ofst + sizeof(struct ipv6_hdr)) >
- FILTER_GENERIC_1_KEY_LEN)
- return ENOTSUP;
- memcpy(&gp->layer[FILTER_GENERIC_1_L5].mask[*inner_ofst],
- mask, sizeof(struct ipv6_hdr));
- memcpy(&gp->layer[FILTER_GENERIC_1_L5].val[*inner_ofst],
- spec, sizeof(struct ipv6_hdr));
- *inner_ofst += sizeof(struct ipv6_hdr);
- }
+ memcpy(gp->layer[FILTER_GENERIC_1_L3].mask, &mask->hdr,
+ sizeof(struct ipv6_hdr));
+ memcpy(gp->layer[FILTER_GENERIC_1_L3].val, &spec->hdr,
+ sizeof(struct ipv6_hdr));
return 0;
}
@@ -692,7 +798,6 @@ enic_copy_item_udp_v2(struct copy_item_args *arg)
{
const struct rte_flow_item *item = arg->item;
struct filter_v2 *enic_filter = arg->filter;
- uint8_t *inner_ofst = arg->inner_ofst;
const struct rte_flow_item_udp *spec = item->spec;
const struct rte_flow_item_udp *mask = item->mask;
struct filter_generic_1 *gp = &enic_filter->u.generic_1;
@@ -710,22 +815,10 @@ enic_copy_item_udp_v2(struct copy_item_args *arg)
if (!mask)
mask = &rte_flow_item_udp_mask;
- if (*inner_ofst == 0) {
- memcpy(gp->layer[FILTER_GENERIC_1_L4].mask, &mask->hdr,
- sizeof(struct udp_hdr));
- memcpy(gp->layer[FILTER_GENERIC_1_L4].val, &spec->hdr,
- sizeof(struct udp_hdr));
- } else {
- /* Inner IPv6 header. Mask/Val start at *inner_ofst into L5 */
- if ((*inner_ofst + sizeof(struct udp_hdr)) >
- FILTER_GENERIC_1_KEY_LEN)
- return ENOTSUP;
- memcpy(&gp->layer[FILTER_GENERIC_1_L5].mask[*inner_ofst],
- mask, sizeof(struct udp_hdr));
- memcpy(&gp->layer[FILTER_GENERIC_1_L5].val[*inner_ofst],
- spec, sizeof(struct udp_hdr));
- *inner_ofst += sizeof(struct udp_hdr);
- }
+ memcpy(gp->layer[FILTER_GENERIC_1_L4].mask, &mask->hdr,
+ sizeof(struct udp_hdr));
+ memcpy(gp->layer[FILTER_GENERIC_1_L4].val, &spec->hdr,
+ sizeof(struct udp_hdr));
return 0;
}
@@ -734,7 +827,6 @@ enic_copy_item_tcp_v2(struct copy_item_args *arg)
{
const struct rte_flow_item *item = arg->item;
struct filter_v2 *enic_filter = arg->filter;
- uint8_t *inner_ofst = arg->inner_ofst;
const struct rte_flow_item_tcp *spec = item->spec;
const struct rte_flow_item_tcp *mask = item->mask;
struct filter_generic_1 *gp = &enic_filter->u.generic_1;
@@ -752,22 +844,10 @@ enic_copy_item_tcp_v2(struct copy_item_args *arg)
if (!mask)
return ENOTSUP;
- if (*inner_ofst == 0) {
- memcpy(gp->layer[FILTER_GENERIC_1_L4].mask, &mask->hdr,
- sizeof(struct tcp_hdr));
- memcpy(gp->layer[FILTER_GENERIC_1_L4].val, &spec->hdr,
- sizeof(struct tcp_hdr));
- } else {
- /* Inner IPv6 header. Mask/Val start at *inner_ofst into L5 */
- if ((*inner_ofst + sizeof(struct tcp_hdr)) >
- FILTER_GENERIC_1_KEY_LEN)
- return ENOTSUP;
- memcpy(&gp->layer[FILTER_GENERIC_1_L5].mask[*inner_ofst],
- mask, sizeof(struct tcp_hdr));
- memcpy(&gp->layer[FILTER_GENERIC_1_L5].val[*inner_ofst],
- spec, sizeof(struct tcp_hdr));
- *inner_ofst += sizeof(struct tcp_hdr);
- }
+ memcpy(gp->layer[FILTER_GENERIC_1_L4].mask, &mask->hdr,
+ sizeof(struct tcp_hdr));
+ memcpy(gp->layer[FILTER_GENERIC_1_L4].val, &spec->hdr,
+ sizeof(struct tcp_hdr));
return 0;
}
@@ -776,7 +856,6 @@ enic_copy_item_sctp_v2(struct copy_item_args *arg)
{
const struct rte_flow_item *item = arg->item;
struct filter_v2 *enic_filter = arg->filter;
- uint8_t *inner_ofst = arg->inner_ofst;
const struct rte_flow_item_sctp *spec = item->spec;
const struct rte_flow_item_sctp *mask = item->mask;
struct filter_generic_1 *gp = &enic_filter->u.generic_1;
@@ -785,9 +864,6 @@ enic_copy_item_sctp_v2(struct copy_item_args *arg)
FLOW_TRACE();
- if (*inner_ofst)
- return ENOTSUP;
-
/*
* The NIC filter API has no flags for "match sctp", so explicitly set
* the protocol number in the IP pattern.
@@ -838,9 +914,6 @@ enic_copy_item_vxlan_v2(struct copy_item_args *arg)
FLOW_TRACE();
- if (*inner_ofst)
- return EINVAL;
-
/*
* The NIC filter API has no flags for "match vxlan". Set UDP port to
* avoid false positives.
@@ -1000,6 +1073,7 @@ enic_copy_filter(const struct rte_flow_item pattern[],
enum rte_flow_item_type prev_item;
const struct enic_items *item_info;
struct copy_item_args args;
+ enic_copy_item_fn *copy_fn;
u8 is_first_item = 1;
FLOW_TRACE();
@@ -1017,7 +1091,8 @@ enic_copy_filter(const struct rte_flow_item pattern[],
item_info = &cap->item_info[item->type];
if (item->type > cap->max_item_type ||
- item_info->copy_item == NULL) {
+ item_info->copy_item == NULL ||
+ (inner_ofst > 0 && item_info->inner_copy_item == NULL)) {
rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ITEM,
NULL, "Unsupported item.");
@@ -1029,7 +1104,9 @@ enic_copy_filter(const struct rte_flow_item pattern[],
goto stacking_error;
args.item = item;
- ret = item_info->copy_item(&args);
+ copy_fn = inner_ofst > 0 ? item_info->inner_copy_item :
+ item_info->copy_item;
+ ret = copy_fn(&args);
if (ret)
goto item_not_supported;
prev_item = item->type;
--
2.16.2
* Re: [dpdk-dev] [PATCH v2 13/13] net/enic: fix several issues with inner packet matching
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 13/13] net/enic: fix several issues with inner packet matching Hyong Youb Kim
@ 2019-03-04 16:58 ` Ferruh Yigit
2019-04-10 17:06 ` Kevin Traynor
0 siblings, 1 reply; 18+ messages in thread
From: Ferruh Yigit @ 2019-03-04 16:58 UTC (permalink / raw)
To: Hyong Youb Kim; +Cc: dev, John Daley
On 3/2/2019 10:42 AM, Hyong Youb Kim wrote:
> Inner packet matching is currently buggy in many cases.
>
> 1. Mishandling null spec ("match any").
> The copy_item functions do nothing if spec is null. This is incorrect,
> as all patterns should be appended to the L5 pattern buffer even for
> null spec (treated as all zeros).
>
> 2. Accessing null spec causing segfault.
>
> 3. Not setting protocol fields.
> The NIC filter API currently has no flags for "match inner IPv4, IPv6,
> UDP, TCP, and so on". So, the driver needs to explicitly set EtherType
> and IP protocol fields in the L5 pattern buffer to avoid false
> positives (e.g. reporting IPv6 as IPv4).
>
> Instead of adding more "if inner, do something differently" cases to
> the existing copy_item functions, introduce separate functions for
> inner packet patterns and address the above issues in those
> functions. The changes to the previous outer-packet copy_item
> functions are mechanical, due to reduced indentation.
>
> Fixes: 6ced137607d0 ("net/enic: flow API for NICs with advanced filters enabled")
>
> Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
<...>
I have added "Cc: stable@dpdk.org" tag while merging. If the tag explicitly left
out to prevent backport please let me know to remove the tag back.
* Re: [dpdk-dev] [PATCH v2 13/13] net/enic: fix several issues with inner packet matching
2019-03-04 16:58 ` Ferruh Yigit
@ 2019-04-10 17:06 ` Kevin Traynor
2019-04-10 17:06 ` Kevin Traynor
0 siblings, 1 reply; 18+ messages in thread
From: Kevin Traynor @ 2019-04-10 17:06 UTC (permalink / raw)
To: Ferruh Yigit, Hyong Youb Kim; +Cc: dev, John Daley, stable, Yongseok Koh
On 04/03/2019 16:58, Ferruh Yigit wrote:
> On 3/2/2019 10:42 AM, Hyong Youb Kim wrote:
>> Inner packet matching is currently buggy in many cases.
>>
>> 1. Mishandling null spec ("match any").
>> The copy_item functions do nothing if spec is null. This is incorrect,
>> as all patterns should be appended to the L5 pattern buffer even for
>> null spec (treated as all zeros).
>>
>> 2. Accessing null spec causing segfault.
>>
>> 3. Not setting protocol fields.
>> The NIC filter API currently has no flags for "match inner IPv4, IPv6,
>> UDP, TCP, and so on". So, the driver needs to explicitly set EtherType
>> and IP protocol fields in the L5 pattern buffer to avoid false
>> positives (e.g. reporting IPv6 as IPv4).
>>
>> Instead of adding more "if inner, do something differently" cases to
>> the existing copy_item functions, introduce separate functions for
>> inner packet patterns and address the above issues in those
>> functions. The changes to the previous outer-packet copy_item
>> functions are mechanical, due to reduced indentation.
>>
>> Fixes: 6ced137607d0 ("net/enic: flow API for NICs with advanced filters enabled")
>>
>> Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
>
> <...>
>
> I have added "Cc: stable@dpdk.org" tag while merging. If the tag explicitly left
> out to prevent backport please let me know to remove the tag back.
>
Hi Hyong,
I can't apply this patch with confidence on the 18.11 LTS branch due to
the number of changes and conflicts. The other relevant patches in the
series for 18.11 were ok.
If you want the fixes from this patch on stable branches, can you please
send a backport for them
(http://doc.dpdk.org/guides/contributing/patches.html?highlight=stable#backporting-patches-for-stable-releases).
Otherwise, please let us know that you don't want them on stable branches.
thanks,
Kevin.
* Re: [dpdk-dev] [PATCH v2 00/13] net/enic: 19.05 updates
2019-03-02 10:42 [dpdk-dev] [PATCH v2 00/13] net/enic: 19.05 updates Hyong Youb Kim
` (12 preceding siblings ...)
2019-03-02 10:42 ` [dpdk-dev] [PATCH v2 13/13] net/enic: fix several issues with inner packet matching Hyong Youb Kim
@ 2019-03-04 16:56 ` Ferruh Yigit
13 siblings, 0 replies; 18+ messages in thread
From: Ferruh Yigit @ 2019-03-04 16:56 UTC (permalink / raw)
To: Hyong Youb Kim; +Cc: dev, John Daley
On 3/2/2019 10:42 AM, Hyong Youb Kim wrote:
> This patch series fixes bugs in enic's implementation of flow API and
> adds very limited support for RAW, RSS, and PASSTHRU. Limited RSS and
> PASSTHRU are intended to support partial offloads in OVS-DPDK and
> VPP. These apps use MARK + default RSS and PASSTHRU + MARK to "mark
> packet and then receive normally". Cisco VIC can support these, even
> though general RSS and PASSTHRU are not possible.
>
> Intentionally removed Cc: stable from the last patch ("net/enic: fix
> several issues with inner packet matching") as it depends on a non-fix
> patch ("net/enic: move arguments into struct"). I will submit backport
> request for these separately, after rc1.
>
> ---
> v2:
> * Merge doc changes with corresponding code changes.
>
> Hyong Youb Kim (13):
> net/enic: remove unused code
> net/enic: fix flow director SCTP matching
> net/enic: fix SCTP match for flow API
> net/enic: allow flow mark ID 0
> net/enic: check for unsupported flow item types
> net/enic: enable limited RSS flow action
> net/enic: enable limited PASSTHRU flow action
> net/enic: move arguments into struct
> net/enic: enable limited support for RAW flow item
> net/enic: reset VXLAN port regardless of overlay offload
> net/enic: fix a couple issues with VXLAN match
> net/enic: fix an endian bug in VLAN match
> net/enic: fix several issues with inner packet matching
Series applied to dpdk-next-net/master, thanks.