* [PATCH 00/25] add the extended rte_flow offload support of nfp PMD
@ 2022-10-18 3:22 Chaoyong He
2022-10-18 3:22 ` [PATCH 01/25] net/nfp: add the offload support of IPv4 VXLAN item Chaoyong He
0 siblings, 27 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
This is the third patch series adding rte_flow offload support to the
nfp PMD. It includes:
Add the offload support of VXLAN decap/encap
Add the offload support of GENEVE decap/encap
Add the offload support of NVGRE decap/encap
Depends-on: series-25268 ("add the basic rte_flow offload support of nfp PMD")
Chaoyong He (25):
net/nfp: add the offload support of IPv4 VXLAN item
net/nfp: add the offload support of IPv6 VXLAN item
net/nfp: prepare for the encap action of IPv4 tunnel
net/nfp: prepare for the encap action of IPv6 tunnel
net/nfp: add the offload support of IPv4 VXLAN encap action
net/nfp: add the offload support of IPv6 VXLAN encap action
net/nfp: prepare for the decap action of IPv4 UDP tunnel
net/nfp: prepare for the decap action of IPv6 UDP tunnel
net/nfp: add the offload support of IPv4 VXLAN decap action
net/nfp: add the offload support of IPv6 VXLAN decap action
net/nfp: add the offload support of IPv4 GENEVE encap action
net/nfp: add the offload support of IPv6 GENEVE encap action
net/nfp: add the offload support of IPv4 GENEVE item
net/nfp: add the offload support of IPv6 GENEVE item
net/nfp: add the offload support of IPv4 GENEVE decap action
net/nfp: add the offload support of IPv6 GENEVE decap action
net/nfp: add the offload support of IPv4 NVGRE encap action
net/nfp: add the offload support of IPv6 NVGRE encap action
net/nfp: prepare for the decap action of IPv4 GRE tunnel
net/nfp: prepare for the decap action of IPv6 GRE tunnel
net/nfp: add the offload support of IPv4 NVGRE item
net/nfp: add the offload support of IPv6 NVGRE item
net/nfp: add the offload support of IPv4 NVGRE decap action
net/nfp: add the offload support of IPv6 NVGRE decap action
net/nfp: add the support of new tunnel solution
doc/guides/nics/features/nfp.ini | 10 +
doc/guides/rel_notes/release_22_11.rst | 3 +
drivers/net/nfp/flower/nfp_flower.c | 14 +
drivers/net/nfp/flower/nfp_flower.h | 24 +
drivers/net/nfp/flower/nfp_flower_cmsg.c | 222 ++++
drivers/net/nfp/flower/nfp_flower_cmsg.h | 415 +++++++
drivers/net/nfp/nfp_flow.c | 2003 +++++++++++++++++++++++++++++-
drivers/net/nfp/nfp_flow.h | 56 +
8 files changed, 2678 insertions(+), 69 deletions(-)
--
1.8.3.1
^ permalink raw reply [flat|nested] 88+ messages in thread
* [PATCH 01/25] net/nfp: add the offload support of IPv4 VXLAN item
2022-10-18 3:22 [PATCH 00/25] add the extended rte_flow offload support of nfp PMD Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 02/25] net/nfp: add the offload support of IPv6 " Chaoyong He
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the corresponding data structures and logic to support
offload of the IPv4 VXLAN item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 1 +
drivers/net/nfp/flower/nfp_flower_cmsg.h | 35 +++++
drivers/net/nfp/nfp_flow.c | 243 ++++++++++++++++++++++++++-----
3 files changed, 246 insertions(+), 33 deletions(-)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index 1ad6c2c..40b9a4d 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -36,6 +36,7 @@ sctp = Y
tcp = Y
udp = Y
vlan = Y
+vxlan = Y
[rte_flow actions]
count = Y
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index 6bf8ff7..08e2873 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -324,6 +324,41 @@ struct nfp_flower_ipv6 {
uint8_t ipv6_dst[16];
};
+struct nfp_flower_tun_ipv4 {
+ rte_be32_t src;
+ rte_be32_t dst;
+};
+
+struct nfp_flower_tun_ip_ext {
+ uint8_t tos;
+ uint8_t ttl;
+};
+
+/*
+ * Flow Frame IPv4 UDP TUNNEL --> Tunnel details (5W/20B)
+ * -----------------------------------------------------------------
+ * 3 2 1
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv4_addr_src |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv4_addr_dst |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved | tos | ttl |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | VNI | Reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ */
+struct nfp_flower_ipv4_udp_tun {
+ struct nfp_flower_tun_ipv4 ipv4;
+ rte_be16_t reserved1;
+ struct nfp_flower_tun_ip_ext ip_ext;
+ rte_be32_t reserved2;
+ rte_be32_t tun_id;
+};
+
struct nfp_fl_act_head {
uint8_t jump_id;
uint8_t len_lw;
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 823b020..5f6f800 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -38,7 +38,8 @@ struct nfp_flow_item_proc {
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask);
+ bool is_mask,
+ bool is_outer_layer);
/* List of possible subsequent items. */
const enum rte_flow_item_type *const next_item;
};
@@ -503,6 +504,7 @@ struct nfp_mask_id_entry {
struct nfp_fl_key_ls *key_ls)
{
struct rte_eth_dev *ethdev;
+ bool outer_ip4_flag = false;
const struct rte_flow_item *item;
struct nfp_flower_representor *representor;
const struct rte_flow_item_port_id *port_id;
@@ -538,6 +540,8 @@ struct nfp_mask_id_entry {
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_IPV4 detected");
key_ls->key_layer |= NFP_FLOWER_LAYER_IPV4;
key_ls->key_size += sizeof(struct nfp_flower_ipv4);
+ if (!outer_ip4_flag)
+ outer_ip4_flag = true;
break;
case RTE_FLOW_ITEM_TYPE_IPV6:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_IPV6 detected");
@@ -559,6 +563,21 @@ struct nfp_mask_id_entry {
key_ls->key_layer |= NFP_FLOWER_LAYER_TP;
key_ls->key_size += sizeof(struct nfp_flower_tp_ports);
break;
+ case RTE_FLOW_ITEM_TYPE_VXLAN:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_VXLAN detected");
+ /* Clear IPv4 bits */
+ key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV4;
+ key_ls->tun_type = NFP_FL_TUN_VXLAN;
+ key_ls->key_layer |= NFP_FLOWER_LAYER_VXLAN;
+ if (outer_ip4_flag) {
+ key_ls->key_size += sizeof(struct nfp_flower_ipv4_udp_tun);
+ /*
+ * The outer L3 layer information is
+ * in `struct nfp_flower_ipv4_udp_tun`
+ */
+ key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
+ }
+ break;
default:
PMD_DRV_LOG(ERR, "Item type %d not supported.", item->type);
return -ENOTSUP;
@@ -737,12 +756,25 @@ struct nfp_mask_id_entry {
return ret;
}
+static bool
+nfp_flow_is_tunnel(struct rte_flow *nfp_flow)
+{
+ struct nfp_flower_meta_tci *meta_tci;
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_VXLAN)
+ return true;
+
+ return false;
+}
+
static int
nfp_flow_merge_eth(__rte_unused struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask)
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
{
struct nfp_flower_mac_mpls *eth;
const struct rte_flow_item_eth *spec;
@@ -778,7 +810,8 @@ struct nfp_mask_id_entry {
__rte_unused char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask)
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
{
struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item_vlan *spec;
@@ -807,41 +840,58 @@ struct nfp_mask_id_entry {
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask)
+ bool is_mask,
+ bool is_outer_layer)
{
struct nfp_flower_ipv4 *ipv4;
const struct rte_ipv4_hdr *hdr;
struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item_ipv4 *spec;
const struct rte_flow_item_ipv4 *mask;
+ struct nfp_flower_ipv4_udp_tun *ipv4_udp_tun;
spec = item->spec;
mask = item->mask ? item->mask : proc->mask_default;
meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
- if (spec == NULL) {
- PMD_DRV_LOG(DEBUG, "nfp flow merge ipv4: no item->spec!");
- goto ipv4_end;
- }
+ if (is_outer_layer && nfp_flow_is_tunnel(nfp_flow)) {
+ if (spec == NULL) {
+ PMD_DRV_LOG(DEBUG, "nfp flow merge ipv4: no item->spec!");
+ return 0;
+ }
- /*
- * reserve space for L4 info.
- * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4
- */
- if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
- *mbuf_off += sizeof(struct nfp_flower_tp_ports);
+ hdr = is_mask ? &mask->hdr : &spec->hdr;
+ ipv4_udp_tun = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
- hdr = is_mask ? &mask->hdr : &spec->hdr;
- ipv4 = (struct nfp_flower_ipv4 *)*mbuf_off;
+ ipv4_udp_tun->ip_ext.tos = hdr->type_of_service;
+ ipv4_udp_tun->ip_ext.ttl = hdr->time_to_live;
+ ipv4_udp_tun->ipv4.src = hdr->src_addr;
+ ipv4_udp_tun->ipv4.dst = hdr->dst_addr;
+ } else {
+ if (spec == NULL) {
+ PMD_DRV_LOG(DEBUG, "nfp flow merge ipv4: no item->spec!");
+ goto ipv4_end;
+ }
+
+ /*
+ * reserve space for L4 info.
+ * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4
+ */
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+ *mbuf_off += sizeof(struct nfp_flower_tp_ports);
+
+ hdr = is_mask ? &mask->hdr : &spec->hdr;
+ ipv4 = (struct nfp_flower_ipv4 *)*mbuf_off;
- ipv4->ip_ext.tos = hdr->type_of_service;
- ipv4->ip_ext.proto = hdr->next_proto_id;
- ipv4->ip_ext.ttl = hdr->time_to_live;
- ipv4->ipv4_src = hdr->src_addr;
- ipv4->ipv4_dst = hdr->dst_addr;
+ ipv4->ip_ext.tos = hdr->type_of_service;
+ ipv4->ip_ext.proto = hdr->next_proto_id;
+ ipv4->ip_ext.ttl = hdr->time_to_live;
+ ipv4->ipv4_src = hdr->src_addr;
+ ipv4->ipv4_dst = hdr->dst_addr;
ipv4_end:
- *mbuf_off += sizeof(struct nfp_flower_ipv4);
+ *mbuf_off += sizeof(struct nfp_flower_ipv4);
+ }
return 0;
}
@@ -851,7 +901,8 @@ struct nfp_mask_id_entry {
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask)
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
{
struct nfp_flower_ipv6 *ipv6;
const struct rte_ipv6_hdr *hdr;
@@ -896,7 +947,8 @@ struct nfp_mask_id_entry {
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask)
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
{
uint8_t tcp_flags;
struct nfp_flower_tp_ports *ports;
@@ -968,7 +1020,8 @@ struct nfp_mask_id_entry {
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask)
+ bool is_mask,
+ bool is_outer_layer)
{
char *ports_off;
struct nfp_flower_tp_ports *ports;
@@ -982,6 +1035,12 @@ struct nfp_mask_id_entry {
return 0;
}
+ /* Don't add L4 info if working on an inner layer pattern */
+ if (!is_outer_layer) {
+ PMD_DRV_LOG(INFO, "Detected inner layer UDP, skipping.");
+ return 0;
+ }
+
meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) {
ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) -
@@ -1009,7 +1068,8 @@ struct nfp_mask_id_entry {
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask)
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
{
char *ports_off;
struct nfp_flower_tp_ports *ports;
@@ -1045,10 +1105,42 @@ struct nfp_mask_id_entry {
return 0;
}
+static int
+nfp_flow_merge_vxlan(__rte_unused struct rte_flow *nfp_flow,
+ char **mbuf_off,
+ const struct rte_flow_item *item,
+ const struct nfp_flow_item_proc *proc,
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
+{
+ const struct rte_vxlan_hdr *hdr;
+ struct nfp_flower_ipv4_udp_tun *tun4;
+ const struct rte_flow_item_vxlan *spec;
+ const struct rte_flow_item_vxlan *mask;
+
+ spec = item->spec;
+ if (spec == NULL) {
+ PMD_DRV_LOG(DEBUG, "nfp flow merge vxlan: no item->spec!");
+ goto vxlan_end;
+ }
+
+ mask = item->mask ? item->mask : proc->mask_default;
+ hdr = is_mask ? &mask->hdr : &spec->hdr;
+
+ tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
+ tun4->tun_id = hdr->vx_vni;
+
+vxlan_end:
+ *mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
+
+ return 0;
+}
+
/* Graph of supported items and associated process function */
static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
[RTE_FLOW_ITEM_TYPE_END] = {
- .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH),
+ .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_IPV4),
},
[RTE_FLOW_ITEM_TYPE_ETH] = {
.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VLAN,
@@ -1131,6 +1223,7 @@ struct nfp_mask_id_entry {
.merge = nfp_flow_merge_tcp,
},
[RTE_FLOW_ITEM_TYPE_UDP] = {
+ .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VXLAN),
.mask_support = &(const struct rte_flow_item_udp){
.hdr = {
.src_port = RTE_BE16(0xffff),
@@ -1152,6 +1245,17 @@ struct nfp_mask_id_entry {
.mask_sz = sizeof(struct rte_flow_item_sctp),
.merge = nfp_flow_merge_sctp,
},
+ [RTE_FLOW_ITEM_TYPE_VXLAN] = {
+ .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH),
+ .mask_support = &(const struct rte_flow_item_vxlan){
+ .hdr = {
+ .vx_vni = RTE_BE32(0xffffff00),
+ },
+ },
+ .mask_default = &rte_flow_item_vxlan_mask,
+ .mask_sz = sizeof(struct rte_flow_item_vxlan),
+ .merge = nfp_flow_merge_vxlan,
+ },
};
static int
@@ -1205,21 +1309,53 @@ struct nfp_mask_id_entry {
return ret;
}
+static bool
+nfp_flow_is_tun_item(const struct rte_flow_item *item)
+{
+ if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
+ return true;
+
+ return false;
+}
+
+static bool
+nfp_flow_inner_item_get(const struct rte_flow_item items[],
+ const struct rte_flow_item **inner_item)
+{
+ const struct rte_flow_item *item;
+
+ *inner_item = items;
+
+ for (item = items; item->type != RTE_FLOW_ITEM_TYPE_END; ++item) {
+ if (nfp_flow_is_tun_item(item)) {
+ *inner_item = ++item;
+ return true;
+ }
+ }
+
+ return false;
+}
+
static int
nfp_flow_compile_item_proc(const struct rte_flow_item items[],
struct rte_flow *nfp_flow,
char **mbuf_off_exact,
- char **mbuf_off_mask)
+ char **mbuf_off_mask,
+ bool is_outer_layer)
{
int i;
int ret = 0;
+ bool continue_flag = true;
const struct rte_flow_item *item;
const struct nfp_flow_item_proc *proc_list;
proc_list = nfp_flow_item_proc_list;
- for (item = items; item->type != RTE_FLOW_ITEM_TYPE_END; ++item) {
+ for (item = items; item->type != RTE_FLOW_ITEM_TYPE_END && continue_flag; ++item) {
const struct nfp_flow_item_proc *proc = NULL;
+ if (nfp_flow_is_tun_item(item))
+ continue_flag = false;
+
for (i = 0; proc_list->next_item && proc_list->next_item[i]; ++i) {
if (proc_list->next_item[i] == item->type) {
proc = &nfp_flow_item_proc_list[item->type];
@@ -1248,14 +1384,14 @@ struct nfp_mask_id_entry {
}
ret = proc->merge(nfp_flow, mbuf_off_exact, item,
- proc, false);
+ proc, false, is_outer_layer);
if (ret != 0) {
PMD_DRV_LOG(ERR, "nfp flow item %d exact merge failed", item->type);
break;
}
ret = proc->merge(nfp_flow, mbuf_off_mask, item,
- proc, true);
+ proc, true, is_outer_layer);
if (ret != 0) {
PMD_DRV_LOG(ERR, "nfp flow item %d mask merge failed", item->type);
break;
@@ -1275,6 +1411,9 @@ struct nfp_mask_id_entry {
int ret;
char *mbuf_off_mask;
char *mbuf_off_exact;
+ bool is_tun_flow = false;
+ bool is_outer_layer = true;
+ const struct rte_flow_item *loop_item;
mbuf_off_exact = nfp_flow->payload.unmasked_data +
sizeof(struct nfp_flower_meta_tci) +
@@ -1283,14 +1422,29 @@ struct nfp_mask_id_entry {
sizeof(struct nfp_flower_meta_tci) +
sizeof(struct nfp_flower_in_port);
+ /* Check if this is a tunnel flow and get the inner item */
+ is_tun_flow = nfp_flow_inner_item_get(items, &loop_item);
+ if (is_tun_flow)
+ is_outer_layer = false;
+
/* Go over items */
- ret = nfp_flow_compile_item_proc(items, nfp_flow,
- &mbuf_off_exact, &mbuf_off_mask);
+ ret = nfp_flow_compile_item_proc(loop_item, nfp_flow,
+ &mbuf_off_exact, &mbuf_off_mask, is_outer_layer);
if (ret != 0) {
PMD_DRV_LOG(ERR, "nfp flow item compile failed.");
return -EINVAL;
}
+ /* Go over outer items */
+ if (is_tun_flow) {
+ ret = nfp_flow_compile_item_proc(items, nfp_flow,
+ &mbuf_off_exact, &mbuf_off_mask, true);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "nfp flow outer item compile failed.");
+ return -EINVAL;
+ }
+ }
+
return 0;
}
@@ -2123,12 +2277,35 @@ struct nfp_mask_id_entry {
return 0;
}
+static int
+nfp_flow_tunnel_match(__rte_unused struct rte_eth_dev *dev,
+ __rte_unused struct rte_flow_tunnel *tunnel,
+ __rte_unused struct rte_flow_item **pmd_items,
+ uint32_t *num_of_items,
+ __rte_unused struct rte_flow_error *err)
+{
+ *num_of_items = 0;
+
+ return 0;
+}
+
+static int
+nfp_flow_tunnel_item_release(__rte_unused struct rte_eth_dev *dev,
+ __rte_unused struct rte_flow_item *pmd_items,
+ __rte_unused uint32_t num_of_items,
+ __rte_unused struct rte_flow_error *err)
+{
+ return 0;
+}
+
static const struct rte_flow_ops nfp_flow_ops = {
.validate = nfp_flow_validate,
.create = nfp_flow_create,
.destroy = nfp_flow_destroy,
.flush = nfp_flow_flush,
.query = nfp_flow_query,
+ .tunnel_match = nfp_flow_tunnel_match,
+ .tunnel_item_release = nfp_flow_tunnel_item_release,
};
int
--
1.8.3.1
* [PATCH 02/25] net/nfp: add the offload support of IPv6 VXLAN item
2022-10-18 3:22 [PATCH 00/25] add the extended rte_flow offload support of nfp PMD Chaoyong He
2022-10-18 3:22 ` [PATCH 01/25] net/nfp: add the offload support of IPv4 VXLAN item Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 03/25] net/nfp: prepare for the encap action of IPv4 tunnel Chaoyong He
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the corresponding data structures and logic to support
offload of the IPv6 VXLAN item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_cmsg.h | 42 ++++++++++++
drivers/net/nfp/nfp_flow.c | 113 ++++++++++++++++++++++++-------
2 files changed, 129 insertions(+), 26 deletions(-)
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index 08e2873..996ba3b 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -329,6 +329,11 @@ struct nfp_flower_tun_ipv4 {
rte_be32_t dst;
};
+struct nfp_flower_tun_ipv6 {
+ uint8_t ipv6_src[16];
+ uint8_t ipv6_dst[16];
+};
+
struct nfp_flower_tun_ip_ext {
uint8_t tos;
uint8_t ttl;
@@ -359,6 +364,43 @@ struct nfp_flower_ipv4_udp_tun {
rte_be32_t tun_id;
};
+/*
+ * Flow Frame IPv6 UDP TUNNEL --> Tunnel details (11W/44B)
+ * -----------------------------------------------------------------
+ * 3 2 1
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 31 - 0 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 63 - 32 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 95 - 64 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 127 - 96 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 31 - 0 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 63 - 32 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 95 - 64 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 127 - 96 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved | tos | ttl |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | VNI | Reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ */
+struct nfp_flower_ipv6_udp_tun {
+ struct nfp_flower_tun_ipv6 ipv6;
+ rte_be16_t reserved1;
+ struct nfp_flower_tun_ip_ext ip_ext;
+ rte_be32_t reserved2;
+ rte_be32_t tun_id;
+};
+
struct nfp_fl_act_head {
uint8_t jump_id;
uint8_t len_lw;
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 5f6f800..1673518 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -505,6 +505,7 @@ struct nfp_mask_id_entry {
{
struct rte_eth_dev *ethdev;
bool outer_ip4_flag = false;
+ bool outer_ip6_flag = false;
const struct rte_flow_item *item;
struct nfp_flower_representor *representor;
const struct rte_flow_item_port_id *port_id;
@@ -547,6 +548,8 @@ struct nfp_mask_id_entry {
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_IPV6 detected");
key_ls->key_layer |= NFP_FLOWER_LAYER_IPV6;
key_ls->key_size += sizeof(struct nfp_flower_ipv6);
+ if (!outer_ip6_flag)
+ outer_ip6_flag = true;
break;
case RTE_FLOW_ITEM_TYPE_TCP:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_TCP detected");
@@ -565,8 +568,9 @@ struct nfp_mask_id_entry {
break;
case RTE_FLOW_ITEM_TYPE_VXLAN:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_VXLAN detected");
- /* Clear IPv4 bits */
+ /* Clear IPv4 and IPv6 bits */
key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV4;
+ key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV6;
key_ls->tun_type = NFP_FL_TUN_VXLAN;
key_ls->key_layer |= NFP_FLOWER_LAYER_VXLAN;
if (outer_ip4_flag) {
@@ -576,6 +580,19 @@ struct nfp_mask_id_entry {
* in `struct nfp_flower_ipv4_udp_tun`
*/
key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
+ } else if (outer_ip6_flag) {
+ key_ls->key_layer |= NFP_FLOWER_LAYER_EXT_META;
+ key_ls->key_layer_two |= NFP_FLOWER_LAYER2_TUN_IPV6;
+ key_ls->key_size += sizeof(struct nfp_flower_ext_meta);
+ key_ls->key_size += sizeof(struct nfp_flower_ipv6_udp_tun);
+ /*
+ * The outer L3 layer information is
+ * in `struct nfp_flower_ipv6_udp_tun`
+ */
+ key_ls->key_size -= sizeof(struct nfp_flower_ipv6);
+ } else {
+ PMD_DRV_LOG(ERR, "No outer IP layer for VXLAN tunnel.");
+ return -EINVAL;
}
break;
default:
@@ -902,42 +919,61 @@ struct nfp_mask_id_entry {
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
bool is_mask,
- __rte_unused bool is_outer_layer)
+ bool is_outer_layer)
{
struct nfp_flower_ipv6 *ipv6;
const struct rte_ipv6_hdr *hdr;
struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item_ipv6 *spec;
const struct rte_flow_item_ipv6 *mask;
+ struct nfp_flower_ipv6_udp_tun *ipv6_udp_tun;
spec = item->spec;
mask = item->mask ? item->mask : proc->mask_default;
meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
- if (spec == NULL) {
- PMD_DRV_LOG(DEBUG, "nfp flow merge ipv6: no item->spec!");
- goto ipv6_end;
- }
+ if (is_outer_layer && nfp_flow_is_tunnel(nfp_flow)) {
+ if (spec == NULL) {
+ PMD_DRV_LOG(DEBUG, "nfp flow merge ipv6: no item->spec!");
+ return 0;
+ }
- /*
- * reserve space for L4 info.
- * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4
- */
- if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
- *mbuf_off += sizeof(struct nfp_flower_tp_ports);
+ hdr = is_mask ? &mask->hdr : &spec->hdr;
+ ipv6_udp_tun = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
+
+ ipv6_udp_tun->ip_ext.tos = (hdr->vtc_flow &
+ RTE_IPV6_HDR_TC_MASK) >> RTE_IPV6_HDR_TC_SHIFT;
+ ipv6_udp_tun->ip_ext.ttl = hdr->hop_limits;
+ memcpy(ipv6_udp_tun->ipv6.ipv6_src, hdr->src_addr,
+ sizeof(ipv6_udp_tun->ipv6.ipv6_src));
+ memcpy(ipv6_udp_tun->ipv6.ipv6_dst, hdr->dst_addr,
+ sizeof(ipv6_udp_tun->ipv6.ipv6_dst));
+ } else {
+ if (spec == NULL) {
+ PMD_DRV_LOG(DEBUG, "nfp flow merge ipv6: no item->spec!");
+ goto ipv6_end;
+ }
- hdr = is_mask ? &mask->hdr : &spec->hdr;
- ipv6 = (struct nfp_flower_ipv6 *)*mbuf_off;
+ /*
+ * reserve space for L4 info.
+ * rte_flow has ipv6 before L4 but NFP flower fw requires L4 before ipv6
+ */
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+ *mbuf_off += sizeof(struct nfp_flower_tp_ports);
+
+ hdr = is_mask ? &mask->hdr : &spec->hdr;
+ ipv6 = (struct nfp_flower_ipv6 *)*mbuf_off;
- ipv6->ip_ext.tos = (hdr->vtc_flow & RTE_IPV6_HDR_TC_MASK) >>
- RTE_IPV6_HDR_TC_SHIFT;
- ipv6->ip_ext.proto = hdr->proto;
- ipv6->ip_ext.ttl = hdr->hop_limits;
- memcpy(ipv6->ipv6_src, hdr->src_addr, sizeof(ipv6->ipv6_src));
- memcpy(ipv6->ipv6_dst, hdr->dst_addr, sizeof(ipv6->ipv6_dst));
+ ipv6->ip_ext.tos = (hdr->vtc_flow & RTE_IPV6_HDR_TC_MASK) >>
+ RTE_IPV6_HDR_TC_SHIFT;
+ ipv6->ip_ext.proto = hdr->proto;
+ ipv6->ip_ext.ttl = hdr->hop_limits;
+ memcpy(ipv6->ipv6_src, hdr->src_addr, sizeof(ipv6->ipv6_src));
+ memcpy(ipv6->ipv6_dst, hdr->dst_addr, sizeof(ipv6->ipv6_dst));
ipv6_end:
- *mbuf_off += sizeof(struct nfp_flower_ipv6);
+ *mbuf_off += sizeof(struct nfp_flower_ipv6);
+ }
return 0;
}
@@ -1106,7 +1142,7 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_merge_vxlan(__rte_unused struct rte_flow *nfp_flow,
+nfp_flow_merge_vxlan(struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -1115,8 +1151,15 @@ struct nfp_mask_id_entry {
{
const struct rte_vxlan_hdr *hdr;
struct nfp_flower_ipv4_udp_tun *tun4;
+ struct nfp_flower_ipv6_udp_tun *tun6;
+ struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item_vxlan *spec;
const struct rte_flow_item_vxlan *mask;
+ struct nfp_flower_ext_meta *ext_meta = NULL;
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
spec = item->spec;
if (spec == NULL) {
@@ -1127,11 +1170,21 @@ struct nfp_mask_id_entry {
mask = item->mask ? item->mask : proc->mask_default;
hdr = is_mask ? &mask->hdr : &spec->hdr;
- tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
- tun4->tun_id = hdr->vx_vni;
+ if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_TUN_IPV6)) {
+ tun6 = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
+ tun6->tun_id = hdr->vx_vni;
+ } else {
+ tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
+ tun4->tun_id = hdr->vx_vni;
+ }
vxlan_end:
- *mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
+ if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_TUN_IPV6))
+ *mbuf_off += sizeof(struct nfp_flower_ipv6_udp_tun);
+ else
+ *mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
return 0;
}
@@ -1140,7 +1193,8 @@ struct nfp_mask_id_entry {
static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
[RTE_FLOW_ITEM_TYPE_END] = {
.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4),
+ RTE_FLOW_ITEM_TYPE_IPV4,
+ RTE_FLOW_ITEM_TYPE_IPV6),
},
[RTE_FLOW_ITEM_TYPE_ETH] = {
.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VLAN,
@@ -1413,6 +1467,7 @@ struct nfp_mask_id_entry {
char *mbuf_off_exact;
bool is_tun_flow = false;
bool is_outer_layer = true;
+ struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item *loop_item;
mbuf_off_exact = nfp_flow->payload.unmasked_data +
@@ -1422,6 +1477,12 @@ struct nfp_mask_id_entry {
sizeof(struct nfp_flower_meta_tci) +
sizeof(struct nfp_flower_in_port);
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) {
+ mbuf_off_exact += sizeof(struct nfp_flower_ext_meta);
+ mbuf_off_mask += sizeof(struct nfp_flower_ext_meta);
+ }
+
/* Check if this is a tunnel flow and get the inner item*/
is_tun_flow = nfp_flow_inner_item_get(items, &loop_item);
if (is_tun_flow)
--
1.8.3.1
* [PATCH 03/25] net/nfp: prepare for the encap action of IPv4 tunnel
2022-10-18 3:22 [PATCH 00/25] add the extended rte_flow offload support of nfp PMD Chaoyong He
2022-10-18 3:22 ` [PATCH 01/25] net/nfp: add the offload support of IPv4 VXLAN item Chaoyong He
2022-10-18 3:22 ` [PATCH 02/25] net/nfp: add the offload support of IPv6 " Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 04/25] net/nfp: prepare for the encap action of IPv6 tunnel Chaoyong He
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the related data structures and functions to prepare for
the encap action of IPv4 tunnels.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_cmsg.c | 29 ++++++++++
drivers/net/nfp/flower/nfp_flower_cmsg.h | 93 ++++++++++++++++++++++++++++++++
drivers/net/nfp/nfp_flow.c | 88 ++++++++++++++++++++++++++++++
drivers/net/nfp/nfp_flow.h | 27 ++++++++++
4 files changed, 237 insertions(+)
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c
index 15d8381..7021d1f 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.c
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c
@@ -246,3 +246,32 @@
return 0;
}
+
+int
+nfp_flower_cmsg_tun_neigh_v4_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_flower_cmsg_tun_neigh_v4 *payload)
+{
+ uint16_t cnt;
+ size_t msg_len;
+ struct rte_mbuf *mbuf;
+ struct nfp_flower_cmsg_tun_neigh_v4 *msg;
+
+ mbuf = rte_pktmbuf_alloc(app_fw_flower->ctrl_pktmbuf_pool);
+ if (mbuf == NULL) {
+ PMD_DRV_LOG(DEBUG, "Failed to alloc mbuf for v4 tun neigh");
+ return -ENOMEM;
+ }
+
+ msg_len = sizeof(struct nfp_flower_cmsg_tun_neigh_v4);
+ msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_TUN_NEIGH, msg_len);
+ memcpy(msg, payload, msg_len);
+
+ cnt = nfp_flower_ctrl_vnic_xmit(app_fw_flower, mbuf);
+ if (cnt == 0) {
+ PMD_DRV_LOG(ERR, "Send cmsg through ctrl vnic failed.");
+ rte_pktmbuf_free(mbuf);
+ return -EIO;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index 996ba3b..e44e311 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -129,6 +129,36 @@ struct nfp_flower_cmsg_port_mod {
rte_be16_t mtu;
};
+struct nfp_flower_tun_neigh {
+ uint8_t dst_mac[RTE_ETHER_ADDR_LEN];
+ uint8_t src_mac[RTE_ETHER_ADDR_LEN];
+ rte_be32_t port_id;
+};
+
+/*
+ * NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V4
+ * Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
+ * -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +---------------------------------------------------------------+
+ * 0 | DST_IPV4 |
+ * +---------------------------------------------------------------+
+ * 1 | SRC_IPV4 |
+ * +---------------------------------------------------------------+
+ * 2 | DST_MAC_B5_B4_B3_B2 |
+ * +-------------------------------+-------------------------------+
+ * 3 | DST_MAC_B1_B0 | SRC_MAC_B5_B4 |
+ * +-------------------------------+-------------------------------+
+ * 4 | SRC_MAC_B3_B2_B1_B0 |
+ * +---------------------------------------------------------------+
+ * 5 | Egress Port (NFP internal) |
+ * +---------------------------------------------------------------+
+ */
+struct nfp_flower_cmsg_tun_neigh_v4 {
+ rte_be32_t dst_ipv4;
+ rte_be32_t src_ipv4;
+ struct nfp_flower_tun_neigh common;
+};
+
/*
* NFP_FLOWER_CMSG_TYPE_FLOW_STATS
* Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
@@ -574,6 +604,67 @@ struct nfp_fl_act_set_tport {
rte_be16_t dst_port;
};
+/*
+ * Pre-tunnel
+ * 3 2 1
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | - | opcode | |jump_id| - |M| - |V|
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_daddr_127_96 / ipv4_daddr |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_daddr_95_64 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_daddr_63_32 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_daddr_31_0 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ */
+struct nfp_fl_act_pre_tun {
+ struct nfp_fl_act_head head;
+ rte_be16_t flags;
+ union {
+ rte_be32_t ipv4_dst;
+ uint8_t ipv6_dst[16];
+ };
+};
+
+/*
+ * Set tunnel
+ * 3 2 1
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | res | opcode | res | len_lw| reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | tun_id0 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | tun_id1 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | reserved | type |r| idx |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv4_flags | ttl | tos |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | reserved_cvs1 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | reserved_cvs2 | reserved_cvs3 |
+ * | var_flags | var_np |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ */
+struct nfp_fl_act_set_tun {
+ struct nfp_fl_act_head head;
+ rte_be16_t reserved;
+ rte_be64_t tun_id;
+ rte_be32_t tun_type_index;
+ rte_be16_t tun_flags;
+ uint8_t ttl;
+ uint8_t tos;
+ rte_be16_t outer_vlan_tpid;
+ rte_be16_t outer_vlan_tci;
+ uint8_t tun_len; /* Only valid for NFP_FL_TUNNEL_GENEVE */
+ uint8_t reserved2;
+ rte_be16_t tun_proto; /* Only valid for NFP_FL_TUNNEL_GENEVE */
+} __rte_packed;
+
int nfp_flower_cmsg_mac_repr(struct nfp_app_fw_flower *app_fw_flower);
int nfp_flower_cmsg_repr_reify(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_flower_representor *repr);
@@ -583,5 +674,7 @@ int nfp_flower_cmsg_flow_delete(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *flow);
int nfp_flower_cmsg_flow_add(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *flow);
+int nfp_flower_cmsg_tun_neigh_v4_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_flower_cmsg_tun_neigh_v4 *payload);
#endif /* _NFP_CMSG_H_ */
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 1673518..6efb95a 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1790,6 +1790,91 @@ struct nfp_mask_id_entry {
tc_hl->reserved = 0;
}
+__rte_unused static void
+nfp_flow_pre_tun_v4_process(struct nfp_fl_act_pre_tun *pre_tun,
+ rte_be32_t ipv4_dst)
+{
+ pre_tun->head.jump_id = NFP_FL_ACTION_OPCODE_PRE_TUNNEL;
+ pre_tun->head.len_lw = sizeof(struct nfp_fl_act_pre_tun) >> NFP_FL_LW_SIZ;
+ pre_tun->ipv4_dst = ipv4_dst;
+}
+
+__rte_unused static void
+nfp_flow_set_tun_process(struct nfp_fl_act_set_tun *set_tun,
+ enum nfp_flower_tun_type tun_type,
+ uint64_t tun_id,
+ uint8_t ttl,
+ uint8_t tos)
+{
+ /* Currently only support one pre-tunnel, so index is always 0. */
+ uint8_t pretun_idx = 0;
+ uint32_t tun_type_index;
+
+ tun_type_index = ((tun_type << 4) & 0xf0) | (pretun_idx & 0x07);
+
+ set_tun->head.jump_id = NFP_FL_ACTION_OPCODE_SET_TUNNEL;
+ set_tun->head.len_lw = sizeof(struct nfp_fl_act_set_tun) >> NFP_FL_LW_SIZ;
+ set_tun->tun_type_index = rte_cpu_to_be_32(tun_type_index);
+ set_tun->tun_id = rte_cpu_to_be_64(tun_id);
+ set_tun->ttl = ttl;
+ set_tun->tos = tos;
+}
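`tun_type_index` packs the tunnel type into bits [7:4] and the pre-tunnel table index into the low bits before the value is byte-swapped into the cmsg. A sketch of just the packing (the concrete enum values, e.g. that `NFP_FL_TUN_GENEVE` is 4, live in nfp_flow.h; the inputs below are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the packing in nfp_flow_set_tun_process(): tunnel type in
 * bits [7:4], pre-tunnel table index in bits [2:0]. */
static uint32_t pack_tun_type_index(uint8_t tun_type, uint8_t pretun_idx)
{
	return ((tun_type << 4) & 0xf0) | (pretun_idx & 0x07);
}
```

With the current single pre-tunnel entry the index is always 0, so only the type nibble varies.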
+
+__rte_unused static int
+nfp_flower_add_tun_neigh_v4_encap(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun,
+ const struct rte_ether_hdr *eth,
+ const struct rte_flow_item_ipv4 *ipv4)
+{
+ struct nfp_fl_tun *tmp;
+ struct nfp_flow_priv *priv;
+ struct nfp_flower_in_port *port;
+ struct nfp_flower_cmsg_tun_neigh_v4 payload;
+
+ tun->payload.v6_flag = 0;
+ tun->payload.dst.dst_ipv4 = ipv4->hdr.dst_addr;
+ tun->payload.src.src_ipv4 = ipv4->hdr.src_addr;
+ memcpy(tun->payload.dst_addr, eth->dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+ memcpy(tun->payload.src_addr, eth->src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ tun->ref_cnt = 1;
+ priv = app_fw_flower->flow_priv;
+ LIST_FOREACH(tmp, &priv->nn_list, next) {
+ if (memcmp(&tmp->payload, &tun->payload, sizeof(struct nfp_fl_tun_entry)) == 0) {
+ tmp->ref_cnt++;
+ return 0;
+ }
+ }
+
+ LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+
+ port = (struct nfp_flower_in_port *)((char *)nfp_flow_meta +
+ sizeof(struct nfp_fl_rule_metadata) +
+ sizeof(struct nfp_flower_meta_tci));
+
+ memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v4));
+ payload.dst_ipv4 = ipv4->hdr.dst_addr;
+ payload.src_ipv4 = ipv4->hdr.src_addr;
+ memcpy(payload.common.dst_mac, eth->dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+ memcpy(payload.common.src_mac, eth->src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+ payload.common.port_id = port->in_port;
+
+ return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
+}
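The neighbor table is deduplicated by payload: if an identical entry already exists it just gains a reference and no cmsg is sent; otherwise the new entry is linked at the head of `nn_list`. A standalone sketch of that ref-counting pattern with the BSD `sys/queue.h` LIST macros (the `entry` type and `payload` field are hypothetical stand-ins for `nfp_fl_tun`):

```c
#include <string.h>
#include <sys/queue.h>

struct entry {
	LIST_ENTRY(entry) next;
	int ref_cnt;
	int payload;                 /* stands in for struct nfp_fl_tun_entry */
};

LIST_HEAD(entry_list, entry);

/* Returns 1 if an existing identical entry absorbed the reference,
 * 0 if 'e' was inserted as a new head element (mirroring the dedup
 * loop in nfp_flower_add_tun_neigh_v4_encap). */
static int ref_insert(struct entry_list *head, struct entry *e)
{
	struct entry *tmp;

	e->ref_cnt = 1;
	LIST_FOREACH(tmp, head, next) {
		if (memcmp(&tmp->payload, &e->payload, sizeof(e->payload)) == 0) {
			tmp->ref_cnt++;
			return 1;
		}
	}
	LIST_INSERT_HEAD(head, e, next);
	return 0;
}
```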
+
+__rte_unused static int
+nfp_flower_del_tun_neigh_v4(struct nfp_app_fw_flower *app_fw_flower,
+ rte_be32_t ipv4)
+{
+ struct nfp_flower_cmsg_tun_neigh_v4 payload;
+
+ memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v4));
+ payload.dst_ipv4 = ipv4;
+
+ return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
+}
+
static int
nfp_flow_compile_action(__rte_unused struct nfp_flower_representor *representor,
const struct rte_flow_action actions[],
@@ -2491,6 +2576,9 @@ struct nfp_mask_id_entry {
goto free_mask_table;
}
+ /* neighbor next list */
+ LIST_INIT(&priv->nn_list);
+
return 0;
free_mask_table:
diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h
index b3bd949..14da800 100644
--- a/drivers/net/nfp/nfp_flow.h
+++ b/drivers/net/nfp/nfp_flow.h
@@ -89,6 +89,11 @@ enum nfp_flower_tun_type {
NFP_FL_TUN_GENEVE = 4,
};
+enum nfp_flow_type {
+ NFP_FLOW_COMMON,
+ NFP_FLOW_ENCAP,
+};
+
struct nfp_fl_key_ls {
uint32_t key_layer_two;
uint8_t key_layer;
@@ -117,6 +122,24 @@ struct nfp_fl_payload {
char *action_data;
};
+struct nfp_fl_tun {
+ LIST_ENTRY(nfp_fl_tun) next;
+ uint8_t ref_cnt;
+ struct nfp_fl_tun_entry {
+ uint8_t v6_flag;
+ uint8_t dst_addr[RTE_ETHER_ADDR_LEN];
+ uint8_t src_addr[RTE_ETHER_ADDR_LEN];
+ union {
+ rte_be32_t dst_ipv4;
+ uint8_t dst_ipv6[16];
+ } dst;
+ union {
+ rte_be32_t src_ipv4;
+ uint8_t src_ipv6[16];
+ } src;
+ } payload;
+};
+
#define CIRC_CNT(head, tail, size) (((head) - (tail)) & ((size) - 1))
#define CIRC_SPACE(head, tail, size) CIRC_CNT((tail), ((head) + 1), (size))
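These are the classic power-of-two ring-buffer macros: masking with `size - 1` makes the head/tail subtraction wrap correctly, and one slot is left unused so a full ring is distinguishable from an empty one. Example values, assuming `size` is a power of two as the macros require:

```c
#include <assert.h>

/* Same definitions as in nfp_flow.h; size must be a power of two. */
#define CIRC_CNT(head, tail, size)   (((head) - (tail)) & ((size) - 1))
#define CIRC_SPACE(head, tail, size) CIRC_CNT((tail), ((head) + 1), (size))
```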
struct circ_buf {
@@ -160,12 +183,16 @@ struct nfp_flow_priv {
struct nfp_fl_stats_id stats_ids; /**< The stats id ring. */
struct nfp_fl_stats *stats; /**< Store stats of flow. */
rte_spinlock_t stats_lock; /**< Lock the update of 'stats' field. */
+ /* neighbor next */
+ LIST_HEAD(, nfp_fl_tun) nn_list; /**< Store nn entries. */
};
struct rte_flow {
struct nfp_fl_payload payload;
+ struct nfp_fl_tun tun;
size_t length;
bool install_flag;
+ enum nfp_flow_type type;
};
int nfp_flow_priv_init(struct nfp_pf_dev *pf_dev);
--
1.8.3.1
^ permalink raw reply [flat|nested] 88+ messages in thread
* [PATCH 04/25] net/nfp: prepare for the encap action of IPv6 tunnel
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (2 preceding siblings ...)
2022-10-18 3:22 ` [PATCH 03/25] net/nfp: prepare for the encap action of IPv4 tunnel Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 05/25] net/nfp: add the offload support of IPv4 VXLAN encap action Chaoyong He
` (22 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the related data structures and functions, in preparation for
the encap action of IPv6 tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_cmsg.c | 29 +++++++++
drivers/net/nfp/flower/nfp_flower_cmsg.h | 40 ++++++++++++
drivers/net/nfp/nfp_flow.c | 105 ++++++++++++++++++++++++++++++-
3 files changed, 173 insertions(+), 1 deletion(-)
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c
index 7021d1f..8983178 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.c
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c
@@ -275,3 +275,32 @@
return 0;
}
+
+int
+nfp_flower_cmsg_tun_neigh_v6_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_flower_cmsg_tun_neigh_v6 *payload)
+{
+ uint16_t cnt;
+ size_t msg_len;
+ struct rte_mbuf *mbuf;
+ struct nfp_flower_cmsg_tun_neigh_v6 *msg;
+
+ mbuf = rte_pktmbuf_alloc(app_fw_flower->ctrl_pktmbuf_pool);
+ if (mbuf == NULL) {
+ PMD_DRV_LOG(DEBUG, "Failed to alloc mbuf for v6 tun neigh");
+ return -ENOMEM;
+ }
+
+ msg_len = sizeof(struct nfp_flower_cmsg_tun_neigh_v6);
+ msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V6, msg_len);
+ memcpy(msg, payload, msg_len);
+
+ cnt = nfp_flower_ctrl_vnic_xmit(app_fw_flower, mbuf);
+ if (cnt == 0) {
+ PMD_DRV_LOG(ERR, "Send cmsg through ctrl vnic failed.");
+ rte_pktmbuf_free(mbuf);
+ return -EIO;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index e44e311..d1e0562 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -160,6 +160,42 @@ struct nfp_flower_cmsg_tun_neigh_v4 {
};
/*
+ * NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V6
+ * Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
+ * -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +---------------------------------------------------------------+
+ * 0 | DST_IPV6 [0] |
+ * +---------------------------------------------------------------+
+ * 1 | DST_IPV6 [1] |
+ * +---------------------------------------------------------------+
+ * 2 | DST_IPV6 [2] |
+ * +---------------------------------------------------------------+
+ * 3 | DST_IPV6 [3] |
+ * +---------------------------------------------------------------+
+ * 4 | SRC_IPV6 [0] |
+ * +---------------------------------------------------------------+
+ * 5 | SRC_IPV6 [1] |
+ * +---------------------------------------------------------------+
+ * 6 | SRC_IPV6 [2] |
+ * +---------------------------------------------------------------+
+ * 7 | SRC_IPV6 [3] |
+ * +---------------------------------------------------------------+
+ * 8 | DST_MAC_B5_B4_B3_B2 |
+ * +-------------------------------+-------------------------------+
+ * 9 | DST_MAC_B1_B0 | SRC_MAC_B5_B4 |
+ * +-------------------------------+-------------------------------+
+ * 10 | SRC_MAC_B3_B2_B1_B0 |
+ * +---------------+---------------+---------------+---------------+
+ * 11 | Egress Port (NFP internal) |
+ * +---------------------------------------------------------------+
+ */
+struct nfp_flower_cmsg_tun_neigh_v6 {
+ uint8_t dst_ipv6[16];
+ uint8_t src_ipv6[16];
+ struct nfp_flower_tun_neigh common;
+};
+
+/*
* NFP_FLOWER_CMSG_TYPE_FLOW_STATS
* Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
* -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
@@ -629,6 +665,8 @@ struct nfp_fl_act_pre_tun {
};
};
+#define NFP_FL_PRE_TUN_IPV6 (1 << 0)
+
/*
* Set tunnel
* 3 2 1
@@ -676,5 +714,7 @@ int nfp_flower_cmsg_flow_add(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *flow);
int nfp_flower_cmsg_tun_neigh_v4_rule(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_flower_cmsg_tun_neigh_v4 *payload);
+int nfp_flower_cmsg_tun_neigh_v6_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_flower_cmsg_tun_neigh_v6 *payload);
#endif /* _NFP_CMSG_H_ */
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 6efb95a..b8e666c 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1800,6 +1800,16 @@ struct nfp_mask_id_entry {
}
__rte_unused static void
+nfp_flow_pre_tun_v6_process(struct nfp_fl_act_pre_tun *pre_tun,
+ const uint8_t ipv6_dst[])
+{
+ pre_tun->head.jump_id = NFP_FL_ACTION_OPCODE_PRE_TUNNEL;
+ pre_tun->head.len_lw = sizeof(struct nfp_fl_act_pre_tun) >> NFP_FL_LW_SIZ;
+ pre_tun->flags = rte_cpu_to_be_16(NFP_FL_PRE_TUN_IPV6);
+ memcpy(pre_tun->ipv6_dst, ipv6_dst, sizeof(pre_tun->ipv6_dst));
+}
+
+__rte_unused static void
nfp_flow_set_tun_process(struct nfp_fl_act_set_tun *set_tun,
enum nfp_flower_tun_type tun_type,
uint64_t tun_id,
@@ -1863,7 +1873,7 @@ struct nfp_mask_id_entry {
return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
}
-__rte_unused static int
+static int
nfp_flower_del_tun_neigh_v4(struct nfp_app_fw_flower *app_fw_flower,
rte_be32_t ipv4)
{
@@ -1875,6 +1885,99 @@ struct nfp_mask_id_entry {
return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
}
+__rte_unused static int
+nfp_flower_add_tun_neigh_v6_encap(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun,
+ const struct rte_ether_hdr *eth,
+ const struct rte_flow_item_ipv6 *ipv6)
+{
+ struct nfp_fl_tun *tmp;
+ struct nfp_flow_priv *priv;
+ struct nfp_flower_in_port *port;
+ struct nfp_flower_cmsg_tun_neigh_v6 payload;
+
+ tun->payload.v6_flag = 1;
+ memcpy(tun->payload.dst.dst_ipv6, ipv6->hdr.dst_addr, sizeof(tun->payload.dst.dst_ipv6));
+ memcpy(tun->payload.src.src_ipv6, ipv6->hdr.src_addr, sizeof(tun->payload.src.src_ipv6));
+ memcpy(tun->payload.dst_addr, eth->dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+ memcpy(tun->payload.src_addr, eth->src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ tun->ref_cnt = 1;
+ priv = app_fw_flower->flow_priv;
+ LIST_FOREACH(tmp, &priv->nn_list, next) {
+ if (memcmp(&tmp->payload, &tun->payload, sizeof(struct nfp_fl_tun_entry)) == 0) {
+ tmp->ref_cnt++;
+ return 0;
+ }
+ }
+
+ LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+
+ port = (struct nfp_flower_in_port *)((char *)nfp_flow_meta +
+ sizeof(struct nfp_fl_rule_metadata) +
+ sizeof(struct nfp_flower_meta_tci));
+
+ memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v6));
+ memcpy(payload.dst_ipv6, ipv6->hdr.dst_addr, sizeof(payload.dst_ipv6));
+ memcpy(payload.src_ipv6, ipv6->hdr.src_addr, sizeof(payload.src_ipv6));
+ memcpy(payload.common.dst_mac, eth->dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+ memcpy(payload.common.src_mac, eth->src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+ payload.common.port_id = port->in_port;
+
+ return nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &payload);
+}
+
+static int
+nfp_flower_del_tun_neigh_v6(struct nfp_app_fw_flower *app_fw_flower,
+ uint8_t *ipv6)
+{
+ struct nfp_flower_cmsg_tun_neigh_v6 payload;
+
+ memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v6));
+ memcpy(payload.dst_ipv6, ipv6, sizeof(payload.dst_ipv6));
+
+ return nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &payload);
+}
+
+__rte_unused static int
+nfp_flower_del_tun_neigh(struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow)
+{
+ int ret;
+ bool flag = false;
+ struct nfp_fl_tun *tmp;
+ struct nfp_fl_tun *tun;
+
+ tun = &nfp_flow->tun;
+ LIST_FOREACH(tmp, &app_fw_flower->flow_priv->nn_list, next) {
+ ret = memcmp(&tmp->payload, &tun->payload, sizeof(struct nfp_fl_tun_entry));
+ if (ret == 0) {
+ tmp->ref_cnt--;
+ flag = true;
+ break;
+ }
+ }
+
+ if (!flag) {
+ PMD_DRV_LOG(DEBUG, "Can't find nn entry in the nn list");
+ return -EINVAL;
+ }
+
+ if (tmp->ref_cnt == 0) {
+ LIST_REMOVE(tmp, next);
+ if (tmp->payload.v6_flag != 0) {
+ return nfp_flower_del_tun_neigh_v6(app_fw_flower,
+ tmp->payload.dst.dst_ipv6);
+ } else {
+ return nfp_flower_del_tun_neigh_v4(app_fw_flower,
+ tmp->payload.dst.dst_ipv4);
+ }
+ }
+
+ return 0;
+}
+
static int
nfp_flow_compile_action(__rte_unused struct nfp_flower_representor *representor,
const struct rte_flow_action actions[],
--
1.8.3.1
* [PATCH 05/25] net/nfp: add the offload support of IPv4 VXLAN encap action
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (3 preceding siblings ...)
2022-10-18 3:22 ` [PATCH 04/25] net/nfp: prepare for the encap action of IPv6 tunnel Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 06/25] net/nfp: add the offload support of IPv6 " Chaoyong He
` (21 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the offload support of encap action for IPv4 VXLAN tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 1 +
drivers/net/nfp/nfp_flow.c | 132 +++++++++++++++++++++++++++++++++++++--
2 files changed, 128 insertions(+), 5 deletions(-)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index 40b9a4d..fbfd5ba 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -58,3 +58,4 @@ set_mac_src = Y
set_tp_dst = Y
set_tp_src = Y
set_ttl = Y
+vxlan_encap = Y
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index b8e666c..2f04fdf 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -10,8 +10,10 @@
#include <rte_malloc.h>
#include "nfp_common.h"
+#include "nfp_ctrl.h"
#include "nfp_flow.h"
#include "nfp_logs.h"
+#include "nfp_rxtx.h"
#include "flower/nfp_flower.h"
#include "flower/nfp_flower_cmsg.h"
#include "flower/nfp_flower_ctrl.h"
@@ -19,6 +21,17 @@
#include "nfpcore/nfp_mip.h"
#include "nfpcore/nfp_rtsym.h"
+/*
+ * Maximum number of items in struct rte_flow_action_vxlan_encap.
+ * ETH / IPv4(6) / UDP / VXLAN / END
+ */
+#define ACTION_VXLAN_ENCAP_ITEMS_NUM 5
+
+struct vxlan_data {
+ struct rte_flow_action_vxlan_encap conf;
+ struct rte_flow_item items[ACTION_VXLAN_ENCAP_ITEMS_NUM];
+};
+
/* Static initializer for a list of subsequent item types */
#define NEXT_ITEM(...) \
((const enum rte_flow_item_type []){ \
@@ -742,6 +755,11 @@ struct nfp_mask_id_entry {
tc_hl_flag = true;
}
break;
+ case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP detected");
+ key_ls->act_size += sizeof(struct nfp_fl_act_pre_tun);
+ key_ls->act_size += sizeof(struct nfp_fl_act_set_tun);
+ break;
default:
PMD_DRV_LOG(ERR, "Action type %d not supported.", action->type);
return -ENOTSUP;
@@ -1790,7 +1808,7 @@ struct nfp_mask_id_entry {
tc_hl->reserved = 0;
}
-__rte_unused static void
+static void
nfp_flow_pre_tun_v4_process(struct nfp_fl_act_pre_tun *pre_tun,
rte_be32_t ipv4_dst)
{
@@ -1809,7 +1827,7 @@ struct nfp_mask_id_entry {
memcpy(pre_tun->ipv6_dst, ipv6_dst, sizeof(pre_tun->ipv6_dst));
}
-__rte_unused static void
+static void
nfp_flow_set_tun_process(struct nfp_fl_act_set_tun *set_tun,
enum nfp_flower_tun_type tun_type,
uint64_t tun_id,
@@ -1830,7 +1848,7 @@ struct nfp_mask_id_entry {
set_tun->tos = tos;
}
-__rte_unused static int
+static int
nfp_flower_add_tun_neigh_v4_encap(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_fl_rule_metadata *nfp_flow_meta,
struct nfp_fl_tun *tun,
@@ -1940,7 +1958,7 @@ struct nfp_mask_id_entry {
return nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &payload);
}
-__rte_unused static int
+static int
nfp_flower_del_tun_neigh(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *nfp_flow)
{
@@ -1979,7 +1997,81 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_compile_action(__rte_unused struct nfp_flower_representor *representor,
+nfp_flow_action_vxlan_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct vxlan_data *vxlan_data,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ struct nfp_fl_act_pre_tun *pre_tun;
+ struct nfp_fl_act_set_tun *set_tun;
+ const struct rte_flow_item_eth *eth;
+ const struct rte_flow_item_ipv4 *ipv4;
+ const struct rte_flow_item_vxlan *vxlan;
+ size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
+
+ eth = (const struct rte_flow_item_eth *)vxlan_data->items[0].spec;
+ ipv4 = (const struct rte_flow_item_ipv4 *)vxlan_data->items[1].spec;
+ vxlan = (const struct rte_flow_item_vxlan *)vxlan_data->items[3].spec;
+
+ pre_tun = (struct nfp_fl_act_pre_tun *)actions;
+ memset(pre_tun, 0, act_pre_size);
+ nfp_flow_pre_tun_v4_process(pre_tun, ipv4->hdr.dst_addr);
+
+ set_tun = (struct nfp_fl_act_set_tun *)(act_data + act_pre_size);
+ memset(set_tun, 0, act_set_size);
+ nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_VXLAN, vxlan->hdr.vx_vni,
+ ipv4->hdr.time_to_live, ipv4->hdr.type_of_service);
+ set_tun->tun_flags = vxlan->hdr.vx_flags;
+
+ /* Send the tunnel neighbor cmsg to fw */
+ return nfp_flower_add_tun_neigh_v4_encap(app_fw_flower, nfp_flow_meta,
+ tun, &eth->hdr, ipv4);
+}
+
+static int
+nfp_flow_action_vxlan_encap(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct rte_flow_action *action,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ size_t act_len;
+ size_t act_pre_size;
+ const struct vxlan_data *vxlan_data;
+
+ vxlan_data = action->conf;
+ if (vxlan_data->items[0].type != RTE_FLOW_ITEM_TYPE_ETH ||
+ vxlan_data->items[1].type != RTE_FLOW_ITEM_TYPE_IPV4 ||
+ vxlan_data->items[2].type != RTE_FLOW_ITEM_TYPE_UDP ||
+ vxlan_data->items[3].type != RTE_FLOW_ITEM_TYPE_VXLAN ||
+ vxlan_data->items[4].type != RTE_FLOW_ITEM_TYPE_END) {
+ PMD_DRV_LOG(ERR, "Not a valid vxlan action conf.");
+ return -EINVAL;
+ }
+
+ /*
+ * Pre_tunnel action must be the first on the action list.
+ * If other actions already exist, they need to be pushed forward.
+ */
+ act_len = act_data - actions;
+ if (act_len != 0) {
+ act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ memmove(actions + act_pre_size, actions, act_len);
+ }
+
+ if (vxlan_data->items[1].type == RTE_FLOW_ITEM_TYPE_IPV4)
+ return nfp_flow_action_vxlan_encap_v4(app_fw_flower, act_data,
+ actions, vxlan_data, nfp_flow_meta, tun);
+
+ return 0;
+}
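The pre-tunnel action must sit first in the action buffer, so any actions already emitted are shifted right by its size before it is written at the front. A self-contained sketch of that shift (the helper name and buffer sizes are illustrative; the driver does the same `memmove` inline):

```c
#include <string.h>

/* Shift 'used' bytes of already-built actions right by 'pre_size' so the
 * pre-tunnel action can be written at the front, as in
 * nfp_flow_action_vxlan_encap(). Caller guarantees the buffer is big
 * enough; memmove handles the overlapping regions. */
static void prepend_gap(char *actions, size_t used, size_t pre_size)
{
	if (used != 0)
		memmove(actions + pre_size, actions, used);
}
```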
+
+static int
+nfp_flow_compile_action(struct nfp_flower_representor *representor,
const struct rte_flow_action actions[],
struct rte_flow *nfp_flow)
{
@@ -2142,6 +2234,20 @@ struct nfp_mask_id_entry {
tc_hl_flag = true;
}
break;
+ case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
+ PMD_DRV_LOG(DEBUG, "Process RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP");
+ ret = nfp_flow_action_vxlan_encap(representor->app_fw_flower,
+ position, action_data, action, nfp_flow_meta,
+ &nfp_flow->tun);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to process"
+ " RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP");
+ return ret;
+ }
+ position += sizeof(struct nfp_fl_act_pre_tun);
+ position += sizeof(struct nfp_fl_act_set_tun);
+ nfp_flow->type = NFP_FLOW_ENCAP;
+ break;
default:
PMD_DRV_LOG(ERR, "Unsupported action type: %d", action->type);
return -ENOTSUP;
@@ -2418,6 +2524,22 @@ struct nfp_mask_id_entry {
goto exit;
}
+ switch (nfp_flow->type) {
+ case NFP_FLOW_COMMON:
+ break;
+ case NFP_FLOW_ENCAP:
+ /* Delete the entry from nn table */
+ ret = nfp_flower_del_tun_neigh(app_fw_flower, nfp_flow);
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "Invalid nfp flow type %d.", nfp_flow->type);
+ ret = -EINVAL;
+ break;
+ }
+
+ if (ret != 0)
+ goto exit;
+
/* Delete the flow from hardware */
if (nfp_flow->install_flag) {
ret = nfp_flower_cmsg_flow_delete(app_fw_flower, nfp_flow);
--
1.8.3.1
* [PATCH 06/25] net/nfp: add the offload support of IPv6 VXLAN encap action
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (4 preceding siblings ...)
2022-10-18 3:22 ` [PATCH 05/25] net/nfp: add the offload support of IPv4 VXLAN encap action Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 07/25] net/nfp: prepare for the decap action of IPv4 UDP tunnel Chaoyong He
` (20 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the offload support of encap action for IPv6 VXLAN tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/rel_notes/release_22_11.rst | 1 +
drivers/net/nfp/nfp_flow.c | 48 ++++++++++++++++++++++++++++++----
2 files changed, 44 insertions(+), 5 deletions(-)
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index c2bf721..351fb02 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -184,6 +184,7 @@ New Features
* Set the port number
* Set the TTL
* Set the DSCP of IPv4 and IPv6
+ * Encap of VXLAN tunnel
* **Updated NXP dpaa2 driver.**
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 2f04fdf..b9c37b6 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1817,7 +1817,7 @@ struct nfp_mask_id_entry {
pre_tun->ipv4_dst = ipv4_dst;
}
-__rte_unused static void
+static void
nfp_flow_pre_tun_v6_process(struct nfp_fl_act_pre_tun *pre_tun,
const uint8_t ipv6_dst[])
{
@@ -1903,7 +1903,7 @@ struct nfp_mask_id_entry {
return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
}
-__rte_unused static int
+static int
nfp_flower_add_tun_neigh_v6_encap(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_fl_rule_metadata *nfp_flow_meta,
struct nfp_fl_tun *tun,
@@ -2032,6 +2032,42 @@ struct nfp_mask_id_entry {
}
static int
+nfp_flow_action_vxlan_encap_v6(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct vxlan_data *vxlan_data,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ struct nfp_fl_act_pre_tun *pre_tun;
+ struct nfp_fl_act_set_tun *set_tun;
+ const struct rte_flow_item_eth *eth;
+ const struct rte_flow_item_ipv6 *ipv6;
+ const struct rte_flow_item_vxlan *vxlan;
+ size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
+
+ eth = (const struct rte_flow_item_eth *)vxlan_data->items[0].spec;
+ ipv6 = (const struct rte_flow_item_ipv6 *)vxlan_data->items[1].spec;
+ vxlan = (const struct rte_flow_item_vxlan *)vxlan_data->items[3].spec;
+
+ pre_tun = (struct nfp_fl_act_pre_tun *)actions;
+ memset(pre_tun, 0, act_pre_size);
+ nfp_flow_pre_tun_v6_process(pre_tun, ipv6->hdr.dst_addr);
+
+ set_tun = (struct nfp_fl_act_set_tun *)(act_data + act_pre_size);
+ memset(set_tun, 0, act_set_size);
+ nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_VXLAN, vxlan->hdr.vx_vni,
+ ipv6->hdr.hop_limits,
+ (ipv6->hdr.vtc_flow >> RTE_IPV6_HDR_TC_SHIFT) & 0xff);
+ set_tun->tun_flags = vxlan->hdr.vx_flags;
+
+ /* Send the tunnel neighbor cmsg to fw */
+ return nfp_flower_add_tun_neigh_v6_encap(app_fw_flower, nfp_flow_meta,
+ tun, &eth->hdr, ipv6);
+}
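For IPv6 the TTL/ToS slots of the set-tunnel action are filled from `hop_limits` and the traffic-class field, which occupies bits [27:20] of the version/TC/flow-label word (version in [31:28], flow label in [19:0]). A sketch of the extraction, using 20 for the shift as `RTE_IPV6_HDR_TC_SHIFT` is defined in the DPDK headers; the example treats the word as host-order for illustration:

```c
#include <assert.h>
#include <stdint.h>

#define IPV6_HDR_TC_SHIFT 20   /* mirrors RTE_IPV6_HDR_TC_SHIFT */

/* Extract the 8-bit traffic class from an IPv6 vtc_flow word. */
static uint8_t ipv6_traffic_class(uint32_t vtc_flow)
{
	return (vtc_flow >> IPV6_HDR_TC_SHIFT) & 0xff;
}
```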
+
+static int
nfp_flow_action_vxlan_encap(struct nfp_app_fw_flower *app_fw_flower,
char *act_data,
char *actions,
@@ -2045,7 +2081,8 @@ struct nfp_mask_id_entry {
vxlan_data = action->conf;
if (vxlan_data->items[0].type != RTE_FLOW_ITEM_TYPE_ETH ||
- vxlan_data->items[1].type != RTE_FLOW_ITEM_TYPE_IPV4 ||
+ (vxlan_data->items[1].type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+ vxlan_data->items[1].type != RTE_FLOW_ITEM_TYPE_IPV6) ||
vxlan_data->items[2].type != RTE_FLOW_ITEM_TYPE_UDP ||
vxlan_data->items[3].type != RTE_FLOW_ITEM_TYPE_VXLAN ||
vxlan_data->items[4].type != RTE_FLOW_ITEM_TYPE_END) {
@@ -2066,8 +2103,9 @@ struct nfp_mask_id_entry {
if (vxlan_data->items[1].type == RTE_FLOW_ITEM_TYPE_IPV4)
return nfp_flow_action_vxlan_encap_v4(app_fw_flower, act_data,
actions, vxlan_data, nfp_flow_meta, tun);
-
- return 0;
+ else
+ return nfp_flow_action_vxlan_encap_v6(app_fw_flower, act_data,
+ actions, vxlan_data, nfp_flow_meta, tun);
}
static int
--
1.8.3.1
* [PATCH 07/25] net/nfp: prepare for the decap action of IPv4 UDP tunnel
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (5 preceding siblings ...)
2022-10-18 3:22 ` [PATCH 06/25] net/nfp: add the offload support of IPv6 " Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 08/25] net/nfp: prepare for the decap action of IPv6 " Chaoyong He
` (19 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the related data structures and functions, in preparation for
the decap action of IPv4 UDP tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_cmsg.c | 118 ++++++++
drivers/net/nfp/flower/nfp_flower_cmsg.h | 94 +++++++
drivers/net/nfp/nfp_flow.c | 461 ++++++++++++++++++++++++++++++-
drivers/net/nfp/nfp_flow.h | 17 ++
4 files changed, 675 insertions(+), 15 deletions(-)
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c
index 8983178..f18f3de 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.c
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c
@@ -304,3 +304,121 @@
return 0;
}
+
+int
+nfp_flower_cmsg_tun_off_v4(struct nfp_app_fw_flower *app_fw_flower)
+{
+ uint16_t cnt;
+ uint32_t count = 0;
+ struct rte_mbuf *mbuf;
+ struct nfp_flow_priv *priv;
+ struct nfp_ipv4_addr_entry *entry;
+ struct nfp_flower_cmsg_tun_ipv4_addr *msg;
+
+ mbuf = rte_pktmbuf_alloc(app_fw_flower->ctrl_pktmbuf_pool);
+ if (mbuf == NULL) {
+ PMD_DRV_LOG(DEBUG, "Failed to alloc mbuf for v4 tun addr");
+ return -ENOMEM;
+ }
+
+ msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_TUN_IPS, sizeof(*msg));
+
+ priv = app_fw_flower->flow_priv;
+ rte_spinlock_lock(&priv->ipv4_off_lock);
+ LIST_FOREACH(entry, &priv->ipv4_off_list, next) {
+ if (count >= NFP_FL_IPV4_ADDRS_MAX) {
+ rte_spinlock_unlock(&priv->ipv4_off_lock);
+ PMD_DRV_LOG(ERR, "IPv4 offload exceeds limit.");
+ return -ERANGE;
+ }
+ msg->ipv4_addr[count] = entry->ipv4_addr;
+ count++;
+ }
+ msg->count = rte_cpu_to_be_32(count);
+ rte_spinlock_unlock(&priv->ipv4_off_lock);
+
+ cnt = nfp_flower_ctrl_vnic_xmit(app_fw_flower, mbuf);
+ if (cnt == 0) {
+ PMD_DRV_LOG(ERR, "Send cmsg through ctrl vnic failed.");
+ rte_pktmbuf_free(mbuf);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int
+nfp_flower_cmsg_pre_tunnel_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ uint16_t mac_idx,
+ bool is_del)
+{
+ uint16_t cnt;
+ struct rte_mbuf *mbuf;
+ struct nfp_flower_meta_tci *meta_tci;
+ struct nfp_flower_cmsg_pre_tun_rule *msg;
+
+ mbuf = rte_pktmbuf_alloc(app_fw_flower->ctrl_pktmbuf_pool);
+ if (mbuf == NULL) {
+ PMD_DRV_LOG(DEBUG, "Failed to alloc mbuf for pre tunnel rule");
+ return -ENOMEM;
+ }
+
+ msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_PRE_TUN_RULE, sizeof(*msg));
+
+ meta_tci = (struct nfp_flower_meta_tci *)((char *)nfp_flow_meta +
+ sizeof(struct nfp_fl_rule_metadata));
+ if (meta_tci->tci)
+ msg->vlan_tci = meta_tci->tci;
+ else
+ msg->vlan_tci = 0xffff;
+
+ if (is_del)
+ msg->flags = rte_cpu_to_be_32(NFP_TUN_PRE_TUN_RULE_DEL);
+
+ msg->port_idx = rte_cpu_to_be_16(mac_idx);
+ msg->host_ctx_id = nfp_flow_meta->host_ctx_id;
+
+ cnt = nfp_flower_ctrl_vnic_xmit(app_fw_flower, mbuf);
+ if (cnt == 0) {
+ PMD_DRV_LOG(ERR, "Send cmsg through ctrl vnic failed.");
+ rte_pktmbuf_free(mbuf);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int
+nfp_flower_cmsg_tun_mac_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_ether_addr *mac,
+ uint16_t mac_idx,
+ bool is_del)
+{
+ uint16_t cnt;
+ struct rte_mbuf *mbuf;
+ struct nfp_flower_cmsg_tun_mac *msg;
+
+ mbuf = rte_pktmbuf_alloc(app_fw_flower->ctrl_pktmbuf_pool);
+ if (mbuf == NULL) {
+ PMD_DRV_LOG(DEBUG, "Failed to alloc mbuf for tunnel mac");
+ return -ENOMEM;
+ }
+
+ msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_TUN_MAC, sizeof(*msg));
+
+ msg->count = rte_cpu_to_be_16(1);
+ msg->index = rte_cpu_to_be_16(mac_idx);
+ rte_ether_addr_copy(mac, &msg->addr);
+ if (is_del)
+ msg->flags = rte_cpu_to_be_16(NFP_TUN_MAC_OFFLOAD_DEL_FLAG);
+
+ cnt = nfp_flower_ctrl_vnic_xmit(app_fw_flower, mbuf);
+ if (cnt == 0) {
+ PMD_DRV_LOG(ERR, "Send cmsg through ctrl vnic failed.");
+ rte_pktmbuf_free(mbuf);
+ return -EIO;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index d1e0562..0933dac 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -195,6 +195,91 @@ struct nfp_flower_cmsg_tun_neigh_v6 {
struct nfp_flower_tun_neigh common;
};
+#define NFP_TUN_PRE_TUN_RULE_DEL (1 << 0)
+#define NFP_TUN_PRE_TUN_IDX_BIT (1 << 3)
+#define NFP_TUN_PRE_TUN_IPV6_BIT (1 << 7)
+
+/*
+ * NFP_FLOWER_CMSG_TYPE_PRE_TUN_RULE
+ * Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
+ * -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +---------------------------------------------------------------+
+ * 0 | FLAGS |
+ * +---------------------------------------------------------------+
+ * 1 | MAC_IDX | VLAN_ID |
+ * +---------------------------------------------------------------+
+ * 2 | HOST_CTX |
+ * +---------------------------------------------------------------+
+ */
+struct nfp_flower_cmsg_pre_tun_rule {
+ rte_be32_t flags;
+ rte_be16_t port_idx;
+ rte_be16_t vlan_tci;
+ rte_be32_t host_ctx_id;
+};
+
+#define NFP_TUN_MAC_OFFLOAD_DEL_FLAG 0x2
+
+/*
+ * NFP_FLOWER_CMSG_TYPE_TUN_MAC
+ * Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
+ * -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * Word +-----------------------+---+-+-+---------------+---------------+
+ * 0 | spare |NBI|D|F| Amount of MAC's in this msg |
+ * +---------------+-------+---+-+-+---------------+---------------+
+ * 1 | Index 0 | MAC[0] | MAC[1] |
+ * +---------------+---------------+---------------+---------------+
+ * 2 | MAC[2] | MAC[3] | MAC[4] | MAC[5] |
+ * +---------------+---------------+---------------+---------------+
+ * 3 | Index 1 | MAC[0] | MAC[1] |
+ * +---------------+---------------+---------------+---------------+
+ * 4 | MAC[2] | MAC[3] | MAC[4] | MAC[5] |
+ * +---------------+---------------+---------------+---------------+
+ * ...
+ * +---------------+---------------+---------------+---------------+
+ * 2N-1 | Index N | MAC[0] | MAC[1] |
+ * +---------------+---------------+---------------+---------------+
+ * 2N | MAC[2] | MAC[3] | MAC[4] | MAC[5] |
+ * +---------------+---------------+---------------+---------------+
+ *
+ * F: Flush bit. Set if entire table must be flushed. Rest of info in cmsg
+ * will be ignored. Not implemented.
+ * D: Delete bit. Set if entry must be deleted instead of added
+ * NBI: Network Block Interface. Set to 0
+ * The number of MAC addresses per control message is limited only by the
+ * packet buffer size. A 2048B buffer can fit 253 MAC addresses and a
+ * 10240B buffer 1277 MAC addresses.
+ */
+struct nfp_flower_cmsg_tun_mac {
+ rte_be16_t flags;
+ rte_be16_t count; /**< Should always be 1 */
+ rte_be16_t index;
+ struct rte_ether_addr addr;
+};
+
+#define NFP_FL_IPV4_ADDRS_MAX 32
+
+/*
+ * NFP_FLOWER_CMSG_TYPE_TUN_IPS
+ * Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
+ * -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +---------------------------------------------------------------+
+ * 0 | Number of IP Addresses |
+ * +---------------------------------------------------------------+
+ * 1 | IP Address #1 |
+ * +---------------------------------------------------------------+
+ * 2 | IP Address #2 |
+ * +---------------------------------------------------------------+
+ * | ... |
+ * +---------------------------------------------------------------+
+ * 32 | IP Address #32 |
+ * +---------------------------------------------------------------+
+ */
+struct nfp_flower_cmsg_tun_ipv4_addr {
+ rte_be32_t count;
+ rte_be32_t ipv4_addr[NFP_FL_IPV4_ADDRS_MAX];
+};
+
/*
* NFP_FLOWER_CMSG_TYPE_FLOW_STATS
* Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
@@ -716,5 +801,14 @@ int nfp_flower_cmsg_tun_neigh_v4_rule(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_flower_cmsg_tun_neigh_v4 *payload);
int nfp_flower_cmsg_tun_neigh_v6_rule(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_flower_cmsg_tun_neigh_v6 *payload);
+int nfp_flower_cmsg_tun_off_v4(struct nfp_app_fw_flower *app_fw_flower);
+int nfp_flower_cmsg_pre_tunnel_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ uint16_t mac_idx,
+ bool is_del);
+int nfp_flower_cmsg_tun_mac_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_ether_addr *mac,
+ uint16_t mac_idx,
+ bool is_del);
#endif /* _NFP_CMSG_H_ */
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index b9c37b6..816c733 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -47,7 +47,8 @@ struct nfp_flow_item_proc {
/* Size in bytes for @p mask_support and @p mask_default. */
const unsigned int mask_sz;
/* Merge a pattern item into a flow rule handle. */
- int (*merge)(struct rte_flow *nfp_flow,
+ int (*merge)(struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -63,6 +64,12 @@ struct nfp_mask_id_entry {
uint8_t mask_id;
};
+struct nfp_pre_tun_entry {
+ uint16_t mac_index;
+ uint16_t ref_cnt;
+ uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
+} __rte_aligned(32);
+
static inline struct nfp_flow_priv *
nfp_flow_dev_to_priv(struct rte_eth_dev *dev)
{
@@ -417,6 +424,83 @@ struct nfp_mask_id_entry {
return 0;
}
+__rte_unused static int
+nfp_tun_add_ipv4_off(struct nfp_app_fw_flower *app_fw_flower,
+ rte_be32_t ipv4)
+{
+ struct nfp_flow_priv *priv;
+ struct nfp_ipv4_addr_entry *entry;
+ struct nfp_ipv4_addr_entry *tmp_entry;
+
+ priv = app_fw_flower->flow_priv;
+
+ rte_spinlock_lock(&priv->ipv4_off_lock);
+ LIST_FOREACH(entry, &priv->ipv4_off_list, next) {
+ if (entry->ipv4_addr == ipv4) {
+ entry->ref_count++;
+ rte_spinlock_unlock(&priv->ipv4_off_lock);
+ return 0;
+ }
+ }
+ rte_spinlock_unlock(&priv->ipv4_off_lock);
+
+ tmp_entry = rte_zmalloc("nfp_ipv4_off", sizeof(struct nfp_ipv4_addr_entry), 0);
+ if (tmp_entry == NULL) {
+ PMD_DRV_LOG(ERR, "Mem error when offloading IP address.");
+ return -ENOMEM;
+ }
+
+ tmp_entry->ipv4_addr = ipv4;
+ tmp_entry->ref_count = 1;
+
+ rte_spinlock_lock(&priv->ipv4_off_lock);
+ LIST_INSERT_HEAD(&priv->ipv4_off_list, tmp_entry, next);
+ rte_spinlock_unlock(&priv->ipv4_off_lock);
+
+ return nfp_flower_cmsg_tun_off_v4(app_fw_flower);
+}
+
+static int
+nfp_tun_del_ipv4_off(struct nfp_app_fw_flower *app_fw_flower,
+ rte_be32_t ipv4)
+{
+ struct nfp_flow_priv *priv;
+ struct nfp_ipv4_addr_entry *entry;
+
+ priv = app_fw_flower->flow_priv;
+
+ rte_spinlock_lock(&priv->ipv4_off_lock);
+ LIST_FOREACH(entry, &priv->ipv4_off_list, next) {
+ if (entry->ipv4_addr == ipv4) {
+ entry->ref_count--;
+ if (entry->ref_count == 0) {
+ LIST_REMOVE(entry, next);
+ rte_free(entry);
+ rte_spinlock_unlock(&priv->ipv4_off_lock);
+ return nfp_flower_cmsg_tun_off_v4(app_fw_flower);
+ }
+ break;
+ }
+ }
+ rte_spinlock_unlock(&priv->ipv4_off_lock);
+
+ return 0;
+}
+
+static int
+nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr,
+ struct rte_flow *nfp_flow)
+{
+ int ret;
+ struct nfp_flower_ipv4_udp_tun *udp4;
+
+ udp4 = (struct nfp_flower_ipv4_udp_tun *)(nfp_flow->payload.mask_data -
+ sizeof(struct nfp_flower_ipv4_udp_tun));
+ ret = nfp_tun_del_ipv4_off(repr->app_fw_flower, udp4->ipv4.dst);
+
+ return ret;
+}
+
static void
nfp_flower_compile_meta_tci(char *mbuf_off, struct nfp_fl_key_ls *key_layer)
{
@@ -653,6 +737,9 @@ struct nfp_mask_id_entry {
case RTE_FLOW_ACTION_TYPE_RSS:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_RSS detected");
break;
+ case RTE_FLOW_ACTION_TYPE_JUMP:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_JUMP detected");
+ break;
case RTE_FLOW_ACTION_TYPE_PORT_ID:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_PORT_ID detected");
key_ls->act_size += sizeof(struct nfp_fl_act_output);
@@ -804,7 +891,8 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_merge_eth(__rte_unused struct rte_flow *nfp_flow,
+nfp_flow_merge_eth(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ __rte_unused struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -841,7 +929,8 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_merge_vlan(struct rte_flow *nfp_flow,
+nfp_flow_merge_vlan(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
__rte_unused char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -871,7 +960,8 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_merge_ipv4(struct rte_flow *nfp_flow,
+nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -932,7 +1022,8 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_merge_ipv6(struct rte_flow *nfp_flow,
+nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -997,7 +1088,8 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_merge_tcp(struct rte_flow *nfp_flow,
+nfp_flow_merge_tcp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -1070,7 +1162,8 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_merge_udp(struct rte_flow *nfp_flow,
+nfp_flow_merge_udp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -1118,7 +1211,8 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_merge_sctp(struct rte_flow *nfp_flow,
+nfp_flow_merge_sctp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -1160,7 +1254,8 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_merge_vxlan(struct rte_flow *nfp_flow,
+nfp_flow_merge_vxlan(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -1409,7 +1504,8 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_compile_item_proc(const struct rte_flow_item items[],
+nfp_flow_compile_item_proc(struct nfp_flower_representor *repr,
+ const struct rte_flow_item items[],
struct rte_flow *nfp_flow,
char **mbuf_off_exact,
char **mbuf_off_mask,
@@ -1420,6 +1516,7 @@ struct nfp_mask_id_entry {
bool continue_flag = true;
const struct rte_flow_item *item;
const struct nfp_flow_item_proc *proc_list;
+ struct nfp_app_fw_flower *app_fw_flower = repr->app_fw_flower;
proc_list = nfp_flow_item_proc_list;
for (item = items; item->type != RTE_FLOW_ITEM_TYPE_END && continue_flag; ++item) {
@@ -1455,14 +1552,14 @@ struct nfp_mask_id_entry {
break;
}
- ret = proc->merge(nfp_flow, mbuf_off_exact, item,
+ ret = proc->merge(app_fw_flower, nfp_flow, mbuf_off_exact, item,
proc, false, is_outer_layer);
if (ret != 0) {
PMD_DRV_LOG(ERR, "nfp flow item %d exact merge failed", item->type);
break;
}
- ret = proc->merge(nfp_flow, mbuf_off_mask, item,
+ ret = proc->merge(app_fw_flower, nfp_flow, mbuf_off_mask, item,
proc, true, is_outer_layer);
if (ret != 0) {
PMD_DRV_LOG(ERR, "nfp flow item %d mask merge failed", item->type);
@@ -1476,7 +1573,7 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_compile_items(__rte_unused struct nfp_flower_representor *representor,
+nfp_flow_compile_items(struct nfp_flower_representor *representor,
const struct rte_flow_item items[],
struct rte_flow *nfp_flow)
{
@@ -1507,7 +1604,7 @@ struct nfp_mask_id_entry {
is_outer_layer = false;
/* Go over items */
- ret = nfp_flow_compile_item_proc(loop_item, nfp_flow,
+ ret = nfp_flow_compile_item_proc(representor, loop_item, nfp_flow,
&mbuf_off_exact, &mbuf_off_mask, is_outer_layer);
if (ret != 0) {
PMD_DRV_LOG(ERR, "nfp flow item compile failed.");
@@ -1516,7 +1613,7 @@ struct nfp_mask_id_entry {
/* Go over inner items */
if (is_tun_flow) {
- ret = nfp_flow_compile_item_proc(items, nfp_flow,
+ ret = nfp_flow_compile_item_proc(representor, items, nfp_flow,
&mbuf_off_exact, &mbuf_off_mask, true);
if (ret != 0) {
PMD_DRV_LOG(ERR, "nfp flow outer item compile failed.");
@@ -1891,6 +1988,59 @@ struct nfp_mask_id_entry {
return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
}
+__rte_unused static int
+nfp_flower_add_tun_neigh_v4_decap(struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow)
+{
+ struct nfp_fl_tun *tmp;
+ struct nfp_fl_tun *tun;
+ struct nfp_flow_priv *priv;
+ struct nfp_flower_ipv4 *ipv4;
+ struct nfp_flower_mac_mpls *eth;
+ struct nfp_flower_in_port *port;
+ struct nfp_flower_meta_tci *meta_tci;
+ struct nfp_flower_cmsg_tun_neigh_v4 payload;
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ port = (struct nfp_flower_in_port *)(meta_tci + 1);
+ eth = (struct nfp_flower_mac_mpls *)(port + 1);
+
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+ ipv4 = (struct nfp_flower_ipv4 *)((char *)eth +
+ sizeof(struct nfp_flower_mac_mpls) +
+ sizeof(struct nfp_flower_tp_ports));
+ else
+ ipv4 = (struct nfp_flower_ipv4 *)((char *)eth +
+ sizeof(struct nfp_flower_mac_mpls));
+
+ tun = &nfp_flow->tun;
+ tun->payload.v6_flag = 0;
+ tun->payload.dst.dst_ipv4 = ipv4->ipv4_src;
+ tun->payload.src.src_ipv4 = ipv4->ipv4_dst;
+ memcpy(tun->payload.dst_addr, eth->mac_src, RTE_ETHER_ADDR_LEN);
+ memcpy(tun->payload.src_addr, eth->mac_dst, RTE_ETHER_ADDR_LEN);
+
+ tun->ref_cnt = 1;
+ priv = app_fw_flower->flow_priv;
+ LIST_FOREACH(tmp, &priv->nn_list, next) {
+ if (memcmp(&tmp->payload, &tun->payload, sizeof(struct nfp_fl_tun_entry)) == 0) {
+ tmp->ref_cnt++;
+ return 0;
+ }
+ }
+
+ LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+
+ memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v4));
+ payload.dst_ipv4 = ipv4->ipv4_src;
+ payload.src_ipv4 = ipv4->ipv4_dst;
+ memcpy(payload.common.dst_mac, eth->mac_src, RTE_ETHER_ADDR_LEN);
+ memcpy(payload.common.src_mac, eth->mac_dst, RTE_ETHER_ADDR_LEN);
+ payload.common.port_id = port->in_port;
+
+ return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
+}
+
static int
nfp_flower_del_tun_neigh_v4(struct nfp_app_fw_flower *app_fw_flower,
rte_be32_t ipv4)
@@ -2108,6 +2258,200 @@ struct nfp_mask_id_entry {
actions, vxlan_data, nfp_flow_meta, tun);
}
+static struct nfp_pre_tun_entry *
+nfp_pre_tun_table_search(struct nfp_flow_priv *priv,
+ char *hash_data,
+ uint32_t hash_len)
+{
+ int index;
+ uint32_t hash_key;
+ struct nfp_pre_tun_entry *mac_index;
+
+ hash_key = rte_jhash(hash_data, hash_len, priv->hash_seed);
+ index = rte_hash_lookup_data(priv->pre_tun_table, &hash_key, (void **)&mac_index);
+ if (index < 0) {
+ PMD_DRV_LOG(DEBUG, "Data NOT found in the hash table");
+ return NULL;
+ }
+
+ return mac_index;
+}
+
+static bool
+nfp_pre_tun_table_add(struct nfp_flow_priv *priv,
+ char *hash_data,
+ uint32_t hash_len)
+{
+ int ret;
+ uint32_t hash_key;
+
+ hash_key = rte_jhash(hash_data, hash_len, priv->hash_seed);
+ ret = rte_hash_add_key_data(priv->pre_tun_table, &hash_key, hash_data);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Add to pre tunnel table failed");
+ return false;
+ }
+
+ return true;
+}
+
+static bool
+nfp_pre_tun_table_delete(struct nfp_flow_priv *priv,
+ char *hash_data,
+ uint32_t hash_len)
+{
+ int ret;
+ uint32_t hash_key;
+
+ hash_key = rte_jhash(hash_data, hash_len, priv->hash_seed);
+ ret = rte_hash_del_key(priv->pre_tun_table, &hash_key);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "Delete from pre tunnel table failed");
+ return false;
+ }
+
+ return true;
+}
+
+__rte_unused static int
+nfp_pre_tun_table_check_add(struct nfp_flower_representor *repr,
+ uint16_t *index)
+{
+ uint16_t i;
+ uint32_t entry_size;
+ uint16_t mac_index = 1;
+ struct nfp_flow_priv *priv;
+ struct nfp_pre_tun_entry *entry;
+ struct nfp_pre_tun_entry *find_entry;
+
+ priv = repr->app_fw_flower->flow_priv;
+ if (priv->pre_tun_cnt >= NFP_TUN_PRE_TUN_RULE_LIMIT) {
+ PMD_DRV_LOG(ERR, "Pre tunnel table is full");
+ return -EINVAL;
+ }
+
+ entry_size = sizeof(struct nfp_pre_tun_entry);
+ entry = rte_zmalloc("nfp_pre_tun", entry_size, 0);
+ if (entry == NULL) {
+ PMD_DRV_LOG(ERR, "Memory alloc failed for pre tunnel table");
+ return -ENOMEM;
+ }
+
+ entry->ref_cnt = 1U;
+ memcpy(entry->mac_addr, repr->mac_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ /* 0 is considered a failed match */
+ for (i = 1; i < NFP_TUN_PRE_TUN_RULE_LIMIT; i++) {
+ if (priv->pre_tun_bitmap[i] == 0)
+ continue;
+ entry->mac_index = i;
+ find_entry = nfp_pre_tun_table_search(priv, (char *)entry, entry_size);
+ if (find_entry != NULL) {
+ find_entry->ref_cnt++;
+ *index = find_entry->mac_index;
+ rte_free(entry);
+ return 0;
+ }
+ }
+
+ for (i = 1; i < NFP_TUN_PRE_TUN_RULE_LIMIT; i++) {
+ if (priv->pre_tun_bitmap[i] == 0) {
+ priv->pre_tun_bitmap[i] = 1U;
+ mac_index = i;
+ break;
+ }
+ }
+
+ entry->mac_index = mac_index;
+ if (!nfp_pre_tun_table_add(priv, (char *)entry, entry_size)) {
+ rte_free(entry);
+ return -EINVAL;
+ }
+
+ *index = entry->mac_index;
+ priv->pre_tun_cnt++;
+ return 0;
+}
+
+static int
+nfp_pre_tun_table_check_del(struct nfp_flower_representor *repr,
+ struct rte_flow *nfp_flow)
+{
+ uint16_t i;
+ int ret = 0;
+ uint32_t entry_size;
+ uint16_t nfp_mac_idx;
+ struct nfp_flow_priv *priv;
+ struct nfp_pre_tun_entry *entry;
+ struct nfp_pre_tun_entry *find_entry = NULL;
+ struct nfp_fl_rule_metadata *nfp_flow_meta;
+
+ priv = repr->app_fw_flower->flow_priv;
+ if (priv->pre_tun_cnt == 1)
+ return 0;
+
+ entry_size = sizeof(struct nfp_pre_tun_entry);
+ entry = rte_zmalloc("nfp_pre_tun", entry_size, 0);
+ if (entry == NULL) {
+ PMD_DRV_LOG(ERR, "Memory alloc failed for pre tunnel table");
+ return -ENOMEM;
+ }
+
+ entry->ref_cnt = 1U;
+ memcpy(entry->mac_addr, repr->mac_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ /* 0 is considered a failed match */
+ for (i = 1; i < NFP_TUN_PRE_TUN_RULE_LIMIT; i++) {
+ if (priv->pre_tun_bitmap[i] == 0)
+ continue;
+ entry->mac_index = i;
+ find_entry = nfp_pre_tun_table_search(priv, (char *)entry, entry_size);
+ if (find_entry != NULL) {
+ find_entry->ref_cnt--;
+ if (find_entry->ref_cnt != 0)
+ goto free_entry;
+ priv->pre_tun_bitmap[i] = 0;
+ break;
+ }
+ }
+
+ if (find_entry == NULL)
+ goto free_entry;
+
+ nfp_flow_meta = nfp_flow->payload.meta;
+ nfp_mac_idx = (find_entry->mac_index << 8) |
+ NFP_FLOWER_CMSG_PORT_TYPE_OTHER_PORT |
+ NFP_TUN_PRE_TUN_IDX_BIT;
+ ret = nfp_flower_cmsg_tun_mac_rule(repr->app_fw_flower, &repr->mac_addr,
+ nfp_mac_idx, true);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Send tunnel mac rule failed");
+ ret = -EINVAL;
+ goto free_entry;
+ }
+
+ ret = nfp_flower_cmsg_pre_tunnel_rule(repr->app_fw_flower, nfp_flow_meta,
+ nfp_mac_idx, true);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Send pre tunnel rule failed");
+ ret = -EINVAL;
+ goto free_entry;
+ }
+
+ find_entry->ref_cnt = 1U;
+ if (!nfp_pre_tun_table_delete(priv, (char *)find_entry, entry_size)) {
+ PMD_DRV_LOG(ERR, "Delete entry from pre tunnel table failed");
+ ret = -EINVAL;
+ goto free_entry;
+ }
+
+ rte_free(entry);
+ rte_free(find_entry);
+ priv->pre_tun_cnt--;
+ return ret;
+
+free_entry:
+ rte_free(entry);
+
+ return ret;
+}
+
static int
nfp_flow_compile_action(struct nfp_flower_representor *representor,
const struct rte_flow_action actions[],
@@ -2149,6 +2493,9 @@ struct nfp_mask_id_entry {
case RTE_FLOW_ACTION_TYPE_RSS:
PMD_DRV_LOG(DEBUG, "Process RTE_FLOW_ACTION_TYPE_RSS");
break;
+ case RTE_FLOW_ACTION_TYPE_JUMP:
+ PMD_DRV_LOG(DEBUG, "Process RTE_FLOW_ACTION_TYPE_JUMP");
+ break;
case RTE_FLOW_ACTION_TYPE_PORT_ID:
PMD_DRV_LOG(DEBUG, "Process RTE_FLOW_ACTION_TYPE_PORT_ID");
ret = nfp_flow_action_output(position, action, nfp_flow_meta);
@@ -2569,6 +2916,15 @@ struct nfp_mask_id_entry {
/* Delete the entry from nn table */
ret = nfp_flower_del_tun_neigh(app_fw_flower, nfp_flow);
break;
+ case NFP_FLOW_DECAP:
+ /* Delete the entry from nn table */
+ ret = nfp_flower_del_tun_neigh(app_fw_flower, nfp_flow);
+ if (ret != 0)
+ goto exit;
+
+ /* Delete the entry in pre tunnel table */
+ ret = nfp_pre_tun_table_check_del(representor, nfp_flow);
+ break;
default:
PMD_DRV_LOG(ERR, "Invalid nfp flow type %d.", nfp_flow->type);
ret = -EINVAL;
@@ -2578,6 +2934,10 @@ struct nfp_mask_id_entry {
if (ret != 0)
goto exit;
+ /* Delete the ip off */
+ if (nfp_flow_is_tunnel(nfp_flow))
+ nfp_tun_check_ip_off_del(representor, nfp_flow);
+
/* Delete the flow from hardware */
if (nfp_flow->install_flag) {
ret = nfp_flower_cmsg_flow_delete(app_fw_flower, nfp_flow);
@@ -2707,6 +3067,49 @@ struct nfp_mask_id_entry {
return 0;
}
+static int
+nfp_flow_tunnel_decap_set(__rte_unused struct rte_eth_dev *dev,
+ struct rte_flow_tunnel *tunnel,
+ struct rte_flow_action **pmd_actions,
+ uint32_t *num_of_actions,
+ __rte_unused struct rte_flow_error *err)
+{
+ struct rte_flow_action *nfp_action;
+
+ nfp_action = rte_zmalloc("nfp_tun_action", sizeof(struct rte_flow_action), 0);
+ if (nfp_action == NULL) {
+ PMD_DRV_LOG(ERR, "Alloc memory for nfp tunnel action failed.");
+ return -ENOMEM;
+ }
+
+ switch (tunnel->type) {
+ default:
+ *pmd_actions = NULL;
+ *num_of_actions = 0;
+ rte_free(nfp_action);
+ break;
+ }
+
+ return 0;
+}
+
+static int
+nfp_flow_tunnel_action_decap_release(__rte_unused struct rte_eth_dev *dev,
+ struct rte_flow_action *pmd_actions,
+ uint32_t num_of_actions,
+ __rte_unused struct rte_flow_error *err)
+{
+ uint32_t i;
+ struct rte_flow_action *nfp_action;
+
+ for (i = 0; i < num_of_actions; i++) {
+ nfp_action = &pmd_actions[i];
+ rte_free(nfp_action);
+ }
+
+ return 0;
+}
+
static const struct rte_flow_ops nfp_flow_ops = {
.validate = nfp_flow_validate,
.create = nfp_flow_create,
@@ -2715,6 +3118,8 @@ struct nfp_mask_id_entry {
.query = nfp_flow_query,
.tunnel_match = nfp_flow_tunnel_match,
.tunnel_item_release = nfp_flow_tunnel_item_release,
+ .tunnel_decap_set = nfp_flow_tunnel_decap_set,
+ .tunnel_action_decap_release = nfp_flow_tunnel_action_decap_release,
};
int
@@ -2759,6 +3164,15 @@ struct nfp_mask_id_entry {
.extra_flag = RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY,
};
+ struct rte_hash_parameters pre_tun_hash_params = {
+ .name = "pre_tunnel_table",
+ .entries = 32,
+ .hash_func = rte_jhash,
+ .socket_id = rte_socket_id(),
+ .key_len = sizeof(uint32_t),
+ .extra_flag = RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY,
+ };
+
ctx_count = nfp_rtsym_read_le(pf_dev->sym_tbl,
"CONFIG_FC_HOST_CTX_COUNT", &ret);
if (ret < 0) {
@@ -2839,11 +3253,27 @@ struct nfp_mask_id_entry {
goto free_mask_table;
}
+ /* pre tunnel table */
+ priv->pre_tun_cnt = 1;
+ pre_tun_hash_params.hash_func_init_val = priv->hash_seed;
+ priv->pre_tun_table = rte_hash_create(&pre_tun_hash_params);
+ if (priv->pre_tun_table == NULL) {
+ PMD_INIT_LOG(ERR, "Pre tunnel table creation failed");
+ ret = -ENOMEM;
+ goto free_flow_table;
+ }
+
+ /* ipv4 off list */
+ rte_spinlock_init(&priv->ipv4_off_lock);
+ LIST_INIT(&priv->ipv4_off_list);
+
/* neighbor next list */
LIST_INIT(&priv->nn_list);
return 0;
+free_flow_table:
+ rte_hash_free(priv->flow_table);
free_mask_table:
rte_free(priv->mask_table);
free_stats:
@@ -2867,6 +3297,7 @@ struct nfp_mask_id_entry {
app_fw_flower = NFP_PRIV_TO_APP_FW_FLOWER(pf_dev->app_fw_priv);
priv = app_fw_flower->flow_priv;
+ rte_hash_free(priv->pre_tun_table);
rte_hash_free(priv->flow_table);
rte_hash_free(priv->mask_table);
rte_free(priv->stats);
diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h
index 14da800..84a3005 100644
--- a/drivers/net/nfp/nfp_flow.h
+++ b/drivers/net/nfp/nfp_flow.h
@@ -6,6 +6,7 @@
#ifndef _NFP_FLOW_H_
#define _NFP_FLOW_H_
+#include <sys/queue.h>
#include <ethdev_driver.h>
#define NFP_FLOWER_LAYER_EXT_META (1 << 0)
@@ -92,6 +93,7 @@ enum nfp_flower_tun_type {
enum nfp_flow_type {
NFP_FLOW_COMMON,
NFP_FLOW_ENCAP,
+ NFP_FLOW_DECAP,
};
struct nfp_fl_key_ls {
@@ -168,6 +170,14 @@ struct nfp_fl_stats {
uint64_t bytes;
};
+struct nfp_ipv4_addr_entry {
+ LIST_ENTRY(nfp_ipv4_addr_entry) next;
+ rte_be32_t ipv4_addr;
+ int ref_count;
+};
+
+#define NFP_TUN_PRE_TUN_RULE_LIMIT 32
+
struct nfp_flow_priv {
uint32_t hash_seed; /**< Hash seed for hash tables in this structure. */
uint64_t flower_version; /**< Flow version, always increase. */
@@ -183,6 +193,13 @@ struct nfp_flow_priv {
struct nfp_fl_stats_id stats_ids; /**< The stats id ring. */
struct nfp_fl_stats *stats; /**< Store stats of flow. */
rte_spinlock_t stats_lock; /** < Lock the update of 'stats' field. */
+ /* pre tunnel rule */
+ uint16_t pre_tun_cnt; /**< The number of pre tunnel rules */
+ uint8_t pre_tun_bitmap[NFP_TUN_PRE_TUN_RULE_LIMIT]; /**< Bitmap of pre tunnel rule */
+ struct rte_hash *pre_tun_table; /**< Hash table to store pre tunnel rule */
+ /* IPv4 off */
+ LIST_HEAD(, nfp_ipv4_addr_entry) ipv4_off_list; /**< Store ipv4 off */
+ rte_spinlock_t ipv4_off_lock; /**< Lock the ipv4 off list */
/* neighbor next */
LIST_HEAD(, nfp_fl_tun)nn_list; /**< Store nn entry */
};
--
1.8.3.1
* [PATCH 08/25] net/nfp: prepare for the decap action of IPv6 UDP tunnel
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (6 preceding siblings ...)
2022-10-18 3:22 ` [PATCH 07/25] net/nfp: prepare for the decap action of IPv4 UDP tunnel Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 09/25] net/nfp: add the offload support of IPv4 VXLAN decap action Chaoyong He
` (18 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the related data structures and functions to prepare for
the decap action of the IPv6 UDP tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_cmsg.c | 42 +++++++++
drivers/net/nfp/flower/nfp_flower_cmsg.h | 24 +++++
drivers/net/nfp/nfp_flow.c | 145 ++++++++++++++++++++++++++++++-
drivers/net/nfp/nfp_flow.h | 9 ++
4 files changed, 217 insertions(+), 3 deletions(-)
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c
index f18f3de..76815cf 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.c
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c
@@ -348,6 +348,48 @@
}
int
+nfp_flower_cmsg_tun_off_v6(struct nfp_app_fw_flower *app_fw_flower)
+{
+ uint16_t cnt;
+ uint32_t count = 0;
+ struct rte_mbuf *mbuf;
+ struct nfp_flow_priv *priv;
+ struct nfp_ipv6_addr_entry *entry;
+ struct nfp_flower_cmsg_tun_ipv6_addr *msg;
+
+ mbuf = rte_pktmbuf_alloc(app_fw_flower->ctrl_pktmbuf_pool);
+ if (mbuf == NULL) {
+ PMD_DRV_LOG(DEBUG, "Failed to alloc mbuf for v6 tun addr");
+ return -ENOMEM;
+ }
+
+ msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_TUN_IPS_V6, sizeof(*msg));
+
+ priv = app_fw_flower->flow_priv;
+ rte_spinlock_lock(&priv->ipv6_off_lock);
+ LIST_FOREACH(entry, &priv->ipv6_off_list, next) {
+ if (count >= NFP_FL_IPV6_ADDRS_MAX) {
+ rte_spinlock_unlock(&priv->ipv6_off_lock);
+ PMD_DRV_LOG(ERR, "IPv6 offload exceeds limit.");
+ return -ERANGE;
+ }
+ memcpy(&msg->ipv6_addr[count * 16], entry->ipv6_addr, 16UL);
+ count++;
+ }
+ msg->count = rte_cpu_to_be_32(count);
+ rte_spinlock_unlock(&priv->ipv6_off_lock);
+
+ cnt = nfp_flower_ctrl_vnic_xmit(app_fw_flower, mbuf);
+ if (cnt == 0) {
+ PMD_DRV_LOG(ERR, "Send cmsg through ctrl vnic failed.");
+ rte_pktmbuf_free(mbuf);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int
nfp_flower_cmsg_pre_tunnel_rule(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_fl_rule_metadata *nfp_flow_meta,
uint16_t mac_idx,
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index 0933dac..61f2f83 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -280,6 +280,29 @@ struct nfp_flower_cmsg_tun_ipv4_addr {
rte_be32_t ipv4_addr[NFP_FL_IPV4_ADDRS_MAX];
};
+#define NFP_FL_IPV6_ADDRS_MAX 4
+
+/*
+ * NFP_FLOWER_CMSG_TYPE_TUN_IP_V6
+ * Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
+ * -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +---------------------------------------------------------------+
+ * 0 | Number of IP Addresses |
+ * +---------------------------------------------------------------+
+ * 1 | IP Address1 #1 |
+ * +---------------------------------------------------------------+
+ * 2 | IP Address1 #2 |
+ * +---------------------------------------------------------------+
+ * | ... |
+ * +---------------------------------------------------------------+
+ * 16 | IP Address4 #4 |
+ * +---------------------------------------------------------------+
+ */
+struct nfp_flower_cmsg_tun_ipv6_addr {
+ rte_be32_t count;
+ uint8_t ipv6_addr[NFP_FL_IPV6_ADDRS_MAX * 16];
+};
+
/*
* NFP_FLOWER_CMSG_TYPE_FLOW_STATS
* Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
@@ -802,6 +825,7 @@ int nfp_flower_cmsg_tun_neigh_v4_rule(struct nfp_app_fw_flower *app_fw_flower,
int nfp_flower_cmsg_tun_neigh_v6_rule(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_flower_cmsg_tun_neigh_v6 *payload);
int nfp_flower_cmsg_tun_off_v4(struct nfp_app_fw_flower *app_fw_flower);
+int nfp_flower_cmsg_tun_off_v6(struct nfp_app_fw_flower *app_fw_flower);
int nfp_flower_cmsg_pre_tunnel_rule(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_fl_rule_metadata *nfp_flow_meta,
uint16_t mac_idx,
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 816c733..cc63aa5 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -487,16 +487,95 @@ struct nfp_pre_tun_entry {
return 0;
}
+__rte_unused static int
+nfp_tun_add_ipv6_off(struct nfp_app_fw_flower *app_fw_flower,
+ uint8_t ipv6[])
+{
+ struct nfp_flow_priv *priv;
+ struct nfp_ipv6_addr_entry *entry;
+ struct nfp_ipv6_addr_entry *tmp_entry;
+
+ priv = app_fw_flower->flow_priv;
+
+ rte_spinlock_lock(&priv->ipv6_off_lock);
+ LIST_FOREACH(entry, &priv->ipv6_off_list, next) {
+ if (!memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr))) {
+ entry->ref_count++;
+ rte_spinlock_unlock(&priv->ipv6_off_lock);
+ return 0;
+ }
+ }
+ rte_spinlock_unlock(&priv->ipv6_off_lock);
+
+ tmp_entry = rte_zmalloc("nfp_ipv6_off", sizeof(struct nfp_ipv6_addr_entry), 0);
+ if (tmp_entry == NULL) {
+ PMD_DRV_LOG(ERR, "Mem error when offloading IPv6 address.");
+ return -ENOMEM;
+ }
+ memcpy(tmp_entry->ipv6_addr, ipv6, sizeof(tmp_entry->ipv6_addr));
+ tmp_entry->ref_count = 1;
+
+ rte_spinlock_lock(&priv->ipv6_off_lock);
+ LIST_INSERT_HEAD(&priv->ipv6_off_list, tmp_entry, next);
+ rte_spinlock_unlock(&priv->ipv6_off_lock);
+
+ return nfp_flower_cmsg_tun_off_v6(app_fw_flower);
+}
+
+static int
+nfp_tun_del_ipv6_off(struct nfp_app_fw_flower *app_fw_flower,
+ uint8_t ipv6[])
+{
+ struct nfp_flow_priv *priv;
+ struct nfp_ipv6_addr_entry *entry;
+
+ priv = app_fw_flower->flow_priv;
+
+ rte_spinlock_lock(&priv->ipv6_off_lock);
+ LIST_FOREACH(entry, &priv->ipv6_off_list, next) {
+ if (!memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr))) {
+ entry->ref_count--;
+ if (entry->ref_count == 0) {
+ LIST_REMOVE(entry, next);
+ rte_free(entry);
+ rte_spinlock_unlock(&priv->ipv6_off_lock);
+ return nfp_flower_cmsg_tun_off_v6(app_fw_flower);
+ }
+ break;
+ }
+ }
+ rte_spinlock_unlock(&priv->ipv6_off_lock);
+
+ return 0;
+}
+
static int
nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr,
struct rte_flow *nfp_flow)
{
int ret;
+ uint32_t key_layer2 = 0;
struct nfp_flower_ipv4_udp_tun *udp4;
+ struct nfp_flower_ipv6_udp_tun *udp6;
+ struct nfp_flower_meta_tci *meta_tci;
+ struct nfp_flower_ext_meta *ext_meta = NULL;
- udp4 = (struct nfp_flower_ipv4_udp_tun *)(nfp_flow->payload.mask_data -
- sizeof(struct nfp_flower_ipv4_udp_tun));
- ret = nfp_tun_del_ipv4_off(repr->app_fw_flower, udp4->ipv4.dst);
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
+
+ if (ext_meta != NULL)
+ key_layer2 = rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2);
+
+ if (key_layer2 & NFP_FLOWER_LAYER2_TUN_IPV6) {
+ udp6 = (struct nfp_flower_ipv6_udp_tun *)(nfp_flow->payload.mask_data -
+ sizeof(struct nfp_flower_ipv6_udp_tun));
+ ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, udp6->ipv6.ipv6_dst);
+ } else {
+ udp4 = (struct nfp_flower_ipv4_udp_tun *)(nfp_flow->payload.mask_data -
+ sizeof(struct nfp_flower_ipv4_udp_tun));
+ ret = nfp_tun_del_ipv4_off(repr->app_fw_flower, udp4->ipv4.dst);
+ }
return ret;
}
@@ -2096,6 +2175,59 @@ struct nfp_pre_tun_entry {
return nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &payload);
}
+__rte_unused static int
+nfp_flower_add_tun_neigh_v6_decap(struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow)
+{
+ struct nfp_fl_tun *tmp;
+ struct nfp_fl_tun *tun;
+ struct nfp_flow_priv *priv;
+ struct nfp_flower_ipv6 *ipv6;
+ struct nfp_flower_mac_mpls *eth;
+ struct nfp_flower_in_port *port;
+ struct nfp_flower_meta_tci *meta_tci;
+ struct nfp_flower_cmsg_tun_neigh_v6 payload;
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ port = (struct nfp_flower_in_port *)(meta_tci + 1);
+ eth = (struct nfp_flower_mac_mpls *)(port + 1);
+
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+ ipv6 = (struct nfp_flower_ipv6 *)((char *)eth +
+ sizeof(struct nfp_flower_mac_mpls) +
+ sizeof(struct nfp_flower_tp_ports));
+ else
+ ipv6 = (struct nfp_flower_ipv6 *)((char *)eth +
+ sizeof(struct nfp_flower_mac_mpls));
+
+ tun = &nfp_flow->tun;
+ tun->payload.v6_flag = 1;
+ memcpy(tun->payload.dst.dst_ipv6, ipv6->ipv6_src, sizeof(tun->payload.dst.dst_ipv6));
+ memcpy(tun->payload.src.src_ipv6, ipv6->ipv6_dst, sizeof(tun->payload.src.src_ipv6));
+ memcpy(tun->payload.dst_addr, eth->mac_src, RTE_ETHER_ADDR_LEN);
+ memcpy(tun->payload.src_addr, eth->mac_dst, RTE_ETHER_ADDR_LEN);
+
+ tun->ref_cnt = 1;
+ priv = app_fw_flower->flow_priv;
+ LIST_FOREACH(tmp, &priv->nn_list, next) {
+ if (memcmp(&tmp->payload, &tun->payload, sizeof(struct nfp_fl_tun_entry)) == 0) {
+ tmp->ref_cnt++;
+ return 0;
+ }
+ }
+
+ LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+
+ memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v6));
+ memcpy(payload.dst_ipv6, ipv6->ipv6_src, sizeof(payload.dst_ipv6));
+ memcpy(payload.src_ipv6, ipv6->ipv6_dst, sizeof(payload.src_ipv6));
+ memcpy(payload.common.dst_mac, eth->mac_src, RTE_ETHER_ADDR_LEN);
+ memcpy(payload.common.src_mac, eth->mac_dst, RTE_ETHER_ADDR_LEN);
+ payload.common.port_id = port->in_port;
+
+ return nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &payload);
+}
+
static int
nfp_flower_del_tun_neigh_v6(struct nfp_app_fw_flower *app_fw_flower,
uint8_t *ipv6)
@@ -2419,6 +2551,9 @@ struct nfp_pre_tun_entry {
nfp_mac_idx = (find_entry->mac_index << 8) |
NFP_FLOWER_CMSG_PORT_TYPE_OTHER_PORT |
NFP_TUN_PRE_TUN_IDX_BIT;
+ if (nfp_flow->tun.payload.v6_flag != 0)
+ nfp_mac_idx |= NFP_TUN_PRE_TUN_IPV6_BIT;
+
ret = nfp_flower_cmsg_tun_mac_rule(repr->app_fw_flower, &repr->mac_addr,
nfp_mac_idx, true);
if (ret != 0) {
@@ -3267,6 +3402,10 @@ struct nfp_pre_tun_entry {
rte_spinlock_init(&priv->ipv4_off_lock);
LIST_INIT(&priv->ipv4_off_list);
+ /* ipv6 off list */
+ rte_spinlock_init(&priv->ipv6_off_lock);
+ LIST_INIT(&priv->ipv6_off_list);
+
/* neighbor next list */
LIST_INIT(&priv->nn_list);
diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h
index 84a3005..1b4a51f 100644
--- a/drivers/net/nfp/nfp_flow.h
+++ b/drivers/net/nfp/nfp_flow.h
@@ -176,6 +176,12 @@ struct nfp_ipv4_addr_entry {
int ref_count;
};
+struct nfp_ipv6_addr_entry {
+ LIST_ENTRY(nfp_ipv6_addr_entry) next;
+ uint8_t ipv6_addr[16];
+ int ref_count;
+};
+
#define NFP_TUN_PRE_TUN_RULE_LIMIT 32
struct nfp_flow_priv {
@@ -200,6 +206,9 @@ struct nfp_flow_priv {
/* IPv4 off */
LIST_HEAD(, nfp_ipv4_addr_entry) ipv4_off_list; /**< Store ipv4 off */
rte_spinlock_t ipv4_off_lock; /**< Lock the ipv4 off list */
+ /* IPv6 off */
+ LIST_HEAD(, nfp_ipv6_addr_entry) ipv6_off_list; /**< Store ipv6 off */
+ rte_spinlock_t ipv6_off_lock; /**< Lock the ipv6 off list */
/* neighbor next */
LIST_HEAD(, nfp_fl_tun)nn_list; /**< Store nn entry */
};
--
1.8.3.1
^ permalink raw reply [flat|nested] 88+ messages in thread
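The add/del helpers above implement a ref-counted "offloaded address" list: an IPv6 address is pushed to firmware on first use, later users only bump the count, and the entry (and firmware state) is torn down when the count drops to zero. A minimal, single-threaded model of that bookkeeping, with illustrative names and without the spinlock or firmware cmsg:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for nfp_ipv6_addr_entry: singly linked, ref-counted. */
struct ip6_entry {
	struct ip6_entry *next;
	uint8_t addr[16];
	int ref_count;
};

static struct ip6_entry *ip6_list;

/* Mirror of nfp_tun_add_ipv6_off(): bump an existing entry, else insert. */
static int
ip6_add(const uint8_t addr[16])
{
	struct ip6_entry *e;

	for (e = ip6_list; e != NULL; e = e->next) {
		if (memcmp(e->addr, addr, 16) == 0) {
			e->ref_count++;
			return 0;
		}
	}

	e = calloc(1, sizeof(*e));
	if (e == NULL)
		return -1;
	memcpy(e->addr, addr, 16);
	e->ref_count = 1;
	e->next = ip6_list;
	ip6_list = e;
	return 0; /* the real code sends the updated list to fw here */
}

/* Mirror of nfp_tun_del_ipv6_off(): free the entry when refs hit zero. */
static void
ip6_del(const uint8_t addr[16])
{
	struct ip6_entry **pp;

	for (pp = &ip6_list; *pp != NULL; pp = &(*pp)->next) {
		if (memcmp((*pp)->addr, addr, 16) == 0) {
			if (--(*pp)->ref_count == 0) {
				struct ip6_entry *dead = *pp;
				*pp = dead->next;
				free(dead);
			}
			return;
		}
	}
}

/* Helper for inspection: current ref count, 0 if absent. */
static int
ip6_refs(const uint8_t addr[16])
{
	struct ip6_entry *e;

	for (e = ip6_list; e != NULL; e = e->next)
		if (memcmp(e->addr, addr, 16) == 0)
			return e->ref_count;
	return 0;
}
```

The driver version additionally holds `ipv6_off_lock` around every list walk and re-sends the whole list to firmware via `nfp_flower_cmsg_tun_off_v6()` whenever membership changes.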
* [PATCH 09/25] net/nfp: add the offload support of IPv4 VXLAN decap action
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (7 preceding siblings ...)
2022-10-18 3:22 ` [PATCH 08/25] net/nfp: prepare for the decap action of IPv6 " Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 10/25] net/nfp: add the offload support of IPv6 " Chaoyong He
` (17 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the offload support of decap action for IPv4 VXLAN tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 1 +
drivers/net/nfp/nfp_flow.c | 75 +++++++++++++++++++++++++++++++++++++---
2 files changed, 71 insertions(+), 5 deletions(-)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index fbfd5ba..3b5b052 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -58,4 +58,5 @@ set_mac_src = Y
set_tp_dst = Y
set_tp_src = Y
set_ttl = Y
+vxlan_decap = Y
vxlan_encap = Y
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index cc63aa5..0578679 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -424,7 +424,7 @@ struct nfp_pre_tun_entry {
return 0;
}
-__rte_unused static int
+static int
nfp_tun_add_ipv4_off(struct nfp_app_fw_flower *app_fw_flower,
rte_be32_t ipv4)
{
@@ -926,6 +926,9 @@ struct nfp_pre_tun_entry {
key_ls->act_size += sizeof(struct nfp_fl_act_pre_tun);
key_ls->act_size += sizeof(struct nfp_fl_act_set_tun);
break;
+ case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_VXLAN_DECAP detected");
+ break;
default:
PMD_DRV_LOG(ERR, "Action type %d not supported.", action->type);
return -ENOTSUP;
@@ -1333,7 +1336,7 @@ struct nfp_pre_tun_entry {
}
static int
-nfp_flow_merge_vxlan(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+nfp_flow_merge_vxlan(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
@@ -1341,6 +1344,7 @@ struct nfp_pre_tun_entry {
bool is_mask,
__rte_unused bool is_outer_layer)
{
+ int ret = 0;
const struct rte_vxlan_hdr *hdr;
struct nfp_flower_ipv4_udp_tun *tun4;
struct nfp_flower_ipv6_udp_tun *tun6;
@@ -1369,6 +1373,8 @@ struct nfp_pre_tun_entry {
} else {
tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
tun4->tun_id = hdr->vx_vni;
+ if (!is_mask)
+ ret = nfp_tun_add_ipv4_off(app_fw_flower, tun4->ipv4.dst);
}
vxlan_end:
@@ -1378,7 +1384,7 @@ struct nfp_pre_tun_entry {
else
*mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
- return 0;
+ return ret;
}
/* Graph of supported items and associated process function */
@@ -2067,7 +2073,7 @@ struct nfp_pre_tun_entry {
return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
}
-__rte_unused static int
+static int
nfp_flower_add_tun_neigh_v4_decap(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *nfp_flow)
{
@@ -2445,7 +2451,7 @@ struct nfp_pre_tun_entry {
return true;
}
-__rte_unused static int
+static int
nfp_pre_tun_table_check_add(struct nfp_flower_representor *repr,
uint16_t *index)
{
@@ -2588,6 +2594,49 @@ struct nfp_pre_tun_entry {
}
static int
+nfp_flow_action_tunnel_decap(struct nfp_flower_representor *repr,
+ __rte_unused const struct rte_flow_action *action,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct rte_flow *nfp_flow)
+{
+ int ret;
+ uint16_t nfp_mac_idx = 0;
+ struct nfp_flower_meta_tci *meta_tci;
+ struct nfp_app_fw_flower *app_fw_flower;
+
+ ret = nfp_pre_tun_table_check_add(repr, &nfp_mac_idx);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Pre tunnel table add failed");
+ return -EINVAL;
+ }
+
+ nfp_mac_idx = (nfp_mac_idx << 8) |
+ NFP_FLOWER_CMSG_PORT_TYPE_OTHER_PORT |
+ NFP_TUN_PRE_TUN_IDX_BIT;
+
+ app_fw_flower = repr->app_fw_flower;
+ ret = nfp_flower_cmsg_tun_mac_rule(app_fw_flower, &repr->mac_addr,
+ nfp_mac_idx, false);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Send tunnel mac rule failed");
+ return -EINVAL;
+ }
+
+ ret = nfp_flower_cmsg_pre_tunnel_rule(app_fw_flower, nfp_flow_meta,
+ nfp_mac_idx, false);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Send pre tunnel rule failed");
+ return -EINVAL;
+ }
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4)
+ return nfp_flower_add_tun_neigh_v4_decap(app_fw_flower, nfp_flow);
+ else
+ return -ENOTSUP;
+}
+
+static int
nfp_flow_compile_action(struct nfp_flower_representor *representor,
const struct rte_flow_action actions[],
struct rte_flow *nfp_flow)
@@ -2768,6 +2817,17 @@ struct nfp_pre_tun_entry {
position += sizeof(struct nfp_fl_act_set_tun);
nfp_flow->type = NFP_FLOW_ENCAP;
break;
+ case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
+ PMD_DRV_LOG(DEBUG, "Process RTE_FLOW_ACTION_TYPE_VXLAN_DECAP");
+ ret = nfp_flow_action_tunnel_decap(representor, action,
+ nfp_flow_meta, nfp_flow);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to process tunnel decap");
+ return ret;
+ }
+ nfp_flow->type = NFP_FLOW_DECAP;
+ nfp_flow->install_flag = false;
+ break;
default:
PMD_DRV_LOG(ERR, "Unsupported action type: %d", action->type);
return -ENOTSUP;
@@ -3218,6 +3278,11 @@ struct nfp_pre_tun_entry {
}
switch (tunnel->type) {
+ case RTE_FLOW_ITEM_TYPE_VXLAN:
+ nfp_action->type = RTE_FLOW_ACTION_TYPE_VXLAN_DECAP;
+ *pmd_actions = nfp_action;
+ *num_of_actions = 1;
+ break;
default:
*pmd_actions = NULL;
*num_of_actions = 0;
--
1.8.3.1
^ permalink raw reply [flat|nested] 88+ messages in thread
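The decap path above packs the pre-tunnel MAC-table index and two flag bits into one 16-bit field: the index lands in the high byte, and the port-type / pre-tun-idx flags in the low byte (with an IPv6 flag added in the follow-up patch). A sketch of that packing; the flag values below are hypothetical stand-ins, not the firmware's real constants:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for NFP_FLOWER_CMSG_PORT_TYPE_OTHER_PORT,
 * NFP_TUN_PRE_TUN_IDX_BIT and NFP_TUN_PRE_TUN_IPV6_BIT. */
#define PORT_TYPE_OTHER_PORT 0x4
#define PRE_TUN_IDX_BIT      0x8
#define PRE_TUN_IPV6_BIT     0x2

/* Same shape as the driver's "nfp_mac_idx = (idx << 8) | flags" packing. */
static uint16_t
pack_mac_idx(uint8_t mac_index, int is_ipv6)
{
	uint16_t v = ((uint16_t)mac_index << 8) |
		     PORT_TYPE_OTHER_PORT | PRE_TUN_IDX_BIT;

	if (is_ipv6)
		v |= PRE_TUN_IPV6_BIT;
	return v;
}
```

The packed value is then handed to both `nfp_flower_cmsg_tun_mac_rule()` and `nfp_flower_cmsg_pre_tunnel_rule()`, so the firmware can associate the representor MAC with the pre-tunnel table slot.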
* [PATCH 10/25] net/nfp: add the offload support of IPv6 VXLAN decap action
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (8 preceding siblings ...)
2022-10-18 3:22 ` [PATCH 09/25] net/nfp: add the offload support of IPv4 VXLAN decap action Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 11/25] net/nfp: add the offload support of IPv4 GENEVE encap action Chaoyong He
` (16 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the offload support of decap action for IPv6 VXLAN tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/rel_notes/release_22_11.rst | 2 +-
drivers/net/nfp/nfp_flow.c | 18 ++++++++++++++----
2 files changed, 15 insertions(+), 5 deletions(-)
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 351fb02..49e92cc 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -184,7 +184,7 @@ New Features
* Set the port number
* Set the TTL
* Set the DSCP of IPv4 and IPv6
- * Encap of VXLAN tunnel
+ * Encap and decap of VXLAN tunnel
* **Updated NXP dpaa2 driver.**
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 0578679..e212e96 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -487,7 +487,7 @@ struct nfp_pre_tun_entry {
return 0;
}
-__rte_unused static int
+static int
nfp_tun_add_ipv6_off(struct nfp_app_fw_flower *app_fw_flower,
uint8_t ipv6[])
{
@@ -1370,6 +1370,8 @@ struct nfp_pre_tun_entry {
NFP_FLOWER_LAYER2_TUN_IPV6)) {
tun6 = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
tun6->tun_id = hdr->vx_vni;
+ if (!is_mask)
+ ret = nfp_tun_add_ipv6_off(app_fw_flower, tun6->ipv6.ipv6_dst);
} else {
tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
tun4->tun_id = hdr->vx_vni;
@@ -2181,7 +2183,7 @@ struct nfp_pre_tun_entry {
return nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &payload);
}
-__rte_unused static int
+static int
nfp_flower_add_tun_neigh_v6_decap(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *nfp_flow)
{
@@ -2595,7 +2597,7 @@ struct nfp_pre_tun_entry {
static int
nfp_flow_action_tunnel_decap(struct nfp_flower_representor *repr,
- __rte_unused const struct rte_flow_action *action,
+ const struct rte_flow_action *action,
struct nfp_fl_rule_metadata *nfp_flow_meta,
struct rte_flow *nfp_flow)
{
@@ -2613,6 +2615,8 @@ struct nfp_pre_tun_entry {
nfp_mac_idx = (nfp_mac_idx << 8) |
NFP_FLOWER_CMSG_PORT_TYPE_OTHER_PORT |
NFP_TUN_PRE_TUN_IDX_BIT;
+ if (action->conf != NULL)
+ nfp_mac_idx |= NFP_TUN_PRE_TUN_IPV6_BIT;
app_fw_flower = repr->app_fw_flower;
ret = nfp_flower_cmsg_tun_mac_rule(app_fw_flower, &repr->mac_addr,
@@ -2633,7 +2637,7 @@ struct nfp_pre_tun_entry {
if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4)
return nfp_flower_add_tun_neigh_v4_decap(app_fw_flower, nfp_flow);
else
- return -ENOTSUP;
+ return nfp_flower_add_tun_neigh_v6_decap(app_fw_flower, nfp_flow);
}
static int
@@ -2827,6 +2831,8 @@ struct nfp_pre_tun_entry {
}
nfp_flow->type = NFP_FLOW_DECAP;
nfp_flow->install_flag = false;
+ if (action->conf != NULL)
+ nfp_flow->tun.payload.v6_flag = 1;
break;
default:
PMD_DRV_LOG(ERR, "Unsupported action type: %d", action->type);
@@ -3277,6 +3283,9 @@ struct nfp_pre_tun_entry {
return -ENOMEM;
}
+ if (tunnel->is_ipv6)
+ nfp_action->conf = (void *)~0;
+
switch (tunnel->type) {
case RTE_FLOW_ITEM_TYPE_VXLAN:
nfp_action->type = RTE_FLOW_ACTION_TYPE_VXLAN_DECAP;
@@ -3304,6 +3313,7 @@ struct nfp_pre_tun_entry {
for (i = 0; i < num_of_actions; i++) {
nfp_action = &pmd_actions[i];
+ nfp_action->conf = NULL;
rte_free(nfp_action);
}
--
1.8.3.1
^ permalink raw reply [flat|nested] 88+ messages in thread
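Notice the trick in this patch: the otherwise-unused `conf` pointer of the synthesized decap action is used as a boolean, with the non-NULL sentinel `(void *)~0` marking the tunnel as IPv6. A minimal sketch of that convention, using illustrative names rather than the rte_flow structures:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in for struct rte_flow_action. */
struct flow_action {
	int type;
	const void *conf;
};

/* Non-NULL sentinel, as set by tunnel_match when tunnel->is_ipv6. */
#define TUN_IS_IPV6 ((const void *)~(uintptr_t)0)

/* The decap handler only checks conf for NULL vs non-NULL. */
static int
tun_decap_is_ipv6(const struct flow_action *act)
{
	return act->conf != NULL;
}
```

Because the sentinel is never dereferenced, the PMD must also reset `conf` to NULL before freeing the action in the tunnel-action-release path, exactly as the hunk at the end of this patch does.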
* [PATCH 11/25] net/nfp: add the offload support of IPv4 GENEVE encap action
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (9 preceding siblings ...)
2022-10-18 3:22 ` [PATCH 10/25] net/nfp: add the offload support of IPv6 " Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 12/25] net/nfp: add the offload support of IPv6 " Chaoyong He
` (15 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the offload support of encap action for IPv4 GENEVE tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 1 +
drivers/net/nfp/nfp_flow.c | 107 +++++++++++++++++++++++++++++++++++++++
2 files changed, 108 insertions(+)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index 3b5b052..171b633 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -46,6 +46,7 @@ of_pop_vlan = Y
of_push_vlan = Y
of_set_vlan_pcp = Y
of_set_vlan_vid = Y
+raw_encap = Y
port_id = Y
set_ipv4_dscp = Y
set_ipv4_dst = Y
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index e212e96..265c7e8 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -38,6 +38,12 @@ struct vxlan_data {
__VA_ARGS__, RTE_FLOW_ITEM_TYPE_END, \
})
+/* Data length of various conf of raw encap action */
+#define GENEVE_V4_LEN (sizeof(struct rte_ether_hdr) + \
+ sizeof(struct rte_ipv4_hdr) + \
+ sizeof(struct rte_udp_hdr) + \
+ sizeof(struct rte_flow_item_geneve))
+
/* Process structure associated with a flow item */
struct nfp_flow_item_proc {
/* Bit-mask for fields supported by this PMD. */
@@ -926,6 +932,11 @@ struct nfp_pre_tun_entry {
key_ls->act_size += sizeof(struct nfp_fl_act_pre_tun);
key_ls->act_size += sizeof(struct nfp_fl_act_set_tun);
break;
+ case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_RAW_ENCAP detected");
+ key_ls->act_size += sizeof(struct nfp_fl_act_pre_tun);
+ key_ls->act_size += sizeof(struct nfp_fl_act_set_tun);
+ break;
case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_VXLAN_DECAP detected");
break;
@@ -2641,6 +2652,88 @@ struct nfp_pre_tun_entry {
}
static int
+nfp_flow_action_geneve_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct rte_flow_action_raw_encap *raw_encap,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ uint64_t tun_id;
+ const struct rte_ether_hdr *eth;
+ const struct rte_flow_item_udp *udp;
+ const struct rte_flow_item_ipv4 *ipv4;
+ const struct rte_flow_item_geneve *geneve;
+ struct nfp_fl_act_pre_tun *pre_tun;
+ struct nfp_fl_act_set_tun *set_tun;
+ size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
+
+ eth = (const struct rte_ether_hdr *)raw_encap->data;
+ ipv4 = (const struct rte_flow_item_ipv4 *)(eth + 1);
+ udp = (const struct rte_flow_item_udp *)(ipv4 + 1);
+ geneve = (const struct rte_flow_item_geneve *)(udp + 1);
+
+ pre_tun = (struct nfp_fl_act_pre_tun *)actions;
+ memset(pre_tun, 0, act_pre_size);
+ nfp_flow_pre_tun_v4_process(pre_tun, ipv4->hdr.dst_addr);
+
+ set_tun = (struct nfp_fl_act_set_tun *)(act_data + act_pre_size);
+ memset(set_tun, 0, act_set_size);
+ tun_id = (geneve->vni[0] << 16) | (geneve->vni[1] << 8) | geneve->vni[2];
+ nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GENEVE, tun_id,
+ ipv4->hdr.time_to_live, ipv4->hdr.type_of_service);
+ set_tun->tun_proto = geneve->protocol;
+
+ /* Send the tunnel neighbor cmsg to fw */
+ return nfp_flower_add_tun_neigh_v4_encap(app_fw_flower, nfp_flow_meta,
+ tun, eth, ipv4);
+}
+
+static int
+nfp_flow_action_raw_encap(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct rte_flow_action *action,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ int ret;
+ size_t act_len;
+ size_t act_pre_size;
+ const struct rte_flow_action_raw_encap *raw_encap;
+
+ raw_encap = action->conf;
+ if (raw_encap->data == NULL) {
+ PMD_DRV_LOG(ERR, "The raw encap action conf is NULL.");
+ return -EINVAL;
+ }
+
+ /* The pre_tunnel action must be the first in the action list.
+ * If other actions already exist, they need to be
+ * pushed forward.
+ */
+ act_len = act_data - actions;
+ if (act_len != 0) {
+ act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ memmove(actions + act_pre_size, actions, act_len);
+ }
+
+ switch (raw_encap->size) {
+ case GENEVE_V4_LEN:
+ ret = nfp_flow_action_geneve_encap_v4(app_fw_flower, act_data,
+ actions, raw_encap, nfp_flow_meta, tun);
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "Not a valid raw encap action conf.");
+ ret = -EINVAL;
+ break;
+ }
+
+ return ret;
+}
+
+static int
nfp_flow_compile_action(struct nfp_flower_representor *representor,
const struct rte_flow_action actions[],
struct rte_flow *nfp_flow)
@@ -2821,6 +2914,20 @@ struct nfp_pre_tun_entry {
position += sizeof(struct nfp_fl_act_set_tun);
nfp_flow->type = NFP_FLOW_ENCAP;
break;
+ case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+ PMD_DRV_LOG(DEBUG, "Process RTE_FLOW_ACTION_TYPE_RAW_ENCAP");
+ ret = nfp_flow_action_raw_encap(representor->app_fw_flower,
+ position, action_data, action, nfp_flow_meta,
+ &nfp_flow->tun);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to process"
+ " RTE_FLOW_ACTION_TYPE_RAW_ENCAP");
+ return ret;
+ }
+ position += sizeof(struct nfp_fl_act_pre_tun);
+ position += sizeof(struct nfp_fl_act_set_tun);
+ nfp_flow->type = NFP_FLOW_ENCAP;
+ break;
case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
PMD_DRV_LOG(DEBUG, "Process RTE_FLOW_ACTION_TYPE_VXLAN_DECAP");
ret = nfp_flow_action_tunnel_decap(representor, action,
--
1.8.3.1
^ permalink raw reply [flat|nested] 88+ messages in thread
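GENEVE carries a 24-bit VNI as three separate bytes; the encap handler above folds it into a scalar tunnel id with plain shifts (`(vni[0] << 16) | (vni[1] << 8) | vni[2]`). A self-contained model of that conversion:

```c
#include <assert.h>
#include <stdint.h>

/* Fold the 3-byte GENEVE VNI into a 24-bit scalar, most significant
 * byte first, as done for set_tun->tun_id in the patch above. */
static uint32_t
geneve_vni_to_tun_id(const uint8_t vni[3])
{
	return ((uint32_t)vni[0] << 16) |
	       ((uint32_t)vni[1] << 8) |
	       (uint32_t)vni[2];
}
```

The match-side code in the later GENEVE-item patch performs the same packing, additionally converting the result to big endian with `rte_cpu_to_be_32()` before writing it into the lookup key.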
* [PATCH 12/25] net/nfp: add the offload support of IPv6 GENEVE encap action
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (10 preceding siblings ...)
2022-10-18 3:22 ` [PATCH 11/25] net/nfp: add the offload support of IPv4 GENEVE encap action Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 13/25] net/nfp: add the offload support of IPv4 GENEVE item Chaoyong He
` (14 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the offload support of encap action for IPv6 GENEVE tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/rel_notes/release_22_11.rst | 1 +
drivers/net/nfp/nfp_flow.c | 49 ++++++++++++++++++++++++++++++++++
2 files changed, 50 insertions(+)
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 49e92cc..92a0d64 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -185,6 +185,7 @@ New Features
* Set the TTL
* Set the DSCP of IPv4 and IPv6
* Encap and decap of VXLAN tunnel
+ * Encap of GENEVE tunnel
* **Updated NXP dpaa2 driver.**
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 265c7e8..d93883a 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -43,6 +43,10 @@ struct vxlan_data {
sizeof(struct rte_ipv4_hdr) + \
sizeof(struct rte_udp_hdr) + \
sizeof(struct rte_flow_item_geneve))
+#define GENEVE_V6_LEN (sizeof(struct rte_ether_hdr) + \
+ sizeof(struct rte_ipv6_hdr) + \
+ sizeof(struct rte_udp_hdr) + \
+ sizeof(struct rte_flow_item_geneve))
/* Process structure associated with a flow item */
struct nfp_flow_item_proc {
@@ -2691,6 +2695,47 @@ struct nfp_pre_tun_entry {
}
static int
+nfp_flow_action_geneve_encap_v6(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct rte_flow_action_raw_encap *raw_encap,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ uint8_t tos;
+ uint64_t tun_id;
+ const struct rte_ether_hdr *eth;
+ const struct rte_flow_item_udp *udp;
+ const struct rte_flow_item_ipv6 *ipv6;
+ const struct rte_flow_item_geneve *geneve;
+ struct nfp_fl_act_pre_tun *pre_tun;
+ struct nfp_fl_act_set_tun *set_tun;
+ size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
+
+ eth = (const struct rte_ether_hdr *)raw_encap->data;
+ ipv6 = (const struct rte_flow_item_ipv6 *)(eth + 1);
+ udp = (const struct rte_flow_item_udp *)(ipv6 + 1);
+ geneve = (const struct rte_flow_item_geneve *)(udp + 1);
+
+ pre_tun = (struct nfp_fl_act_pre_tun *)actions;
+ memset(pre_tun, 0, act_pre_size);
+ nfp_flow_pre_tun_v6_process(pre_tun, ipv6->hdr.dst_addr);
+
+ set_tun = (struct nfp_fl_act_set_tun *)(act_data + act_pre_size);
+ memset(set_tun, 0, act_set_size);
+ tos = (ipv6->hdr.vtc_flow >> RTE_IPV6_HDR_TC_SHIFT) & 0xff;
+ tun_id = (geneve->vni[0] << 16) | (geneve->vni[1] << 8) | geneve->vni[2];
+ nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GENEVE, tun_id,
+ ipv6->hdr.hop_limits, tos);
+ set_tun->tun_proto = geneve->protocol;
+
+ /* Send the tunnel neighbor cmsg to fw */
+ return nfp_flower_add_tun_neigh_v6_encap(app_fw_flower, nfp_flow_meta,
+ tun, eth, ipv6);
+}
+
+static int
nfp_flow_action_raw_encap(struct nfp_app_fw_flower *app_fw_flower,
char *act_data,
char *actions,
@@ -2724,6 +2769,10 @@ struct nfp_pre_tun_entry {
ret = nfp_flow_action_geneve_encap_v4(app_fw_flower, act_data,
actions, raw_encap, nfp_flow_meta, tun);
break;
+ case GENEVE_V6_LEN:
+ ret = nfp_flow_action_geneve_encap_v6(app_fw_flower, act_data,
+ actions, raw_encap, nfp_flow_meta, tun);
+ break;
default:
PMD_DRV_LOG(ERR, "Not a valid raw encap action conf.");
ret = -EINVAL;
--
1.8.3.1
^ permalink raw reply [flat|nested] 88+ messages in thread
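The IPv6 encap path derives the tunnel ToS from the `vtc_flow` word, which packs version (4 bits), traffic class (8 bits) and flow label (20 bits), so the traffic class sits at bit 20 (DPDK's `RTE_IPV6_HDR_TC_SHIFT`). A sketch of that extraction, assuming the word is already in host byte order:

```c
#include <assert.h>
#include <stdint.h>

/* Bit 20 is where the 8-bit traffic class starts inside vtc_flow;
 * this mirrors DPDK's RTE_IPV6_HDR_TC_SHIFT. */
#define IPV6_HDR_TC_SHIFT 20

/* Extract the traffic class, as in:
 * tos = (ipv6->hdr.vtc_flow >> RTE_IPV6_HDR_TC_SHIFT) & 0xff; */
static uint8_t
ipv6_tc(uint32_t vtc_flow)
{
	return (vtc_flow >> IPV6_HDR_TC_SHIFT) & 0xff;
}
```

Note that on the wire `vtc_flow` is big endian; the sketch sidesteps byte order and only illustrates the bit layout.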
* [PATCH 13/25] net/nfp: add the offload support of IPv4 GENEVE item
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (11 preceding siblings ...)
2022-10-18 3:22 ` [PATCH 12/25] net/nfp: add the offload support of IPv6 " Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 14/25] net/nfp: add the offload support of IPv6 " Chaoyong He
` (13 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the corresponding logic to support the offload of
the IPv4 GENEVE item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 1 +
drivers/net/nfp/nfp_flow.c | 75 ++++++++++++++++++++++++++++++++++++++--
2 files changed, 74 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index 171b633..4c0d1ab 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -28,6 +28,7 @@ Usage doc = Y
[rte_flow items]
eth = Y
+geneve = Y
ipv4 = Y
ipv6 = Y
ipv6_frag_ext = Y
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index d93883a..da3ac69 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -781,6 +781,23 @@ struct nfp_pre_tun_entry {
return -EINVAL;
}
break;
+ case RTE_FLOW_ITEM_TYPE_GENEVE:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_GENEVE detected");
+ /* Clear IPv4 bits */
+ key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV4;
+ key_ls->tun_type = NFP_FL_TUN_GENEVE;
+ key_ls->key_layer |= NFP_FLOWER_LAYER_EXT_META;
+ key_ls->key_layer_two |= NFP_FLOWER_LAYER2_GENEVE;
+ key_ls->key_size += sizeof(struct nfp_flower_ext_meta);
+ if (outer_ip4_flag) {
+ key_ls->key_size += sizeof(struct nfp_flower_ipv4_udp_tun);
+ /*
+ * The outer l3 layer information is
+ * in `struct nfp_flower_ipv4_udp_tun`
+ */
+ key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
+ }
+ break;
default:
PMD_DRV_LOG(ERR, "Item type %d not supported.", item->type);
return -ENOTSUP;
@@ -978,12 +995,22 @@ struct nfp_pre_tun_entry {
static bool
nfp_flow_is_tunnel(struct rte_flow *nfp_flow)
{
+ uint32_t key_layer2;
+ struct nfp_flower_ext_meta *ext_meta;
struct nfp_flower_meta_tci *meta_tci;
meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_VXLAN)
return true;
+ if (!(meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META))
+ return false;
+
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
+ key_layer2 = rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2);
+ if (key_layer2 & NFP_FLOWER_LAYER2_GENEVE)
+ return true;
+
return false;
}
@@ -1404,6 +1431,39 @@ struct nfp_pre_tun_entry {
return ret;
}
+static int
+nfp_flow_merge_geneve(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ __rte_unused struct rte_flow *nfp_flow,
+ char **mbuf_off,
+ const struct rte_flow_item *item,
+ const struct nfp_flow_item_proc *proc,
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
+{
+ struct nfp_flower_ipv4_udp_tun *tun4;
+ const struct rte_flow_item_geneve *spec;
+ const struct rte_flow_item_geneve *mask;
+ const struct rte_flow_item_geneve *geneve;
+
+ spec = item->spec;
+ if (spec == NULL) {
+ PMD_DRV_LOG(DEBUG, "nfp flow merge geneve: no item->spec!");
+ goto geneve_end;
+ }
+
+ mask = item->mask ? item->mask : proc->mask_default;
+ geneve = is_mask ? mask : spec;
+
+ tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
+ tun4->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
+ (geneve->vni[1] << 8) | (geneve->vni[2]));
+
+geneve_end:
+ *mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
+
+ return 0;
+}
+
/* Graph of supported items and associated process function */
static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
[RTE_FLOW_ITEM_TYPE_END] = {
@@ -1492,7 +1552,8 @@ struct nfp_pre_tun_entry {
.merge = nfp_flow_merge_tcp,
},
[RTE_FLOW_ITEM_TYPE_UDP] = {
- .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VXLAN),
+ .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VXLAN,
+ RTE_FLOW_ITEM_TYPE_GENEVE),
.mask_support = &(const struct rte_flow_item_udp){
.hdr = {
.src_port = RTE_BE16(0xffff),
@@ -1525,6 +1586,15 @@ struct nfp_pre_tun_entry {
.mask_sz = sizeof(struct rte_flow_item_vxlan),
.merge = nfp_flow_merge_vxlan,
},
+ [RTE_FLOW_ITEM_TYPE_GENEVE] = {
+ .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH),
+ .mask_support = &(const struct rte_flow_item_geneve){
+ .vni = "\xff\xff\xff",
+ },
+ .mask_default = &rte_flow_item_geneve_mask,
+ .mask_sz = sizeof(struct rte_flow_item_geneve),
+ .merge = nfp_flow_merge_geneve,
+ },
};
static int
@@ -1581,7 +1651,8 @@ struct nfp_pre_tun_entry {
static bool
nfp_flow_is_tun_item(const struct rte_flow_item *item)
{
- if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
+ if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN ||
+ item->type == RTE_FLOW_ITEM_TYPE_GENEVE)
return true;
return false;
--
1.8.3.1
^ permalink raw reply [flat|nested] 88+ messages in thread
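With GENEVE added, `nfp_flow_is_tunnel()` has to look one level deeper than before: VXLAN is flagged directly in the base key layer, while GENEVE lives in the extended metadata's `key_layer2`, which only exists when the EXT_META bit is set. A simplified model of that two-level check; the bit positions here are hypothetical stand-ins for the NFP_FLOWER_LAYER* constants:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for NFP_FLOWER_LAYER_VXLAN,
 * NFP_FLOWER_LAYER_EXT_META and NFP_FLOWER_LAYER2_GENEVE. */
#define LAYER_VXLAN    (1u << 7)
#define LAYER_EXT_META (1u << 8)
#define LAYER2_GENEVE  (1u << 5)

/* Same shape as nfp_flow_is_tunnel(): check the base layer first,
 * and consult key_layer2 only when extended metadata is present. */
static int
flow_is_tunnel(uint32_t key_layer, uint32_t key_layer2)
{
	if (key_layer & LAYER_VXLAN)
		return 1;
	if (!(key_layer & LAYER_EXT_META))
		return 0;
	return (key_layer2 & LAYER2_GENEVE) != 0;
}
```

In the driver, `key_layer2` is read from `struct nfp_flower_ext_meta` immediately after the meta/TCI header and byte-swapped with `rte_be_to_cpu_32()` before the bit test.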
* [PATCH 14/25] net/nfp: add the offload support of IPv6 GENEVE item
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (12 preceding siblings ...)
2022-10-18 3:22 ` [PATCH 13/25] net/nfp: add the offload support of IPv4 GENEVE item Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 15/25] net/nfp: add the offload support of IPv4 GENEVE decap action Chaoyong He
` (12 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the corresponding logic to support the offload of
the IPv6 GENEVE item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfp_flow.c | 43 +++++++++++++++++++++++++++++++++++++------
1 file changed, 37 insertions(+), 6 deletions(-)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index da3ac69..36dbf27 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -783,8 +783,9 @@ struct nfp_pre_tun_entry {
break;
case RTE_FLOW_ITEM_TYPE_GENEVE:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_GENEVE detected");
- /* Clear IPv4 bits */
+ /* Clear IPv4 and IPv6 bits */
key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV4;
+ key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV6;
key_ls->tun_type = NFP_FL_TUN_GENEVE;
key_ls->key_layer |= NFP_FLOWER_LAYER_EXT_META;
key_ls->key_layer_two |= NFP_FLOWER_LAYER2_GENEVE;
@@ -796,6 +797,17 @@ struct nfp_pre_tun_entry {
* in `struct nfp_flower_ipv4_udp_tun`
*/
key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
+ } else if (outer_ip6_flag) {
+ key_ls->key_layer_two |= NFP_FLOWER_LAYER2_TUN_IPV6;
+ key_ls->key_size += sizeof(struct nfp_flower_ipv6_udp_tun);
+ /*
+ * The outer l3 layer information is
+ * in `struct nfp_flower_ipv6_udp_tun`
+ */
+ key_ls->key_size -= sizeof(struct nfp_flower_ipv6);
+ } else {
+ PMD_DRV_LOG(ERR, "No outer IP layer for GENEVE tunnel.");
+ return -EINVAL;
}
break;
default:
@@ -1433,7 +1445,7 @@ struct nfp_pre_tun_entry {
static int
nfp_flow_merge_geneve(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
- __rte_unused struct rte_flow *nfp_flow,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -1441,9 +1453,16 @@ struct nfp_pre_tun_entry {
__rte_unused bool is_outer_layer)
{
struct nfp_flower_ipv4_udp_tun *tun4;
+ struct nfp_flower_ipv6_udp_tun *tun6;
+ struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item_geneve *spec;
const struct rte_flow_item_geneve *mask;
const struct rte_flow_item_geneve *geneve;
+ struct nfp_flower_ext_meta *ext_meta = NULL;
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
spec = item->spec;
if (spec == NULL) {
@@ -1454,12 +1473,24 @@ struct nfp_pre_tun_entry {
mask = item->mask ? item->mask : proc->mask_default;
geneve = is_mask ? mask : spec;
- tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
- tun4->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
- (geneve->vni[1] << 8) | (geneve->vni[2]));
+ if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_TUN_IPV6)) {
+ tun6 = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
+ tun6->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
+ (geneve->vni[1] << 8) | (geneve->vni[2]));
+ } else {
+ tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
+ tun4->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
+ (geneve->vni[1] << 8) | (geneve->vni[2]));
+ }
geneve_end:
- *mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
+ if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_TUN_IPV6)) {
+ *mbuf_off += sizeof(struct nfp_flower_ipv6_udp_tun);
+ } else {
+ *mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
+ }
return 0;
}
--
1.8.3.1
^ permalink raw reply [flat|nested] 88+ messages in thread
* [PATCH 15/25] net/nfp: add the offload support of IPv4 GENEVE decap action
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (13 preceding siblings ...)
2022-10-18 3:22 ` [PATCH 14/25] net/nfp: add the offload support of IPv6 " Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 16/25] net/nfp: add the offload support of IPv6 " Chaoyong He
` (11 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add offload support for the decap action of the IPv4 GENEVE tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 1 +
drivers/net/nfp/nfp_flow.c | 16 ++++++++++++++--
2 files changed, 15 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index 4c0d1ab..7453109 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -47,6 +47,7 @@ of_pop_vlan = Y
of_push_vlan = Y
of_set_vlan_pcp = Y
of_set_vlan_vid = Y
+raw_decap = Y
raw_encap = Y
port_id = Y
set_ipv4_dscp = Y
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 36dbf27..a8287a1 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -973,6 +973,9 @@ struct nfp_pre_tun_entry {
case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_VXLAN_DECAP detected");
break;
+ case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_RAW_DECAP detected");
+ break;
default:
PMD_DRV_LOG(ERR, "Action type %d not supported.", action->type);
return -ENOTSUP;
@@ -1444,7 +1447,7 @@ struct nfp_pre_tun_entry {
}
static int
-nfp_flow_merge_geneve(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+nfp_flow_merge_geneve(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
@@ -1452,6 +1455,7 @@ struct nfp_pre_tun_entry {
bool is_mask,
__rte_unused bool is_outer_layer)
{
+ int ret = 0;
struct nfp_flower_ipv4_udp_tun *tun4;
struct nfp_flower_ipv6_udp_tun *tun6;
struct nfp_flower_meta_tci *meta_tci;
@@ -1482,6 +1486,8 @@ struct nfp_pre_tun_entry {
tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
tun4->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
(geneve->vni[1] << 8) | (geneve->vni[2]));
+ if (!is_mask)
+ ret = nfp_tun_add_ipv4_off(app_fw_flower, tun4->ipv4.dst);
}
geneve_end:
@@ -1492,7 +1498,7 @@ struct nfp_pre_tun_entry {
*mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
}
- return 0;
+ return ret;
}
/* Graph of supported items and associated process function */
@@ -3080,6 +3086,7 @@ struct nfp_pre_tun_entry {
nfp_flow->type = NFP_FLOW_ENCAP;
break;
case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
+ case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
PMD_DRV_LOG(DEBUG, "process RTE_FLOW_ACTION_TYPE_TUNNEL_DECAP");
ret = nfp_flow_action_tunnel_decap(representor, action,
nfp_flow_meta, nfp_flow);
@@ -3550,6 +3557,11 @@ struct nfp_pre_tun_entry {
*pmd_actions = nfp_action;
*num_of_actions = 1;
break;
+ case RTE_FLOW_ITEM_TYPE_GENEVE:
+ nfp_action->type = RTE_FLOW_ACTION_TYPE_RAW_DECAP;
+ *pmd_actions = nfp_action;
+ *num_of_actions = 1;
+ break;
default:
*pmd_actions = NULL;
*num_of_actions = 0;
--
1.8.3.1
* [PATCH 16/25] net/nfp: add the offload support of IPv6 GENEVE decap action
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (14 preceding siblings ...)
2022-10-18 3:22 ` [PATCH 15/25] net/nfp: add the offload support of IPv4 GENEVE decap action Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 17/25] net/nfp: add the offload support of IPv4 NVGRE encap action Chaoyong He
` (10 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add offload support for the decap action of the IPv6 GENEVE tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/rel_notes/release_22_11.rst | 2 +-
drivers/net/nfp/nfp_flow.c | 2 ++
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 92a0d64..4adad3c 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -185,7 +185,7 @@ New Features
* Set the TTL
* Set the DSCP of IPv4 and IPv6
* Encap and decap of VXLAN tunnel
- * Encap of GENEVE tunnel
+ * Encap and decap of GENEVE tunnel
* **Updated NXP dpaa2 driver.**
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index a8287a1..f42cf77 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1482,6 +1482,8 @@ struct nfp_pre_tun_entry {
tun6 = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
tun6->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
(geneve->vni[1] << 8) | (geneve->vni[2]));
+ if (!is_mask)
+ ret = nfp_tun_add_ipv6_off(app_fw_flower, tun6->ipv6.ipv6_dst);
} else {
tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
tun4->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
--
1.8.3.1
* [PATCH 17/25] net/nfp: add the offload support of IPv4 NVGRE encap action
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (15 preceding siblings ...)
2022-10-18 3:22 ` [PATCH 16/25] net/nfp: add the offload support of IPv6 " Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 18/25] net/nfp: add the offload support of IPv6 " Chaoyong He
` (9 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add offload support for the encap action of the IPv4 NVGRE tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 1 +
drivers/net/nfp/nfp_flow.c | 43 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 44 insertions(+)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index 7453109..5a3d0a8 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -43,6 +43,7 @@ vxlan = Y
count = Y
dec_ttl = Y
drop = Y
+nvgre_encap = Y
of_pop_vlan = Y
of_push_vlan = Y
of_set_vlan_pcp = Y
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index f42cf77..823b02a 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -47,6 +47,10 @@ struct vxlan_data {
sizeof(struct rte_ipv6_hdr) + \
sizeof(struct rte_udp_hdr) + \
sizeof(struct rte_flow_item_geneve))
+#define NVGRE_V4_LEN (sizeof(struct rte_ether_hdr) + \
+ sizeof(struct rte_ipv4_hdr) + \
+ sizeof(struct rte_flow_item_gre) + \
+ sizeof(rte_be32_t)) /* gre key */
/* Process structure associated with a flow item */
struct nfp_flow_item_proc {
@@ -2846,6 +2850,41 @@ struct nfp_pre_tun_entry {
}
static int
+nfp_flow_action_nvgre_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct rte_flow_action_raw_encap *raw_encap,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ const struct rte_ether_hdr *eth;
+ const struct rte_flow_item_ipv4 *ipv4;
+ const struct rte_flow_item_gre *gre;
+ struct nfp_fl_act_pre_tun *pre_tun;
+ struct nfp_fl_act_set_tun *set_tun;
+ size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
+
+ eth = (const struct rte_ether_hdr *)raw_encap->data;
+ ipv4 = (const struct rte_flow_item_ipv4 *)(eth + 1);
+ gre = (const struct rte_flow_item_gre *)(ipv4 + 1);
+
+ pre_tun = (struct nfp_fl_act_pre_tun *)actions;
+ memset(pre_tun, 0, act_pre_size);
+ nfp_flow_pre_tun_v4_process(pre_tun, ipv4->hdr.dst_addr);
+
+ set_tun = (struct nfp_fl_act_set_tun *)(act_data + act_pre_size);
+ memset(set_tun, 0, act_set_size);
+ nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GRE, 0,
+ ipv4->hdr.time_to_live, ipv4->hdr.type_of_service);
+ set_tun->tun_proto = gre->protocol;
+
+ /* Send the tunnel neighbor cmsg to fw */
+ return nfp_flower_add_tun_neigh_v4_encap(app_fw_flower, nfp_flow_meta,
+ tun, eth, ipv4);
+}
+
+static int
nfp_flow_action_raw_encap(struct nfp_app_fw_flower *app_fw_flower,
char *act_data,
char *actions,
@@ -2883,6 +2922,10 @@ struct nfp_pre_tun_entry {
ret = nfp_flow_action_geneve_encap_v6(app_fw_flower, act_data,
actions, raw_encap, nfp_flow_meta, tun);
break;
+ case NVGRE_V4_LEN:
+ ret = nfp_flow_action_nvgre_encap_v4(app_fw_flower, act_data,
+ actions, raw_encap, nfp_flow_meta, tun);
+ break;
default:
PMD_DRV_LOG(ERR, "Not an valid raw encap action conf.");
ret = -EINVAL;
--
1.8.3.1
* [PATCH 18/25] net/nfp: add the offload support of IPv6 NVGRE encap action
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (16 preceding siblings ...)
2022-10-18 3:22 ` [PATCH 17/25] net/nfp: add the offload support of IPv4 NVGRE encap action Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 19/25] net/nfp: prepare for the decap action of IPv4 GRE tunnel Chaoyong He
` (8 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add offload support for the encap action of the IPv6 NVGRE tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/rel_notes/release_22_11.rst | 1 +
drivers/net/nfp/nfp_flow.c | 45 ++++++++++++++++++++++++++++++++++
2 files changed, 46 insertions(+)
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index 4adad3c..e2f7295 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -186,6 +186,7 @@ New Features
* Set the DSCP of IPv4 and IPv6
* Encap and decap of VXLAN tunnel
* Encap and decap of GENEVE tunnel
+ * Encap of NVGRE tunnel
* **Updated NXP dpaa2 driver.**
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 823b02a..adba6c3 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -51,6 +51,10 @@ struct vxlan_data {
sizeof(struct rte_ipv4_hdr) + \
sizeof(struct rte_flow_item_gre) + \
sizeof(rte_be32_t)) /* gre key */
+#define NVGRE_V6_LEN (sizeof(struct rte_ether_hdr) + \
+ sizeof(struct rte_ipv6_hdr) + \
+ sizeof(struct rte_flow_item_gre) + \
+ sizeof(rte_be32_t)) /* gre key */
/* Process structure associated with a flow item */
struct nfp_flow_item_proc {
@@ -2885,6 +2889,43 @@ struct nfp_pre_tun_entry {
}
static int
+nfp_flow_action_nvgre_encap_v6(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct rte_flow_action_raw_encap *raw_encap,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ uint8_t tos;
+ const struct rte_ether_hdr *eth;
+ const struct rte_flow_item_ipv6 *ipv6;
+ const struct rte_flow_item_gre *gre;
+ struct nfp_fl_act_pre_tun *pre_tun;
+ struct nfp_fl_act_set_tun *set_tun;
+ size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
+
+ eth = (const struct rte_ether_hdr *)raw_encap->data;
+ ipv6 = (const struct rte_flow_item_ipv6 *)(eth + 1);
+ gre = (const struct rte_flow_item_gre *)(ipv6 + 1);
+
+ pre_tun = (struct nfp_fl_act_pre_tun *)actions;
+ memset(pre_tun, 0, act_pre_size);
+ nfp_flow_pre_tun_v6_process(pre_tun, ipv6->hdr.dst_addr);
+
+ set_tun = (struct nfp_fl_act_set_tun *)(act_data + act_pre_size);
+ memset(set_tun, 0, act_set_size);
+ tos = (ipv6->hdr.vtc_flow >> RTE_IPV6_HDR_TC_SHIFT) & 0xff;
+ nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GRE, 0,
+ ipv6->hdr.hop_limits, tos);
+ set_tun->tun_proto = gre->protocol;
+
+ /* Send the tunnel neighbor cmsg to fw */
+ return nfp_flower_add_tun_neigh_v6_encap(app_fw_flower, nfp_flow_meta,
+ tun, eth, ipv6);
+}
+
+static int
nfp_flow_action_raw_encap(struct nfp_app_fw_flower *app_fw_flower,
char *act_data,
char *actions,
@@ -2926,6 +2967,10 @@ struct nfp_pre_tun_entry {
ret = nfp_flow_action_nvgre_encap_v4(app_fw_flower, act_data,
actions, raw_encap, nfp_flow_meta, tun);
break;
+ case NVGRE_V6_LEN:
+ ret = nfp_flow_action_nvgre_encap_v6(app_fw_flower, act_data,
+ actions, raw_encap, nfp_flow_meta, tun);
+ break;
default:
PMD_DRV_LOG(ERR, "Not an valid raw encap action conf.");
ret = -EINVAL;
--
1.8.3.1
* [PATCH 19/25] net/nfp: prepare for the decap action of IPv4 GRE tunnel
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (17 preceding siblings ...)
2022-10-18 3:22 ` [PATCH 18/25] net/nfp: add the offload support of IPv6 " Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 20/25] net/nfp: prepare for the decap action of IPv6 " Chaoyong He
` (7 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the related data structures and logic in preparation for
the decap action of the IPv4 GRE tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_cmsg.h | 29 +++++++++++++++++++++++
drivers/net/nfp/nfp_flow.c | 40 +++++++++++++++++++++++++-------
drivers/net/nfp/nfp_flow.h | 3 +++
3 files changed, 63 insertions(+), 9 deletions(-)
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index 61f2f83..8bca7c2 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -575,6 +575,35 @@ struct nfp_flower_ipv6_udp_tun {
rte_be32_t tun_id;
};
+/*
+ * Flow Frame GRE TUNNEL --> Tunnel details (6W/24B)
+ * -----------------------------------------------------------------
+ * 3 2 1
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv4_addr_src |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv4_addr_dst |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | tun_flags | tos | ttl |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved | Ethertype |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Key |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ */
+struct nfp_flower_ipv4_gre_tun {
+ struct nfp_flower_tun_ipv4 ipv4;
+ rte_be16_t tun_flags;
+ struct nfp_flower_tun_ip_ext ip_ext;
+ rte_be16_t reserved1;
+ rte_be16_t ethertype;
+ rte_be32_t tun_key;
+ rte_be32_t reserved2;
+};
+
struct nfp_fl_act_head {
uint8_t jump_id;
uint8_t len_lw;
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index adba6c3..0c0e321 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -575,6 +575,7 @@ struct nfp_pre_tun_entry {
uint32_t key_layer2 = 0;
struct nfp_flower_ipv4_udp_tun *udp4;
struct nfp_flower_ipv6_udp_tun *udp6;
+ struct nfp_flower_ipv4_gre_tun *gre4;
struct nfp_flower_meta_tci *meta_tci;
struct nfp_flower_ext_meta *ext_meta = NULL;
@@ -590,9 +591,15 @@ struct nfp_pre_tun_entry {
sizeof(struct nfp_flower_ipv6_udp_tun));
ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, udp6->ipv6.ipv6_dst);
} else {
- udp4 = (struct nfp_flower_ipv4_udp_tun *)(nfp_flow->payload.mask_data -
- sizeof(struct nfp_flower_ipv4_udp_tun));
- ret = nfp_tun_del_ipv4_off(repr->app_fw_flower, udp4->ipv4.dst);
+ if (key_layer2 & NFP_FLOWER_LAYER2_GRE) {
+ gre4 = (struct nfp_flower_ipv4_gre_tun *)(nfp_flow->payload.mask_data -
+ sizeof(struct nfp_flower_ipv4_gre_tun));
+ ret = nfp_tun_del_ipv4_off(repr->app_fw_flower, gre4->ipv4.dst);
+ } else {
+ udp4 = (struct nfp_flower_ipv4_udp_tun *)(nfp_flow->payload.mask_data -
+ sizeof(struct nfp_flower_ipv4_udp_tun));
+ ret = nfp_tun_del_ipv4_off(repr->app_fw_flower, udp4->ipv4.dst);
+ }
}
return ret;
@@ -1031,7 +1038,7 @@ struct nfp_pre_tun_entry {
ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
key_layer2 = rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2);
- if (key_layer2 & NFP_FLOWER_LAYER2_GENEVE)
+ if (key_layer2 & (NFP_FLOWER_LAYER2_GENEVE | NFP_FLOWER_LAYER2_GRE))
return true;
return false;
@@ -1120,11 +1127,15 @@ struct nfp_pre_tun_entry {
struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item_ipv4 *spec;
const struct rte_flow_item_ipv4 *mask;
+ struct nfp_flower_ext_meta *ext_meta = NULL;
struct nfp_flower_ipv4_udp_tun *ipv4_udp_tun;
+ struct nfp_flower_ipv4_gre_tun *ipv4_gre_tun;
spec = item->spec;
mask = item->mask ? item->mask : proc->mask_default;
meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
if (is_outer_layer && nfp_flow_is_tunnel(nfp_flow)) {
if (spec == NULL) {
@@ -1133,12 +1144,23 @@ struct nfp_pre_tun_entry {
}
hdr = is_mask ? &mask->hdr : &spec->hdr;
- ipv4_udp_tun = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
- ipv4_udp_tun->ip_ext.tos = hdr->type_of_service;
- ipv4_udp_tun->ip_ext.ttl = hdr->time_to_live;
- ipv4_udp_tun->ipv4.src = hdr->src_addr;
- ipv4_udp_tun->ipv4.dst = hdr->dst_addr;
+ if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_GRE)) {
+ ipv4_gre_tun = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
+
+ ipv4_gre_tun->ip_ext.tos = hdr->type_of_service;
+ ipv4_gre_tun->ip_ext.ttl = hdr->time_to_live;
+ ipv4_gre_tun->ipv4.src = hdr->src_addr;
+ ipv4_gre_tun->ipv4.dst = hdr->dst_addr;
+ } else {
+ ipv4_udp_tun = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
+
+ ipv4_udp_tun->ip_ext.tos = hdr->type_of_service;
+ ipv4_udp_tun->ip_ext.ttl = hdr->time_to_live;
+ ipv4_udp_tun->ipv4.src = hdr->src_addr;
+ ipv4_udp_tun->ipv4.dst = hdr->dst_addr;
+ }
} else {
if (spec == NULL) {
PMD_DRV_LOG(DEBUG, "nfp flow merge ipv4: no item->spec!");
diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h
index 1b4a51f..e879283 100644
--- a/drivers/net/nfp/nfp_flow.h
+++ b/drivers/net/nfp/nfp_flow.h
@@ -48,6 +48,9 @@
#define NFP_FL_SC_ACT_POPV 0x6A000000
#define NFP_FL_SC_ACT_NULL 0x00000000
+/* GRE Tunnel flags */
+#define NFP_FL_GRE_FLAG_KEY (1 << 2)
+
/* Action opcodes */
#define NFP_FL_ACTION_OPCODE_OUTPUT 0
#define NFP_FL_ACTION_OPCODE_PUSH_VLAN 1
--
1.8.3.1
* [PATCH 20/25] net/nfp: prepare for the decap action of IPv6 GRE tunnel
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (18 preceding siblings ...)
2022-10-18 3:22 ` [PATCH 19/25] net/nfp: prepare for the decap action of IPv4 GRE tunnel Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 21/25] net/nfp: add the offload support of IPv4 NVGRE item Chaoyong He
` (6 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the related data structures and logic in preparation for
the decap action of the IPv6 GRE tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_cmsg.h | 41 ++++++++++++++++++++++++++
drivers/net/nfp/nfp_flow.c | 49 ++++++++++++++++++++++++--------
2 files changed, 78 insertions(+), 12 deletions(-)
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index 8bca7c2..a48da67 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -604,6 +604,47 @@ struct nfp_flower_ipv4_gre_tun {
rte_be32_t reserved2;
};
+/*
+ * Flow Frame GRE TUNNEL V6 --> Tunnel details (12W/48B)
+ * -----------------------------------------------------------------
+ * 3 2 1
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 31 - 0 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 63 - 32 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 95 - 64 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 127 - 96 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 31 - 0 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 63 - 32 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 95 - 64 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 127 - 96 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | tun_flags | tos | ttl |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved | Ethertype |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Key |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ */
+struct nfp_flower_ipv6_gre_tun {
+ struct nfp_flower_tun_ipv6 ipv6;
+ rte_be16_t tun_flags;
+ struct nfp_flower_tun_ip_ext ip_ext;
+ rte_be16_t reserved1;
+ rte_be16_t ethertype;
+ rte_be32_t tun_key;
+ rte_be32_t reserved2;
+};
+
struct nfp_fl_act_head {
uint8_t jump_id;
uint8_t len_lw;
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 0c0e321..3f06657 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -576,6 +576,7 @@ struct nfp_pre_tun_entry {
struct nfp_flower_ipv4_udp_tun *udp4;
struct nfp_flower_ipv6_udp_tun *udp6;
struct nfp_flower_ipv4_gre_tun *gre4;
+ struct nfp_flower_ipv6_gre_tun *gre6;
struct nfp_flower_meta_tci *meta_tci;
struct nfp_flower_ext_meta *ext_meta = NULL;
@@ -587,9 +588,15 @@ struct nfp_pre_tun_entry {
key_layer2 = rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2);
if (key_layer2 & NFP_FLOWER_LAYER2_TUN_IPV6) {
- udp6 = (struct nfp_flower_ipv6_udp_tun *)(nfp_flow->payload.mask_data -
- sizeof(struct nfp_flower_ipv6_udp_tun));
- ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, udp6->ipv6.ipv6_dst);
+ if (key_layer2 & NFP_FLOWER_LAYER2_GRE) {
+ gre6 = (struct nfp_flower_ipv6_gre_tun *)(nfp_flow->payload.mask_data -
+ sizeof(struct nfp_flower_ipv6_gre_tun));
+ ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, gre6->ipv6.ipv6_dst);
+ } else {
+ udp6 = (struct nfp_flower_ipv6_udp_tun *)(nfp_flow->payload.mask_data -
+ sizeof(struct nfp_flower_ipv6_udp_tun));
+ ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, udp6->ipv6.ipv6_dst);
+ }
} else {
if (key_layer2 & NFP_FLOWER_LAYER2_GRE) {
gre4 = (struct nfp_flower_ipv4_gre_tun *)(nfp_flow->payload.mask_data -
@@ -1204,11 +1211,15 @@ struct nfp_pre_tun_entry {
struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item_ipv6 *spec;
const struct rte_flow_item_ipv6 *mask;
+ struct nfp_flower_ext_meta *ext_meta = NULL;
struct nfp_flower_ipv6_udp_tun *ipv6_udp_tun;
+ struct nfp_flower_ipv6_gre_tun *ipv6_gre_tun;
spec = item->spec;
mask = item->mask ? item->mask : proc->mask_default;
meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
if (is_outer_layer && nfp_flow_is_tunnel(nfp_flow)) {
if (spec == NULL) {
@@ -1217,15 +1228,29 @@ struct nfp_pre_tun_entry {
}
hdr = is_mask ? &mask->hdr : &spec->hdr;
- ipv6_udp_tun = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
-
- ipv6_udp_tun->ip_ext.tos = (hdr->vtc_flow &
- RTE_IPV6_HDR_TC_MASK) >> RTE_IPV6_HDR_TC_SHIFT;
- ipv6_udp_tun->ip_ext.ttl = hdr->hop_limits;
- memcpy(ipv6_udp_tun->ipv6.ipv6_src, hdr->src_addr,
- sizeof(ipv6_udp_tun->ipv6.ipv6_src));
- memcpy(ipv6_udp_tun->ipv6.ipv6_dst, hdr->dst_addr,
- sizeof(ipv6_udp_tun->ipv6.ipv6_dst));
+
+ if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_GRE)) {
+ ipv6_gre_tun = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off;
+
+ ipv6_gre_tun->ip_ext.tos = (hdr->vtc_flow &
+ RTE_IPV6_HDR_TC_MASK) >> RTE_IPV6_HDR_TC_SHIFT;
+ ipv6_gre_tun->ip_ext.ttl = hdr->hop_limits;
+ memcpy(ipv6_gre_tun->ipv6.ipv6_src, hdr->src_addr,
+ sizeof(ipv6_gre_tun->ipv6.ipv6_src));
+ memcpy(ipv6_gre_tun->ipv6.ipv6_dst, hdr->dst_addr,
+ sizeof(ipv6_gre_tun->ipv6.ipv6_dst));
+ } else {
+ ipv6_udp_tun = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
+
+ ipv6_udp_tun->ip_ext.tos = (hdr->vtc_flow &
+ RTE_IPV6_HDR_TC_MASK) >> RTE_IPV6_HDR_TC_SHIFT;
+ ipv6_udp_tun->ip_ext.ttl = hdr->hop_limits;
+ memcpy(ipv6_udp_tun->ipv6.ipv6_src, hdr->src_addr,
+ sizeof(ipv6_udp_tun->ipv6.ipv6_src));
+ memcpy(ipv6_udp_tun->ipv6.ipv6_dst, hdr->dst_addr,
+ sizeof(ipv6_udp_tun->ipv6.ipv6_dst));
+ }
} else {
if (spec == NULL) {
PMD_DRV_LOG(DEBUG, "nfp flow merge ipv6: no item->spec!");
--
1.8.3.1
* [PATCH 21/25] net/nfp: add the offload support of IPv4 NVGRE item
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (19 preceding siblings ...)
2022-10-18 3:22 ` [PATCH 20/25] net/nfp: prepare for the decap action of IPv6 " Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 22/25] net/nfp: add the offload support of IPv6 " Chaoyong He
` (5 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the corresponding logic to support offloading of the
IPv4 NVGRE item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 2 +
drivers/net/nfp/nfp_flow.c | 99 +++++++++++++++++++++++++++++++++++++++-
2 files changed, 99 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index 5a3d0a8..4c6d2d5 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -29,6 +29,8 @@ Usage doc = Y
[rte_flow items]
eth = Y
geneve = Y
+gre = Y
+gre_key = Y
ipv4 = Y
ipv6 = Y
ipv6_frag_ext = Y
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 3f06657..94d7a27 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -832,6 +832,26 @@ struct nfp_pre_tun_entry {
return -EINVAL;
}
break;
+ case RTE_FLOW_ITEM_TYPE_GRE:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_GRE detected");
+ /* Clear IPv4 bits */
+ key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV4;
+ key_ls->tun_type = NFP_FL_TUN_GRE;
+ key_ls->key_layer |= NFP_FLOWER_LAYER_EXT_META;
+ key_ls->key_layer_two |= NFP_FLOWER_LAYER2_GRE;
+ key_ls->key_size += sizeof(struct nfp_flower_ext_meta);
+ if (outer_ip4_flag) {
+ key_ls->key_size += sizeof(struct nfp_flower_ipv4_gre_tun);
+ /*
+ * The outer l3 layer information is
+ * in `struct nfp_flower_ipv4_gre_tun`
+ */
+ key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
+ }
+ break;
+ case RTE_FLOW_ITEM_TYPE_GRE_KEY:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_GRE_KEY detected");
+ break;
default:
PMD_DRV_LOG(ERR, "Item type %d not supported.", item->type);
return -ENOTSUP;
@@ -1558,6 +1578,62 @@ struct nfp_pre_tun_entry {
return ret;
}
+static int
+nfp_flow_merge_gre(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ __rte_unused struct rte_flow *nfp_flow,
+ char **mbuf_off,
+ __rte_unused const struct rte_flow_item *item,
+ __rte_unused const struct nfp_flow_item_proc *proc,
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
+{
+ struct nfp_flower_ipv4_gre_tun *tun4;
+
+ /* NVGRE is the only supported GRE tunnel type */
+ tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
+ if (is_mask)
+ tun4->ethertype = rte_cpu_to_be_16(~0);
+ else
+ tun4->ethertype = rte_cpu_to_be_16(0x6558);
+
+ return 0;
+}
+
+static int
+nfp_flow_merge_gre_key(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ __rte_unused struct rte_flow *nfp_flow,
+ char **mbuf_off,
+ const struct rte_flow_item *item,
+ __rte_unused const struct nfp_flow_item_proc *proc,
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
+{
+ rte_be32_t tun_key;
+ const rte_be32_t *spec;
+ const rte_be32_t *mask;
+ struct nfp_flower_ipv4_gre_tun *tun4;
+
+ spec = item->spec;
+ if (spec == NULL) {
+ PMD_DRV_LOG(DEBUG, "nfp flow merge gre key: no item->spec!");
+ goto gre_key_end;
+ }
+
+ mask = item->mask ? item->mask : proc->mask_default;
+ tun_key = is_mask ? *mask : *spec;
+
+ tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
+ tun4->tun_key = tun_key;
+ tun4->tun_flags = rte_cpu_to_be_16(NFP_FL_GRE_FLAG_KEY);
+
+gre_key_end:
+ *mbuf_off += sizeof(struct nfp_flower_ipv4_gre_tun);
+
+ return 0;
+}
+
+const rte_be32_t nfp_flow_item_gre_key = 0xffffffff;
+
/* Graph of supported items and associated process function */
static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
[RTE_FLOW_ITEM_TYPE_END] = {
@@ -1598,7 +1674,8 @@ struct nfp_pre_tun_entry {
[RTE_FLOW_ITEM_TYPE_IPV4] = {
.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_TCP,
RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_SCTP),
+ RTE_FLOW_ITEM_TYPE_SCTP,
+ RTE_FLOW_ITEM_TYPE_GRE),
.mask_support = &(const struct rte_flow_item_ipv4){
.hdr = {
.type_of_service = 0xff,
@@ -1689,6 +1766,23 @@ struct nfp_pre_tun_entry {
.mask_sz = sizeof(struct rte_flow_item_geneve),
.merge = nfp_flow_merge_geneve,
},
+ [RTE_FLOW_ITEM_TYPE_GRE] = {
+ .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_GRE_KEY),
+ .mask_support = &(const struct rte_flow_item_gre){
+ .c_rsvd0_ver = RTE_BE16(0xa000),
+ .protocol = RTE_BE16(0xffff),
+ },
+ .mask_default = &rte_flow_item_gre_mask,
+ .mask_sz = sizeof(struct rte_flow_item_gre),
+ .merge = nfp_flow_merge_gre,
+ },
+ [RTE_FLOW_ITEM_TYPE_GRE_KEY] = {
+ .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH),
+ .mask_support = &nfp_flow_item_gre_key,
+ .mask_default = &nfp_flow_item_gre_key,
+ .mask_sz = sizeof(rte_be32_t),
+ .merge = nfp_flow_merge_gre_key,
+ },
};
static int
@@ -1746,7 +1840,8 @@ struct nfp_pre_tun_entry {
nfp_flow_is_tun_item(const struct rte_flow_item *item)
{
if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN ||
- item->type == RTE_FLOW_ITEM_TYPE_GENEVE)
+ item->type == RTE_FLOW_ITEM_TYPE_GENEVE ||
+ item->type == RTE_FLOW_ITEM_TYPE_GRE_KEY)
return true;
return false;
--
1.8.3.1
* [PATCH 22/25] net/nfp: add the offload support of IPv6 NVGRE item
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (20 preceding siblings ...)
2022-10-18 3:22 ` [PATCH 21/25] net/nfp: add the offload support of IPv4 NVGRE item Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 23/25] net/nfp: add the offload support of IPv4 NVGRE decap action Chaoyong He
` (4 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the corresponding logic to support the offload of
the IPv6 NVGRE item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfp_flow.c | 73 +++++++++++++++++++++++++++++++++++++---------
1 file changed, 59 insertions(+), 14 deletions(-)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 94d7a27..9464211 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -834,8 +834,9 @@ struct nfp_pre_tun_entry {
break;
case RTE_FLOW_ITEM_TYPE_GRE:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_GRE detected");
- /* Clear IPv4 bits */
+ /* Clear IPv4 and IPv6 bits */
key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV4;
+ key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV6;
key_ls->tun_type = NFP_FL_TUN_GRE;
key_ls->key_layer |= NFP_FLOWER_LAYER_EXT_META;
key_ls->key_layer_two |= NFP_FLOWER_LAYER2_GRE;
@@ -847,6 +848,17 @@ struct nfp_pre_tun_entry {
* in `struct nfp_flower_ipv4_gre_tun`
*/
key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
+ } else if (outer_ip6_flag) {
+ key_ls->key_layer_two |= NFP_FLOWER_LAYER2_TUN_IPV6;
+ key_ls->key_size += sizeof(struct nfp_flower_ipv6_gre_tun);
+ /*
+ * The outer l3 layer information is
+ * in `struct nfp_flower_ipv6_gre_tun`
+ */
+ key_ls->key_size -= sizeof(struct nfp_flower_ipv6);
+ } else {
+ PMD_DRV_LOG(ERR, "No outer IP layer for GRE tunnel.");
+ return -1;
}
break;
case RTE_FLOW_ITEM_TYPE_GRE_KEY:
@@ -1580,38 +1592,59 @@ struct nfp_pre_tun_entry {
static int
nfp_flow_merge_gre(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
- __rte_unused struct rte_flow *nfp_flow,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
__rte_unused const struct rte_flow_item *item,
__rte_unused const struct nfp_flow_item_proc *proc,
bool is_mask,
__rte_unused bool is_outer_layer)
{
+ struct nfp_flower_meta_tci *meta_tci;
+ struct nfp_flower_ext_meta *ext_meta;
struct nfp_flower_ipv4_gre_tun *tun4;
+ struct nfp_flower_ipv6_gre_tun *tun6;
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
/* NVGRE is the only supported GRE tunnel type */
- tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
- if (is_mask)
- tun4->ethertype = rte_cpu_to_be_16(~0);
- else
- tun4->ethertype = rte_cpu_to_be_16(0x6558);
+ if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_TUN_IPV6) {
+ tun6 = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off;
+ if (is_mask)
+ tun6->ethertype = rte_cpu_to_be_16(~0);
+ else
+ tun6->ethertype = rte_cpu_to_be_16(0x6558);
+ } else {
+ tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
+ if (is_mask)
+ tun4->ethertype = rte_cpu_to_be_16(~0);
+ else
+ tun4->ethertype = rte_cpu_to_be_16(0x6558);
+ }
return 0;
}
static int
nfp_flow_merge_gre_key(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
- __rte_unused struct rte_flow *nfp_flow,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
- __rte_unused const struct nfp_flow_item_proc *proc,
+ const struct nfp_flow_item_proc *proc,
bool is_mask,
__rte_unused bool is_outer_layer)
{
rte_be32_t tun_key;
const rte_be32_t *spec;
const rte_be32_t *mask;
+ struct nfp_flower_meta_tci *meta_tci;
+ struct nfp_flower_ext_meta *ext_meta;
struct nfp_flower_ipv4_gre_tun *tun4;
+ struct nfp_flower_ipv6_gre_tun *tun6;
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
spec = item->spec;
if (spec == NULL) {
@@ -1622,12 +1655,23 @@ struct nfp_pre_tun_entry {
mask = item->mask ? item->mask : proc->mask_default;
tun_key = is_mask ? *mask : *spec;
- tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
- tun4->tun_key = tun_key;
- tun4->tun_flags = rte_cpu_to_be_16(NFP_FL_GRE_FLAG_KEY);
+ if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_TUN_IPV6) {
+ tun6 = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off;
+ tun6->tun_key = tun_key;
+ tun6->tun_flags = rte_cpu_to_be_16(NFP_FL_GRE_FLAG_KEY);
+ } else {
+ tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
+ tun4->tun_key = tun_key;
+ tun4->tun_flags = rte_cpu_to_be_16(NFP_FL_GRE_FLAG_KEY);
+ }
gre_key_end:
- *mbuf_off += sizeof(struct nfp_flower_ipv4_gre_tun);
+ if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_TUN_IPV6)
+ *mbuf_off += sizeof(struct nfp_flower_ipv6_gre_tun);
+ else
+ *mbuf_off += sizeof(struct nfp_flower_ipv4_gre_tun);
return 0;
}
@@ -1693,7 +1737,8 @@ struct nfp_pre_tun_entry {
[RTE_FLOW_ITEM_TYPE_IPV6] = {
.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_TCP,
RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_SCTP),
+ RTE_FLOW_ITEM_TYPE_SCTP,
+ RTE_FLOW_ITEM_TYPE_GRE),
.mask_support = &(const struct rte_flow_item_ipv6){
.hdr = {
.vtc_flow = RTE_BE32(0x0ff00000),
--
1.8.3.1
* [PATCH 23/25] net/nfp: add the offload support of IPv4 NVGRE decap action
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (21 preceding siblings ...)
2022-10-18 3:22 ` [PATCH 22/25] net/nfp: add the offload support of IPv6 " Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 24/25] net/nfp: add the offload support of IPv6 " Chaoyong He
` (3 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add offload support for the decap action of the IPv4 NVGRE tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 1 +
drivers/net/nfp/nfp_flow.c | 8 ++++++--
2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index 4c6d2d5..9f7a680 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -45,6 +45,7 @@ vxlan = Y
count = Y
dec_ttl = Y
drop = Y
+nvgre_decap = Y
nvgre_encap = Y
of_pop_vlan = Y
of_push_vlan = Y
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 9464211..514e221 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1627,7 +1627,7 @@ struct nfp_pre_tun_entry {
}
static int
-nfp_flow_merge_gre_key(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+nfp_flow_merge_gre_key(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
@@ -1635,6 +1635,7 @@ struct nfp_pre_tun_entry {
bool is_mask,
__rte_unused bool is_outer_layer)
{
+ int ret = 0;
rte_be32_t tun_key;
const rte_be32_t *spec;
const rte_be32_t *mask;
@@ -1664,6 +1665,8 @@ struct nfp_pre_tun_entry {
tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
tun4->tun_key = tun_key;
tun4->tun_flags = rte_cpu_to_be_16(NFP_FL_GRE_FLAG_KEY);
+ if (!is_mask)
+ ret = nfp_tun_add_ipv4_off(app_fw_flower, tun4->ipv4.dst);
}
gre_key_end:
@@ -1673,7 +1676,7 @@ struct nfp_pre_tun_entry {
else
*mbuf_off += sizeof(struct nfp_flower_ipv4_gre_tun);
- return 0;
+ return ret;
}
const rte_be32_t nfp_flow_item_gre_key = 0xffffffff;
@@ -3835,6 +3838,7 @@ struct nfp_pre_tun_entry {
*num_of_actions = 1;
break;
case RTE_FLOW_ITEM_TYPE_GENEVE:
+ case RTE_FLOW_ITEM_TYPE_GRE:
nfp_action->type = RTE_FLOW_ACTION_TYPE_RAW_DECAP;
*pmd_actions = nfp_action;
*num_of_actions = 1;
--
1.8.3.1
* [PATCH 24/25] net/nfp: add the offload support of IPv6 NVGRE decap action
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (22 preceding siblings ...)
2022-10-18 3:22 ` [PATCH 23/25] net/nfp: add the offload support of IPv4 NVGRE decap action Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-18 3:22 ` [PATCH 25/25] net/nfp: add the support of new tunnel solution Chaoyong He
` (2 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add offload support for the decap action of the IPv6 NVGRE tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/rel_notes/release_22_11.rst | 2 +-
drivers/net/nfp/nfp_flow.c | 2 ++
2 files changed, 3 insertions(+), 1 deletion(-)
diff --git a/doc/guides/rel_notes/release_22_11.rst b/doc/guides/rel_notes/release_22_11.rst
index e2f7295..4f3edab 100644
--- a/doc/guides/rel_notes/release_22_11.rst
+++ b/doc/guides/rel_notes/release_22_11.rst
@@ -186,7 +186,7 @@ New Features
* Set the DSCP of IPv4 and IPv6
* Encap and decap of VXLAN tunnel
* Encap and decap of GENEVE tunnel
- * Encap of NVGRE tunnel
+ * Encap and decap of NVGRE tunnel
* **Updated NXP dpaa2 driver.**
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 514e221..1ccc6ef 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1661,6 +1661,8 @@ struct nfp_pre_tun_entry {
tun6 = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off;
tun6->tun_key = tun_key;
tun6->tun_flags = rte_cpu_to_be_16(NFP_FL_GRE_FLAG_KEY);
+ if (!is_mask)
+ ret = nfp_tun_add_ipv6_off(app_fw_flower, tun6->ipv6.ipv6_dst);
} else {
tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
tun4->tun_key = tun_key;
--
1.8.3.1
* [PATCH 25/25] net/nfp: add the support of new tunnel solution
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (23 preceding siblings ...)
2022-10-18 3:22 ` [PATCH 24/25] net/nfp: add the offload support of IPv6 " Chaoyong He
@ 2022-10-18 3:22 ` Chaoyong He
2022-10-21 13:37 ` [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Ferruh Yigit
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-18 3:22 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
The new version of the flower firmware application adds support
for a new tunnel solution.
It changes the structure of the tunnel neighbor, and uses a feature
flag to indicate which tunnel solution is in use.
Add the logic to read the extra features from the firmware and store
them in the app private structure.
Adjust the data structures and related logic so the PMD supports
both versions of the tunnel solution.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower.c | 14 ++++
drivers/net/nfp/flower/nfp_flower.h | 24 +++++++
drivers/net/nfp/flower/nfp_flower_cmsg.c | 4 ++
drivers/net/nfp/flower/nfp_flower_cmsg.h | 17 +++++
drivers/net/nfp/nfp_flow.c | 118 +++++++++++++++++++++++++------
5 files changed, 157 insertions(+), 20 deletions(-)
diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 168bf0c..d2a8f9b 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -1073,6 +1073,8 @@
nfp_init_app_fw_flower(struct nfp_pf_dev *pf_dev)
{
int ret;
+ int err;
+ uint64_t ext_features;
unsigned int numa_node;
struct nfp_net_hw *pf_hw;
struct nfp_net_hw *ctrl_hw;
@@ -1114,6 +1116,18 @@
goto vnic_cleanup;
}
+ /* Read the extra features */
+ ext_features = nfp_rtsym_read_le(pf_dev->sym_tbl, "_abi_flower_extra_features",
+ &err);
+ if (err != 0) {
+ PMD_INIT_LOG(ERR, "Couldn't read extra features from fw");
+ ret = -EIO;
+ goto pf_cpp_area_cleanup;
+ }
+
+ /* Store the extra features */
+ app_fw_flower->ext_features = ext_features;
+
/* Fill in the PF vNIC and populate app struct */
app_fw_flower->pf_hw = pf_hw;
pf_hw->ctrl_bar = pf_dev->ctrl_bar;
diff --git a/drivers/net/nfp/flower/nfp_flower.h b/drivers/net/nfp/flower/nfp_flower.h
index b90391c..dc0eb36 100644
--- a/drivers/net/nfp/flower/nfp_flower.h
+++ b/drivers/net/nfp/flower/nfp_flower.h
@@ -6,6 +6,21 @@
#ifndef _NFP_FLOWER_H_
#define _NFP_FLOWER_H_
+/* Extra features bitmap. */
+#define NFP_FL_FEATS_GENEVE RTE_BIT64(0)
+#define NFP_FL_NBI_MTU_SETTING RTE_BIT64(1)
+#define NFP_FL_FEATS_GENEVE_OPT RTE_BIT64(2)
+#define NFP_FL_FEATS_VLAN_PCP RTE_BIT64(3)
+#define NFP_FL_FEATS_VF_RLIM RTE_BIT64(4)
+#define NFP_FL_FEATS_FLOW_MOD RTE_BIT64(5)
+#define NFP_FL_FEATS_PRE_TUN_RULES RTE_BIT64(6)
+#define NFP_FL_FEATS_IPV6_TUN RTE_BIT64(7)
+#define NFP_FL_FEATS_VLAN_QINQ RTE_BIT64(8)
+#define NFP_FL_FEATS_QOS_PPS RTE_BIT64(9)
+#define NFP_FL_FEATS_QOS_METER RTE_BIT64(10)
+#define NFP_FL_FEATS_DECAP_V2 RTE_BIT64(11)
+#define NFP_FL_FEATS_HOST_ACK RTE_BIT64(31)
+
/*
* Flower fallback and ctrl path always adds and removes
* 8 bytes of prepended data. Tx descriptors must point
@@ -52,9 +67,18 @@ struct nfp_app_fw_flower {
/* PF representor */
struct nfp_flower_representor *pf_repr;
+ /* Flower extra features */
+ uint64_t ext_features;
+
struct nfp_flow_priv *flow_priv;
};
+static inline bool
+nfp_flower_support_decap_v2(const struct nfp_app_fw_flower *app_fw_flower)
+{
+ return app_fw_flower->ext_features & NFP_FL_FEATS_DECAP_V2;
+}
+
int nfp_init_app_fw_flower(struct nfp_pf_dev *pf_dev);
int nfp_secondary_init_app_fw_flower(struct nfp_cpp *cpp);
uint16_t nfp_flower_pf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c
index 76815cf..babdd8e 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.c
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c
@@ -263,6 +263,8 @@
}
msg_len = sizeof(struct nfp_flower_cmsg_tun_neigh_v4);
+ if (!nfp_flower_support_decap_v2(app_fw_flower))
+ msg_len -= sizeof(struct nfp_flower_tun_neigh_ext);
msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_TUN_NEIGH, msg_len);
memcpy(msg, payload, msg_len);
@@ -292,6 +294,8 @@
}
msg_len = sizeof(struct nfp_flower_cmsg_tun_neigh_v6);
+ if (!nfp_flower_support_decap_v2(app_fw_flower))
+ msg_len -= sizeof(struct nfp_flower_tun_neigh_ext);
msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V6, msg_len);
memcpy(msg, payload, msg_len);
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index a48da67..04601cb 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -136,6 +136,21 @@ struct nfp_flower_tun_neigh {
};
/*
+ * Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
+ * -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +---------------------------------------------------------------+
+ * 0 | VLAN_TPID | VLAN_ID |
+ * +---------------------------------------------------------------+
+ * 1 | HOST_CTX |
+ * +---------------------------------------------------------------+
+ */
+struct nfp_flower_tun_neigh_ext {
+ rte_be16_t vlan_tpid;
+ rte_be16_t vlan_tci;
+ rte_be32_t host_ctx;
+};
+
+/*
* NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V4
* Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
* -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
@@ -157,6 +172,7 @@ struct nfp_flower_cmsg_tun_neigh_v4 {
rte_be32_t dst_ipv4;
rte_be32_t src_ipv4;
struct nfp_flower_tun_neigh common;
+ struct nfp_flower_tun_neigh_ext ext;
};
/*
@@ -193,6 +209,7 @@ struct nfp_flower_cmsg_tun_neigh_v6 {
uint8_t dst_ipv6[16];
uint8_t src_ipv6[16];
struct nfp_flower_tun_neigh common;
+ struct nfp_flower_tun_neigh_ext ext;
};
#define NFP_TUN_PRE_TUN_RULE_DEL (1 << 0)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 1ccc6ef..ac60e90 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -2402,8 +2402,10 @@ struct nfp_pre_tun_entry {
static int
nfp_flower_add_tun_neigh_v4_decap(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
struct rte_flow *nfp_flow)
{
+ bool exists = false;
struct nfp_fl_tun *tmp;
struct nfp_fl_tun *tun;
struct nfp_flow_priv *priv;
@@ -2437,11 +2439,17 @@ struct nfp_pre_tun_entry {
LIST_FOREACH(tmp, &priv->nn_list, next) {
if (memcmp(&tmp->payload, &tun->payload, sizeof(struct nfp_fl_tun_entry)) == 0) {
tmp->ref_cnt++;
- return 0;
+ exists = true;
+ break;
}
}
- LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+ if (exists) {
+ if (!nfp_flower_support_decap_v2(app_fw_flower))
+ return 0;
+ } else {
+ LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+ }
memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v4));
payload.dst_ipv4 = ipv4->ipv4_src;
@@ -2450,6 +2458,17 @@ struct nfp_pre_tun_entry {
memcpy(payload.common.src_mac, eth->mac_dst, RTE_ETHER_ADDR_LEN);
payload.common.port_id = port->in_port;
+ if (nfp_flower_support_decap_v2(app_fw_flower)) {
+ if (meta_tci->tci != 0) {
+ payload.ext.vlan_tci = meta_tci->tci;
+ payload.ext.vlan_tpid = 0x88a8;
+ } else {
+ payload.ext.vlan_tci = 0xffff;
+ payload.ext.vlan_tpid = 0xffff;
+ }
+ payload.ext.host_ctx = nfp_flow_meta->host_ctx_id;
+ }
+
return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
}
@@ -2510,8 +2529,10 @@ struct nfp_pre_tun_entry {
static int
nfp_flower_add_tun_neigh_v6_decap(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
struct rte_flow *nfp_flow)
{
+ bool exists = false;
struct nfp_fl_tun *tmp;
struct nfp_fl_tun *tun;
struct nfp_flow_priv *priv;
@@ -2545,11 +2566,17 @@ struct nfp_pre_tun_entry {
LIST_FOREACH(tmp, &priv->nn_list, next) {
if (memcmp(&tmp->payload, &tun->payload, sizeof(struct nfp_fl_tun_entry)) == 0) {
tmp->ref_cnt++;
- return 0;
+ exists = true;
+ break;
}
}
- LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+ if (exists) {
+ if (!nfp_flower_support_decap_v2(app_fw_flower))
+ return 0;
+ } else {
+ LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+ }
memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v6));
memcpy(payload.dst_ipv6, ipv6->ipv6_src, sizeof(payload.dst_ipv6));
@@ -2558,6 +2585,17 @@ struct nfp_pre_tun_entry {
memcpy(payload.common.src_mac, eth->mac_dst, RTE_ETHER_ADDR_LEN);
payload.common.port_id = port->in_port;
+ if (nfp_flower_support_decap_v2(app_fw_flower)) {
+ if (meta_tci->tci != 0) {
+ payload.ext.vlan_tci = meta_tci->tci;
+ payload.ext.vlan_tpid = 0x88a8;
+ } else {
+ payload.ext.vlan_tci = 0xffff;
+ payload.ext.vlan_tpid = 0xffff;
+ }
+ payload.ext.host_ctx = nfp_flow_meta->host_ctx_id;
+ }
+
return nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &payload);
}
@@ -2575,12 +2613,14 @@ struct nfp_pre_tun_entry {
static int
nfp_flower_del_tun_neigh(struct nfp_app_fw_flower *app_fw_flower,
- struct rte_flow *nfp_flow)
+ struct rte_flow *nfp_flow,
+ bool decap_flag)
{
int ret;
bool flag = false;
struct nfp_fl_tun *tmp;
struct nfp_fl_tun *tun;
+ struct nfp_flower_in_port *port;
tun = &nfp_flow->tun;
LIST_FOREACH(tmp, &app_fw_flower->flow_priv->nn_list, next) {
@@ -2608,6 +2648,40 @@ struct nfp_pre_tun_entry {
}
}
+ if (!decap_flag)
+ return 0;
+
+ port = (struct nfp_flower_in_port *)(nfp_flow->payload.unmasked_data +
+ sizeof(struct nfp_fl_rule_metadata) +
+ sizeof(struct nfp_flower_meta_tci));
+
+ if (tmp->payload.v6_flag != 0) {
+ struct nfp_flower_cmsg_tun_neigh_v6 nn_v6;
+ memset(&nn_v6, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v6));
+ memcpy(nn_v6.dst_ipv6, tmp->payload.dst.dst_ipv6, sizeof(nn_v6.dst_ipv6));
+ memcpy(nn_v6.src_ipv6, tmp->payload.src.src_ipv6, sizeof(nn_v6.src_ipv6));
+ memcpy(nn_v6.common.dst_mac, tmp->payload.dst_addr, RTE_ETHER_ADDR_LEN);
+ memcpy(nn_v6.common.src_mac, tmp->payload.src_addr, RTE_ETHER_ADDR_LEN);
+ nn_v6.common.port_id = port->in_port;
+
+ ret = nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &nn_v6);
+ } else {
+ struct nfp_flower_cmsg_tun_neigh_v4 nn_v4;
+ memset(&nn_v4, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v4));
+ nn_v4.dst_ipv4 = tmp->payload.dst.dst_ipv4;
+ nn_v4.src_ipv4 = tmp->payload.src.src_ipv4;
+ memcpy(nn_v4.common.dst_mac, tmp->payload.dst_addr, RTE_ETHER_ADDR_LEN);
+ memcpy(nn_v4.common.src_mac, tmp->payload.src_addr, RTE_ETHER_ADDR_LEN);
+ nn_v4.common.port_id = port->in_port;
+
+ ret = nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &nn_v4);
+ }
+
+ if (ret != 0) {
+ PMD_DRV_LOG(DEBUG, "Failed to send the nn entry");
+ return -EINVAL;
+ }
+
return 0;
}
@@ -2895,12 +2969,14 @@ struct nfp_pre_tun_entry {
goto free_entry;
}
- ret = nfp_flower_cmsg_pre_tunnel_rule(repr->app_fw_flower, nfp_flow_meta,
- nfp_mac_idx, true);
- if (ret != 0) {
- PMD_DRV_LOG(ERR, "Send pre tunnel rule failed");
- ret = -EINVAL;
- goto free_entry;
+ if (!nfp_flower_support_decap_v2(repr->app_fw_flower)) {
+ ret = nfp_flower_cmsg_pre_tunnel_rule(repr->app_fw_flower, nfp_flow_meta,
+ nfp_mac_idx, true);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Send pre tunnel rule failed");
+ ret = -EINVAL;
+ goto free_entry;
+ }
}
find_entry->ref_cnt = 1U;
@@ -2951,18 +3027,20 @@ struct nfp_pre_tun_entry {
return -EINVAL;
}
- ret = nfp_flower_cmsg_pre_tunnel_rule(app_fw_flower, nfp_flow_meta,
- nfp_mac_idx, false);
- if (ret != 0) {
- PMD_DRV_LOG(ERR, "Send pre tunnel rule failed");
- return -EINVAL;
+ if (!nfp_flower_support_decap_v2(app_fw_flower)) {
+ ret = nfp_flower_cmsg_pre_tunnel_rule(app_fw_flower, nfp_flow_meta,
+ nfp_mac_idx, false);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Send pre tunnel rule failed");
+ return -EINVAL;
+ }
}
meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4)
- return nfp_flower_add_tun_neigh_v4_decap(app_fw_flower, nfp_flow);
+ return nfp_flower_add_tun_neigh_v4_decap(app_fw_flower, nfp_flow_meta, nfp_flow);
else
- return nfp_flower_add_tun_neigh_v6_decap(app_fw_flower, nfp_flow);
+ return nfp_flower_add_tun_neigh_v6_decap(app_fw_flower, nfp_flow_meta, nfp_flow);
}
static int
@@ -3662,11 +3740,11 @@ struct nfp_pre_tun_entry {
break;
case NFP_FLOW_ENCAP:
/* Delete the entry from nn table */
- ret = nfp_flower_del_tun_neigh(app_fw_flower, nfp_flow);
+ ret = nfp_flower_del_tun_neigh(app_fw_flower, nfp_flow, false);
break;
case NFP_FLOW_DECAP:
/* Delete the entry from nn table */
- ret = nfp_flower_del_tun_neigh(app_fw_flower, nfp_flow);
+ ret = nfp_flower_del_tun_neigh(app_fw_flower, nfp_flow, true);
if (ret != 0)
goto exit;
--
1.8.3.1
* Re: [PATCH 00/25] add the extend rte_flow offload support of nfp PMD
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (24 preceding siblings ...)
2022-10-18 3:22 ` [PATCH 25/25] net/nfp: add the support of new tunnel solution Chaoyong He
@ 2022-10-21 13:37 ` Ferruh Yigit
2022-10-21 13:39 ` Ferruh Yigit
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
26 siblings, 1 reply; 88+ messages in thread
From: Ferruh Yigit @ 2022-10-21 13:37 UTC (permalink / raw)
To: Chaoyong He, dev; +Cc: oss-drivers, niklas.soderlund
On 10/18/2022 4:22 AM, Chaoyong He wrote:
> This is the third patch series to add the support of rte_flow offload for
> nfp PMD, includes:
> Add the offload support of decap/encap of VXLAN
> Add the offload support of decap/encap of GENEVE
> Add the offload support of decap/encap of NVGRE
>
> Depends-on: series-25268 ("add the basic rte_flow offload support of nfp PMD")
>
> Chaoyong He (25):
> net/nfp: add the offload support of IPv4 VXLAN item
> net/nfp: add the offload support of IPv6 VXLAN item
> net/nfp: prepare for the encap action of IPv4 tunnel
> net/nfp: prepare for the encap action of IPv6 tunnel
> net/nfp: add the offload support of IPv4 VXLAN encap action
> net/nfp: add the offload support of IPv6 VXLAN encap action
> net/nfp: prepare for the decap action of IPv4 UDP tunnel
> net/nfp: prepare for the decap action of IPv6 UDP tunnel
> net/nfp: add the offload support of IPv4 VXLAN decap action
> net/nfp: add the offload support of IPv6 VXLAN decap action
> net/nfp: add the offload support of IPv4 GENEVE encap action
> net/nfp: add the offload support of IPv6 GENEVE encap action
> net/nfp: add the offload support of IPv4 GENEVE item
> net/nfp: add the offload support of IPv6 GENEVE item
> net/nfp: add the offload support of IPv4 GENEVE decap action
> net/nfp: add the offload support of IPv6 GENEVE decap action
> net/nfp: add the offload support of IPv4 NVGRE encap action
> net/nfp: add the offload support of IPv6 NVGRE encap action
> net/nfp: prepare for the decap action of IPv4 GRE tunnel
> net/nfp: prepare for the decap action of IPv6 GRE tunnel
> net/nfp: add the offload support of IPv4 NVGRE item
> net/nfp: add the offload support of IPv6 NVGRE item
> net/nfp: add the offload support of IPv4 NVGRE decap action
> net/nfp: add the offload support of IPv6 NVGRE decap action
> net/nfp: add the support of new tunnel solution
Can you please rebase the set on top of latest head?
Also please adjust the commit log/title in a consistent way with
previous set.
* Re: [PATCH 00/25] add the extend rte_flow offload support of nfp PMD
2022-10-21 13:37 ` [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Ferruh Yigit
@ 2022-10-21 13:39 ` Ferruh Yigit
0 siblings, 0 replies; 88+ messages in thread
From: Ferruh Yigit @ 2022-10-21 13:39 UTC (permalink / raw)
To: Chaoyong He, dev; +Cc: oss-drivers, niklas.soderlund
On 10/21/2022 2:37 PM, Ferruh Yigit wrote:
> On 10/18/2022 4:22 AM, Chaoyong He wrote:
>> This is the third patch series to add the support of rte_flow offload for
>> nfp PMD, includes:
>> Add the offload support of decap/encap of VXLAN
>> Add the offload support of decap/encap of GENEVE
>> Add the offload support of decap/encap of NVGRE
>>
>> Depends-on: series-25268 ("add the basic rte_flow offload support of
>> nfp PMD")
>>
>> Chaoyong He (25):
>> net/nfp: add the offload support of IPv4 VXLAN item
>> net/nfp: add the offload support of IPv6 VXLAN item
>> net/nfp: prepare for the encap action of IPv4 tunnel
>> net/nfp: prepare for the encap action of IPv6 tunnel
>> net/nfp: add the offload support of IPv4 VXLAN encap action
>> net/nfp: add the offload support of IPv6 VXLAN encap action
>> net/nfp: prepare for the decap action of IPv4 UDP tunnel
>> net/nfp: prepare for the decap action of IPv6 UDP tunnel
>> net/nfp: add the offload support of IPv4 VXLAN decap action
>> net/nfp: add the offload support of IPv6 VXLAN decap action
>> net/nfp: add the offload support of IPv4 GENEVE encap action
>> net/nfp: add the offload support of IPv6 GENEVE encap action
>> net/nfp: add the offload support of IPv4 GENEVE item
>> net/nfp: add the offload support of IPv6 GENEVE item
>> net/nfp: add the offload support of IPv4 GENEVE decap action
>> net/nfp: add the offload support of IPv6 GENEVE decap action
>> net/nfp: add the offload support of IPv4 NVGRE encap action
>> net/nfp: add the offload support of IPv6 NVGRE encap action
>> net/nfp: prepare for the decap action of IPv4 GRE tunnel
>> net/nfp: prepare for the decap action of IPv6 GRE tunnel
>> net/nfp: add the offload support of IPv4 NVGRE item
>> net/nfp: add the offload support of IPv6 NVGRE item
>> net/nfp: add the offload support of IPv4 NVGRE decap action
>> net/nfp: add the offload support of IPv6 NVGRE decap action
>> net/nfp: add the support of new tunnel solution
>
> Can you please rebase the set on top of latest head?
>
> Also please adjust the commit log/title in a consistent way with
> previous set.
Can you please run `./devtools/check-doc-vs-code.sh` script and be sure
it is not listing any mismatch in next version?
* [PATCH v2 00/25] add the extend rte_flow offload support of nfp PMD
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
` (25 preceding siblings ...)
2022-10-21 13:37 ` [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Ferruh Yigit
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 01/25] net/nfp: support IPv4 VXLAN flow item Chaoyong He
` (26 more replies)
26 siblings, 27 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
This is the third patch series to add the support of rte_flow offload for
nfp PMD, includes:
Add the offload support of decap/encap of VXLAN
Add the offload support of decap/encap of GENEVE
Add the offload support of decap/encap of NVGRE
Changes since v1
- Delete the modification of the release note.
- Modify the commit titles.
- Rebase to the latest logic.
Chaoyong He (25):
net/nfp: support IPv4 VXLAN flow item
net/nfp: support IPv6 VXLAN flow item
net/nfp: prepare for IPv4 tunnel encap flow action
net/nfp: prepare for IPv6 tunnel encap flow action
net/nfp: support IPv4 VXLAN encap flow action
net/nfp: support IPv6 VXLAN encap flow action
net/nfp: prepare for IPv4 UDP tunnel decap flow action
net/nfp: prepare for IPv6 UDP tunnel decap flow action
net/nfp: support IPv4 VXLAN decap flow action
net/nfp: support IPv6 VXLAN decap flow action
net/nfp: support IPv4 GENEVE encap flow action
net/nfp: support IPv6 GENEVE encap flow action
net/nfp: support IPv4 GENEVE flow item
net/nfp: support IPv6 GENEVE flow item
net/nfp: support IPv4 GENEVE decap flow action
net/nfp: support IPv6 GENEVE decap flow action
net/nfp: support IPv4 NVGRE encap flow action
net/nfp: support IPv6 NVGRE encap flow action
net/nfp: prepare for IPv4 GRE tunnel decap flow action
net/nfp: prepare for IPv6 GRE tunnel decap flow action
net/nfp: support IPv4 NVGRE flow item
net/nfp: support IPv6 NVGRE flow item
net/nfp: support IPv4 NVGRE decap flow action
net/nfp: support IPv6 NVGRE decap flow action
net/nfp: support new tunnel solution
doc/guides/nics/features/nfp.ini | 10 +
drivers/net/nfp/flower/nfp_flower.c | 14 +
drivers/net/nfp/flower/nfp_flower.h | 24 +
drivers/net/nfp/flower/nfp_flower_cmsg.c | 222 ++++
drivers/net/nfp/flower/nfp_flower_cmsg.h | 415 +++++++
drivers/net/nfp/nfp_flow.c | 2003 +++++++++++++++++++++++++++++-
drivers/net/nfp/nfp_flow.h | 56 +
7 files changed, 2675 insertions(+), 69 deletions(-)
--
1.8.3.1
* [PATCH v2 01/25] net/nfp: support IPv4 VXLAN flow item
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 02/25] net/nfp: support IPv6 " Chaoyong He
` (25 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the corresponding data structures and logic to support
the offload of the IPv4 VXLAN item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 1 +
drivers/net/nfp/flower/nfp_flower_cmsg.h | 35 +++++
drivers/net/nfp/nfp_flow.c | 243 ++++++++++++++++++++++++++-----
3 files changed, 246 insertions(+), 33 deletions(-)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index 0184980..faaa7da 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -35,6 +35,7 @@ sctp = Y
tcp = Y
udp = Y
vlan = Y
+vxlan = Y
[rte_flow actions]
count = Y
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index 6bf8ff7..08e2873 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -324,6 +324,41 @@ struct nfp_flower_ipv6 {
uint8_t ipv6_dst[16];
};
+struct nfp_flower_tun_ipv4 {
+ rte_be32_t src;
+ rte_be32_t dst;
+};
+
+struct nfp_flower_tun_ip_ext {
+ uint8_t tos;
+ uint8_t ttl;
+};
+
+/*
+ * Flow Frame IPv4 UDP TUNNEL --> Tunnel details (5W/20B)
+ * -----------------------------------------------------------------
+ * 3 2 1
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv4_addr_src |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv4_addr_dst |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved | tos | ttl |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | VNI | Reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ */
+struct nfp_flower_ipv4_udp_tun {
+ struct nfp_flower_tun_ipv4 ipv4;
+ rte_be16_t reserved1;
+ struct nfp_flower_tun_ip_ext ip_ext;
+ rte_be32_t reserved2;
+ rte_be32_t tun_id;
+};
+
struct nfp_fl_act_head {
uint8_t jump_id;
uint8_t len_lw;
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 69fc8be..0e1e5ea 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -38,7 +38,8 @@ struct nfp_flow_item_proc {
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask);
+ bool is_mask,
+ bool is_outer_layer);
/* List of possible subsequent items. */
const enum rte_flow_item_type *const next_item;
};
@@ -491,6 +492,7 @@ struct nfp_mask_id_entry {
struct nfp_fl_key_ls *key_ls)
{
struct rte_eth_dev *ethdev;
+ bool outer_ip4_flag = false;
const struct rte_flow_item *item;
struct nfp_flower_representor *representor;
const struct rte_flow_item_port_id *port_id;
@@ -526,6 +528,8 @@ struct nfp_mask_id_entry {
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_IPV4 detected");
key_ls->key_layer |= NFP_FLOWER_LAYER_IPV4;
key_ls->key_size += sizeof(struct nfp_flower_ipv4);
+ if (!outer_ip4_flag)
+ outer_ip4_flag = true;
break;
case RTE_FLOW_ITEM_TYPE_IPV6:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_IPV6 detected");
@@ -547,6 +551,21 @@ struct nfp_mask_id_entry {
key_ls->key_layer |= NFP_FLOWER_LAYER_TP;
key_ls->key_size += sizeof(struct nfp_flower_tp_ports);
break;
+ case RTE_FLOW_ITEM_TYPE_VXLAN:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_VXLAN detected");
+ /* Clear IPv4 bits */
+ key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV4;
+ key_ls->tun_type = NFP_FL_TUN_VXLAN;
+ key_ls->key_layer |= NFP_FLOWER_LAYER_VXLAN;
+ if (outer_ip4_flag) {
+ key_ls->key_size += sizeof(struct nfp_flower_ipv4_udp_tun);
+ /*
+ * The outer l3 layer information is
+ * in `struct nfp_flower_ipv4_udp_tun`
+ */
+ key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
+ }
+ break;
default:
PMD_DRV_LOG(ERR, "Item type %d not supported.", item->type);
return -ENOTSUP;
@@ -719,12 +738,25 @@ struct nfp_mask_id_entry {
return ret;
}
+static bool
+nfp_flow_is_tunnel(struct rte_flow *nfp_flow)
+{
+ struct nfp_flower_meta_tci *meta_tci;
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_VXLAN)
+ return true;
+
+ return false;
+}
+
static int
nfp_flow_merge_eth(__rte_unused struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask)
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
{
struct nfp_flower_mac_mpls *eth;
const struct rte_flow_item_eth *spec;
@@ -760,7 +792,8 @@ struct nfp_mask_id_entry {
__rte_unused char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask)
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
{
struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item_vlan *spec;
@@ -789,41 +822,58 @@ struct nfp_mask_id_entry {
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask)
+ bool is_mask,
+ bool is_outer_layer)
{
struct nfp_flower_ipv4 *ipv4;
const struct rte_ipv4_hdr *hdr;
struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item_ipv4 *spec;
const struct rte_flow_item_ipv4 *mask;
+ struct nfp_flower_ipv4_udp_tun *ipv4_udp_tun;
spec = item->spec;
mask = item->mask ? item->mask : proc->mask_default;
meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
- if (spec == NULL) {
- PMD_DRV_LOG(DEBUG, "nfp flow merge ipv4: no item->spec!");
- goto ipv4_end;
- }
+ if (is_outer_layer && nfp_flow_is_tunnel(nfp_flow)) {
+ if (spec == NULL) {
+ PMD_DRV_LOG(DEBUG, "nfp flow merge ipv4: no item->spec!");
+ return 0;
+ }
- /*
- * reserve space for L4 info.
- * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4
- */
- if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
- *mbuf_off += sizeof(struct nfp_flower_tp_ports);
+ hdr = is_mask ? &mask->hdr : &spec->hdr;
+ ipv4_udp_tun = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
- hdr = is_mask ? &mask->hdr : &spec->hdr;
- ipv4 = (struct nfp_flower_ipv4 *)*mbuf_off;
+ ipv4_udp_tun->ip_ext.tos = hdr->type_of_service;
+ ipv4_udp_tun->ip_ext.ttl = hdr->time_to_live;
+ ipv4_udp_tun->ipv4.src = hdr->src_addr;
+ ipv4_udp_tun->ipv4.dst = hdr->dst_addr;
+ } else {
+ if (spec == NULL) {
+ PMD_DRV_LOG(DEBUG, "nfp flow merge ipv4: no item->spec!");
+ goto ipv4_end;
+ }
+
+ /*
+ * reserve space for L4 info.
+ * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4
+ */
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+ *mbuf_off += sizeof(struct nfp_flower_tp_ports);
+
+ hdr = is_mask ? &mask->hdr : &spec->hdr;
+ ipv4 = (struct nfp_flower_ipv4 *)*mbuf_off;
- ipv4->ip_ext.tos = hdr->type_of_service;
- ipv4->ip_ext.proto = hdr->next_proto_id;
- ipv4->ip_ext.ttl = hdr->time_to_live;
- ipv4->ipv4_src = hdr->src_addr;
- ipv4->ipv4_dst = hdr->dst_addr;
+ ipv4->ip_ext.tos = hdr->type_of_service;
+ ipv4->ip_ext.proto = hdr->next_proto_id;
+ ipv4->ip_ext.ttl = hdr->time_to_live;
+ ipv4->ipv4_src = hdr->src_addr;
+ ipv4->ipv4_dst = hdr->dst_addr;
ipv4_end:
- *mbuf_off += sizeof(struct nfp_flower_ipv4);
+ *mbuf_off += sizeof(struct nfp_flower_ipv4);
+ }
return 0;
}
@@ -833,7 +883,8 @@ struct nfp_mask_id_entry {
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask)
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
{
struct nfp_flower_ipv6 *ipv6;
const struct rte_ipv6_hdr *hdr;
@@ -878,7 +929,8 @@ struct nfp_mask_id_entry {
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask)
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
{
uint8_t tcp_flags;
struct nfp_flower_tp_ports *ports;
@@ -950,7 +1002,8 @@ struct nfp_mask_id_entry {
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask)
+ bool is_mask,
+ bool is_outer_layer)
{
char *ports_off;
struct nfp_flower_tp_ports *ports;
@@ -964,6 +1017,12 @@ struct nfp_mask_id_entry {
return 0;
}
+ /* Don't add L4 info if working on an inner layer pattern */
+ if (!is_outer_layer) {
+ PMD_DRV_LOG(INFO, "Detected inner layer UDP, skipping.");
+ return 0;
+ }
+
meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) {
ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) -
@@ -991,7 +1050,8 @@ struct nfp_mask_id_entry {
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask)
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
{
char *ports_off;
struct nfp_flower_tp_ports *ports;
@@ -1027,10 +1087,42 @@ struct nfp_mask_id_entry {
return 0;
}
+static int
+nfp_flow_merge_vxlan(__rte_unused struct rte_flow *nfp_flow,
+ char **mbuf_off,
+ const struct rte_flow_item *item,
+ const struct nfp_flow_item_proc *proc,
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
+{
+ const struct rte_vxlan_hdr *hdr;
+ struct nfp_flower_ipv4_udp_tun *tun4;
+ const struct rte_flow_item_vxlan *spec;
+ const struct rte_flow_item_vxlan *mask;
+
+ spec = item->spec;
+ if (spec == NULL) {
+ PMD_DRV_LOG(DEBUG, "nfp flow merge vxlan: no item->spec!");
+ goto vxlan_end;
+ }
+
+ mask = item->mask ? item->mask : proc->mask_default;
+ hdr = is_mask ? &mask->hdr : &spec->hdr;
+
+ tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
+ tun4->tun_id = hdr->vx_vni;
+
+vxlan_end:
+ *mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
+
+ return 0;
+}
+
/* Graph of supported items and associated process function */
static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
[RTE_FLOW_ITEM_TYPE_END] = {
- .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH),
+ .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_IPV4),
},
[RTE_FLOW_ITEM_TYPE_ETH] = {
.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VLAN,
@@ -1113,6 +1205,7 @@ struct nfp_mask_id_entry {
.merge = nfp_flow_merge_tcp,
},
[RTE_FLOW_ITEM_TYPE_UDP] = {
+ .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VXLAN),
.mask_support = &(const struct rte_flow_item_udp){
.hdr = {
.src_port = RTE_BE16(0xffff),
@@ -1134,6 +1227,17 @@ struct nfp_mask_id_entry {
.mask_sz = sizeof(struct rte_flow_item_sctp),
.merge = nfp_flow_merge_sctp,
},
+ [RTE_FLOW_ITEM_TYPE_VXLAN] = {
+ .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH),
+ .mask_support = &(const struct rte_flow_item_vxlan){
+ .hdr = {
+ .vx_vni = RTE_BE32(0xffffff00),
+ },
+ },
+ .mask_default = &rte_flow_item_vxlan_mask,
+ .mask_sz = sizeof(struct rte_flow_item_vxlan),
+ .merge = nfp_flow_merge_vxlan,
+ },
};
static int
@@ -1187,21 +1291,53 @@ struct nfp_mask_id_entry {
return ret;
}
+static bool
+nfp_flow_is_tun_item(const struct rte_flow_item *item)
+{
+ if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
+ return true;
+
+ return false;
+}
+
+static bool
+nfp_flow_inner_item_get(const struct rte_flow_item items[],
+ const struct rte_flow_item **inner_item)
+{
+ const struct rte_flow_item *item;
+
+ *inner_item = items;
+
+ for (item = items; item->type != RTE_FLOW_ITEM_TYPE_END; ++item) {
+ if (nfp_flow_is_tun_item(item)) {
+ *inner_item = ++item;
+ return true;
+ }
+ }
+
+ return false;
+}
+
static int
nfp_flow_compile_item_proc(const struct rte_flow_item items[],
struct rte_flow *nfp_flow,
char **mbuf_off_exact,
- char **mbuf_off_mask)
+ char **mbuf_off_mask,
+ bool is_outer_layer)
{
int i;
int ret = 0;
+ bool continue_flag = true;
const struct rte_flow_item *item;
const struct nfp_flow_item_proc *proc_list;
proc_list = nfp_flow_item_proc_list;
- for (item = items; item->type != RTE_FLOW_ITEM_TYPE_END; ++item) {
+ for (item = items; item->type != RTE_FLOW_ITEM_TYPE_END && continue_flag; ++item) {
const struct nfp_flow_item_proc *proc = NULL;
+ if (nfp_flow_is_tun_item(item))
+ continue_flag = false;
+
for (i = 0; proc_list->next_item && proc_list->next_item[i]; ++i) {
if (proc_list->next_item[i] == item->type) {
proc = &nfp_flow_item_proc_list[item->type];
@@ -1230,14 +1366,14 @@ struct nfp_mask_id_entry {
}
ret = proc->merge(nfp_flow, mbuf_off_exact, item,
- proc, false);
+ proc, false, is_outer_layer);
if (ret != 0) {
PMD_DRV_LOG(ERR, "nfp flow item %d exact merge failed", item->type);
break;
}
ret = proc->merge(nfp_flow, mbuf_off_mask, item,
- proc, true);
+ proc, true, is_outer_layer);
if (ret != 0) {
PMD_DRV_LOG(ERR, "nfp flow item %d mask merge failed", item->type);
break;
@@ -1257,6 +1393,9 @@ struct nfp_mask_id_entry {
int ret;
char *mbuf_off_mask;
char *mbuf_off_exact;
+ bool is_tun_flow = false;
+ bool is_outer_layer = true;
+ const struct rte_flow_item *loop_item;
mbuf_off_exact = nfp_flow->payload.unmasked_data +
sizeof(struct nfp_flower_meta_tci) +
@@ -1265,14 +1404,29 @@ struct nfp_mask_id_entry {
sizeof(struct nfp_flower_meta_tci) +
sizeof(struct nfp_flower_in_port);
+ /* Check if this is a tunnel flow and get the inner item */
+ is_tun_flow = nfp_flow_inner_item_get(items, &loop_item);
+ if (is_tun_flow)
+ is_outer_layer = false;
+
/* Go over items */
- ret = nfp_flow_compile_item_proc(items, nfp_flow,
- &mbuf_off_exact, &mbuf_off_mask);
+ ret = nfp_flow_compile_item_proc(loop_item, nfp_flow,
+ &mbuf_off_exact, &mbuf_off_mask, is_outer_layer);
if (ret != 0) {
PMD_DRV_LOG(ERR, "nfp flow item compile failed.");
return -EINVAL;
}
+ /* Go over outer items */
+ if (is_tun_flow) {
+ ret = nfp_flow_compile_item_proc(items, nfp_flow,
+ &mbuf_off_exact, &mbuf_off_mask, true);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "nfp flow outer item compile failed.");
+ return -EINVAL;
+ }
+ }
+
return 0;
}
@@ -2119,12 +2273,35 @@ struct nfp_mask_id_entry {
return 0;
}
+static int
+nfp_flow_tunnel_match(__rte_unused struct rte_eth_dev *dev,
+ __rte_unused struct rte_flow_tunnel *tunnel,
+ __rte_unused struct rte_flow_item **pmd_items,
+ uint32_t *num_of_items,
+ __rte_unused struct rte_flow_error *err)
+{
+ *num_of_items = 0;
+
+ return 0;
+}
+
+static int
+nfp_flow_tunnel_item_release(__rte_unused struct rte_eth_dev *dev,
+ __rte_unused struct rte_flow_item *pmd_items,
+ __rte_unused uint32_t num_of_items,
+ __rte_unused struct rte_flow_error *err)
+{
+ return 0;
+}
+
static const struct rte_flow_ops nfp_flow_ops = {
.validate = nfp_flow_validate,
.create = nfp_flow_create,
.destroy = nfp_flow_destroy,
.flush = nfp_flow_flush,
.query = nfp_flow_query,
+ .tunnel_match = nfp_flow_tunnel_match,
+ .tunnel_item_release = nfp_flow_tunnel_item_release,
};
int
--
1.8.3.1
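[Editorial note] The mask `RTE_BE32(0xffffff00)` given for `vx_vni` in the patch above reflects the VXLAN header layout: the 24-bit VNI occupies the upper three bytes of the big-endian word, with the low byte reserved. A minimal, self-contained sketch of that extraction (an illustration only; the helper name is not from the patch):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Extract the 24-bit VXLAN VNI from the wire-format vx_vni word,
 * mirroring the mask RTE_BE32(0xffffff00): bytes 0..2 carry the VNI,
 * byte 3 is reserved. Helper name is illustrative, not from the patch.
 */
static uint32_t
vxlan_vni_from_wire(const uint8_t vx_vni[4])
{
	return ((uint32_t)vx_vni[0] << 16) |
	       ((uint32_t)vx_vni[1] << 8) |
	        (uint32_t)vx_vni[2];
}
```

With VNI 0x123456 on the wire the bytes are `12 34 56 00`, which is why the mask keeps the top three bytes and clears the reserved one.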
* [PATCH v2 02/25] net/nfp: support IPv6 VXLAN flow item
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
2022-10-22 8:24 ` [PATCH v2 01/25] net/nfp: support IPv4 VXLAN flow item Chaoyong He
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 03/25] net/nfp: prepare for IPv4 tunnel encap flow action Chaoyong He
` (24 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the corresponding data structures and logic to support
offloading of the IPv6 VXLAN item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_cmsg.h | 42 ++++++++++++
drivers/net/nfp/nfp_flow.c | 113 ++++++++++++++++++++++++-------
2 files changed, 129 insertions(+), 26 deletions(-)
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index 08e2873..996ba3b 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -329,6 +329,11 @@ struct nfp_flower_tun_ipv4 {
rte_be32_t dst;
};
+struct nfp_flower_tun_ipv6 {
+ uint8_t ipv6_src[16];
+ uint8_t ipv6_dst[16];
+};
+
struct nfp_flower_tun_ip_ext {
uint8_t tos;
uint8_t ttl;
@@ -359,6 +364,43 @@ struct nfp_flower_ipv4_udp_tun {
rte_be32_t tun_id;
};
+/*
+ * Flow Frame IPv6 UDP TUNNEL --> Tunnel details (11W/44B)
+ * -----------------------------------------------------------------
+ * 3 2 1
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 31 - 0 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 63 - 32 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 95 - 64 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 127 - 96 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 31 - 0 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 63 - 32 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 95 - 64 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 127 - 96 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved | tos | ttl |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | VNI | Reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ */
+struct nfp_flower_ipv6_udp_tun {
+ struct nfp_flower_tun_ipv6 ipv6;
+ rte_be16_t reserved1;
+ struct nfp_flower_tun_ip_ext ip_ext;
+ rte_be32_t reserved2;
+ rte_be32_t tun_id;
+};
+
struct nfp_fl_act_head {
uint8_t jump_id;
uint8_t len_lw;
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 0e1e5ea..bbd9dba 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -493,6 +493,7 @@ struct nfp_mask_id_entry {
{
struct rte_eth_dev *ethdev;
bool outer_ip4_flag = false;
+ bool outer_ip6_flag = false;
const struct rte_flow_item *item;
struct nfp_flower_representor *representor;
const struct rte_flow_item_port_id *port_id;
@@ -535,6 +536,8 @@ struct nfp_mask_id_entry {
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_IPV6 detected");
key_ls->key_layer |= NFP_FLOWER_LAYER_IPV6;
key_ls->key_size += sizeof(struct nfp_flower_ipv6);
+ if (!outer_ip6_flag)
+ outer_ip6_flag = true;
break;
case RTE_FLOW_ITEM_TYPE_TCP:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_TCP detected");
@@ -553,8 +556,9 @@ struct nfp_mask_id_entry {
break;
case RTE_FLOW_ITEM_TYPE_VXLAN:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_VXLAN detected");
- /* Clear IPv4 bits */
+ /* Clear IPv4 and IPv6 bits */
key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV4;
+ key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV6;
key_ls->tun_type = NFP_FL_TUN_VXLAN;
key_ls->key_layer |= NFP_FLOWER_LAYER_VXLAN;
if (outer_ip4_flag) {
@@ -564,6 +568,19 @@ struct nfp_mask_id_entry {
* in `struct nfp_flower_ipv4_udp_tun`
*/
key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
+ } else if (outer_ip6_flag) {
+ key_ls->key_layer |= NFP_FLOWER_LAYER_EXT_META;
+ key_ls->key_layer_two |= NFP_FLOWER_LAYER2_TUN_IPV6;
+ key_ls->key_size += sizeof(struct nfp_flower_ext_meta);
+ key_ls->key_size += sizeof(struct nfp_flower_ipv6_udp_tun);
+ /*
+ * The outer l3 layer information is
+ * in `struct nfp_flower_ipv6_udp_tun`
+ */
+ key_ls->key_size -= sizeof(struct nfp_flower_ipv6);
+ } else {
+ PMD_DRV_LOG(ERR, "No outer IP layer for VXLAN tunnel.");
+ return -EINVAL;
}
break;
default:
@@ -884,42 +901,61 @@ struct nfp_mask_id_entry {
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
bool is_mask,
- __rte_unused bool is_outer_layer)
+ bool is_outer_layer)
{
struct nfp_flower_ipv6 *ipv6;
const struct rte_ipv6_hdr *hdr;
struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item_ipv6 *spec;
const struct rte_flow_item_ipv6 *mask;
+ struct nfp_flower_ipv6_udp_tun *ipv6_udp_tun;
spec = item->spec;
mask = item->mask ? item->mask : proc->mask_default;
meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
- if (spec == NULL) {
- PMD_DRV_LOG(DEBUG, "nfp flow merge ipv6: no item->spec!");
- goto ipv6_end;
- }
+ if (is_outer_layer && nfp_flow_is_tunnel(nfp_flow)) {
+ if (spec == NULL) {
+ PMD_DRV_LOG(DEBUG, "nfp flow merge ipv6: no item->spec!");
+ return 0;
+ }
- /*
- * reserve space for L4 info.
- * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4
- */
- if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
- *mbuf_off += sizeof(struct nfp_flower_tp_ports);
+ hdr = is_mask ? &mask->hdr : &spec->hdr;
+ ipv6_udp_tun = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
+
+ ipv6_udp_tun->ip_ext.tos = (hdr->vtc_flow &
+ RTE_IPV6_HDR_TC_MASK) >> RTE_IPV6_HDR_TC_SHIFT;
+ ipv6_udp_tun->ip_ext.ttl = hdr->hop_limits;
+ memcpy(ipv6_udp_tun->ipv6.ipv6_src, hdr->src_addr,
+ sizeof(ipv6_udp_tun->ipv6.ipv6_src));
+ memcpy(ipv6_udp_tun->ipv6.ipv6_dst, hdr->dst_addr,
+ sizeof(ipv6_udp_tun->ipv6.ipv6_dst));
+ } else {
+ if (spec == NULL) {
+ PMD_DRV_LOG(DEBUG, "nfp flow merge ipv6: no item->spec!");
+ goto ipv6_end;
+ }
- hdr = is_mask ? &mask->hdr : &spec->hdr;
- ipv6 = (struct nfp_flower_ipv6 *)*mbuf_off;
+ /*
+ * reserve space for L4 info.
+ * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv6
+ */
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+ *mbuf_off += sizeof(struct nfp_flower_tp_ports);
+
+ hdr = is_mask ? &mask->hdr : &spec->hdr;
+ ipv6 = (struct nfp_flower_ipv6 *)*mbuf_off;
- ipv6->ip_ext.tos = (hdr->vtc_flow & RTE_IPV6_HDR_TC_MASK) >>
- RTE_IPV6_HDR_TC_SHIFT;
- ipv6->ip_ext.proto = hdr->proto;
- ipv6->ip_ext.ttl = hdr->hop_limits;
- memcpy(ipv6->ipv6_src, hdr->src_addr, sizeof(ipv6->ipv6_src));
- memcpy(ipv6->ipv6_dst, hdr->dst_addr, sizeof(ipv6->ipv6_dst));
+ ipv6->ip_ext.tos = (hdr->vtc_flow & RTE_IPV6_HDR_TC_MASK) >>
+ RTE_IPV6_HDR_TC_SHIFT;
+ ipv6->ip_ext.proto = hdr->proto;
+ ipv6->ip_ext.ttl = hdr->hop_limits;
+ memcpy(ipv6->ipv6_src, hdr->src_addr, sizeof(ipv6->ipv6_src));
+ memcpy(ipv6->ipv6_dst, hdr->dst_addr, sizeof(ipv6->ipv6_dst));
ipv6_end:
- *mbuf_off += sizeof(struct nfp_flower_ipv6);
+ *mbuf_off += sizeof(struct nfp_flower_ipv6);
+ }
return 0;
}
@@ -1088,7 +1124,7 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_merge_vxlan(__rte_unused struct rte_flow *nfp_flow,
+nfp_flow_merge_vxlan(struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -1097,8 +1133,15 @@ struct nfp_mask_id_entry {
{
const struct rte_vxlan_hdr *hdr;
struct nfp_flower_ipv4_udp_tun *tun4;
+ struct nfp_flower_ipv6_udp_tun *tun6;
+ struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item_vxlan *spec;
const struct rte_flow_item_vxlan *mask;
+ struct nfp_flower_ext_meta *ext_meta = NULL;
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
spec = item->spec;
if (spec == NULL) {
@@ -1109,11 +1152,21 @@ struct nfp_mask_id_entry {
mask = item->mask ? item->mask : proc->mask_default;
hdr = is_mask ? &mask->hdr : &spec->hdr;
- tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
- tun4->tun_id = hdr->vx_vni;
+ if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_TUN_IPV6)) {
+ tun6 = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
+ tun6->tun_id = hdr->vx_vni;
+ } else {
+ tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
+ tun4->tun_id = hdr->vx_vni;
+ }
vxlan_end:
- *mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
+ if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_TUN_IPV6))
+ *mbuf_off += sizeof(struct nfp_flower_ipv6_udp_tun);
+ else
+ *mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
return 0;
}
@@ -1122,7 +1175,8 @@ struct nfp_mask_id_entry {
static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
[RTE_FLOW_ITEM_TYPE_END] = {
.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4),
+ RTE_FLOW_ITEM_TYPE_IPV4,
+ RTE_FLOW_ITEM_TYPE_IPV6),
},
[RTE_FLOW_ITEM_TYPE_ETH] = {
.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VLAN,
@@ -1395,6 +1449,7 @@ struct nfp_mask_id_entry {
char *mbuf_off_exact;
bool is_tun_flow = false;
bool is_outer_layer = true;
+ struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item *loop_item;
mbuf_off_exact = nfp_flow->payload.unmasked_data +
@@ -1404,6 +1459,12 @@ struct nfp_mask_id_entry {
sizeof(struct nfp_flower_meta_tci) +
sizeof(struct nfp_flower_in_port);
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) {
+ mbuf_off_exact += sizeof(struct nfp_flower_ext_meta);
+ mbuf_off_mask += sizeof(struct nfp_flower_ext_meta);
+ }
+
	/* Check if this is a tunnel flow and get the inner item */
is_tun_flow = nfp_flow_inner_item_get(items, &loop_item);
if (is_tun_flow)
--
1.8.3.1
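[Editorial note] The `tos` written by `nfp_flow_merge_ipv6()` above is the IPv6 traffic class, carved out of `vtc_flow` with `RTE_IPV6_HDR_TC_MASK` and `RTE_IPV6_HDR_TC_SHIFT`. A small sketch of that extraction, assuming a host-order `vtc_flow` word laid out as version(4) | traffic class(8) | flow label(20); the macro values match DPDK's 0x0ff00000 and 20, but the helper name is illustrative:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the traffic-class extraction in nfp_flow_merge_ipv6(),
 * assuming a host-order vtc_flow word:
 *   version(4 bits) | traffic class(8 bits) | flow label(20 bits).
 * Mask/shift mirror RTE_IPV6_HDR_TC_MASK / RTE_IPV6_HDR_TC_SHIFT.
 */
#define IPV6_HDR_TC_MASK  0x0ff00000u
#define IPV6_HDR_TC_SHIFT 20

static uint8_t
ipv6_tos_from_vtc_flow(uint32_t vtc_flow)
{
	return (uint8_t)((vtc_flow & IPV6_HDR_TC_MASK) >> IPV6_HDR_TC_SHIFT);
}
```

For example, `vtc_flow = 0x6AB12345` decodes as version 6, traffic class 0xAB, flow label 0x12345.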
* [PATCH v2 03/25] net/nfp: prepare for IPv4 tunnel encap flow action
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
2022-10-22 8:24 ` [PATCH v2 01/25] net/nfp: support IPv4 VXLAN flow item Chaoyong He
2022-10-22 8:24 ` [PATCH v2 02/25] net/nfp: support IPv6 " Chaoyong He
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 04/25] net/nfp: prepare for IPv6 " Chaoyong He
` (23 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the related data structures and functions in preparation
for the encap action of IPv4 tunnels.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_cmsg.c | 29 ++++++++++
drivers/net/nfp/flower/nfp_flower_cmsg.h | 93 ++++++++++++++++++++++++++++++++
drivers/net/nfp/nfp_flow.c | 88 ++++++++++++++++++++++++++++++
drivers/net/nfp/nfp_flow.h | 27 ++++++++++
4 files changed, 237 insertions(+)
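[Editorial note] `nfp_flow_set_tun_process()` in this patch packs the tunnel type and the pre-tunnel table index into one word as `((tun_type << 4) & 0xf0) | (pretun_idx & 0x07)`. A minimal sketch of that packing (the helper name is illustrative, not from the patch; only one pre-tunnel is supported, so the index is 0 in practice):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of the tun_type_index packing in nfp_flow_set_tun_process():
 * tunnel type in bits 4..7, pre-tunnel table index in bits 0..2.
 * Helper name is illustrative, not from the patch.
 */
static uint32_t
nfp_tun_type_index(uint8_t tun_type, uint8_t pretun_idx)
{
	return (uint32_t)(((tun_type << 4) & 0xf0) | (pretun_idx & 0x07));
}
```

The byte-swap to big-endian (`rte_cpu_to_be_32()`) happens afterwards in the patch, when the value is stored into `set_tun->tun_type_index`.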
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c
index 15d8381..7021d1f 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.c
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c
@@ -246,3 +246,32 @@
return 0;
}
+
+int
+nfp_flower_cmsg_tun_neigh_v4_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_flower_cmsg_tun_neigh_v4 *payload)
+{
+ uint16_t cnt;
+ size_t msg_len;
+ struct rte_mbuf *mbuf;
+ struct nfp_flower_cmsg_tun_neigh_v4 *msg;
+
+ mbuf = rte_pktmbuf_alloc(app_fw_flower->ctrl_pktmbuf_pool);
+ if (mbuf == NULL) {
+ PMD_DRV_LOG(DEBUG, "Failed to alloc mbuf for v4 tun neigh");
+ return -ENOMEM;
+ }
+
+ msg_len = sizeof(struct nfp_flower_cmsg_tun_neigh_v4);
+ msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_TUN_NEIGH, msg_len);
+ memcpy(msg, payload, msg_len);
+
+ cnt = nfp_flower_ctrl_vnic_xmit(app_fw_flower, mbuf);
+ if (cnt == 0) {
+ PMD_DRV_LOG(ERR, "Send cmsg through ctrl vnic failed.");
+ rte_pktmbuf_free(mbuf);
+ return -EIO;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index 996ba3b..e44e311 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -129,6 +129,36 @@ struct nfp_flower_cmsg_port_mod {
rte_be16_t mtu;
};
+struct nfp_flower_tun_neigh {
+ uint8_t dst_mac[RTE_ETHER_ADDR_LEN];
+ uint8_t src_mac[RTE_ETHER_ADDR_LEN];
+ rte_be32_t port_id;
+};
+
+/*
+ * NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V4
+ * Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
+ * -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +---------------------------------------------------------------+
+ * 0 | DST_IPV4 |
+ * +---------------------------------------------------------------+
+ * 1 | SRC_IPV4 |
+ * +---------------------------------------------------------------+
+ * 2 | DST_MAC_B5_B4_B3_B2 |
+ * +-------------------------------+-------------------------------+
+ * 3 | DST_MAC_B1_B0 | SRC_MAC_B5_B4 |
+ * +-------------------------------+-------------------------------+
+ * 4 | SRC_MAC_B3_B2_B1_B0 |
+ * +---------------------------------------------------------------+
+ * 5 | Egress Port (NFP internal) |
+ * +---------------------------------------------------------------+
+ */
+struct nfp_flower_cmsg_tun_neigh_v4 {
+ rte_be32_t dst_ipv4;
+ rte_be32_t src_ipv4;
+ struct nfp_flower_tun_neigh common;
+};
+
/*
* NFP_FLOWER_CMSG_TYPE_FLOW_STATS
* Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
@@ -574,6 +604,67 @@ struct nfp_fl_act_set_tport {
rte_be16_t dst_port;
};
+/*
+ * Pre-tunnel
+ * 3 2 1
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | - | opcode | |jump_id| - |M| - |V|
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_daddr_127_96 / ipv4_daddr |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_daddr_95_64 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_daddr_63_32 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_daddr_31_0 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ */
+struct nfp_fl_act_pre_tun {
+ struct nfp_fl_act_head head;
+ rte_be16_t flags;
+ union {
+ rte_be32_t ipv4_dst;
+ uint8_t ipv6_dst[16];
+ };
+};
+
+/*
+ * Set tunnel
+ * 3 2 1
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | res | opcode | res | len_lw| reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | tun_id0 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | tun_id1 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | reserved | type |r| idx |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv4_flags | ttl | tos |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | reserved_cvs1 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | reserved_cvs2 | reserved_cvs3 |
+ * | var_flags | var_np |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ */
+struct nfp_fl_act_set_tun {
+ struct nfp_fl_act_head head;
+ rte_be16_t reserved;
+ rte_be64_t tun_id;
+ rte_be32_t tun_type_index;
+ rte_be16_t tun_flags;
+ uint8_t ttl;
+ uint8_t tos;
+ rte_be16_t outer_vlan_tpid;
+ rte_be16_t outer_vlan_tci;
+ uint8_t tun_len; /* Only valid for NFP_FL_TUNNEL_GENEVE */
+ uint8_t reserved2;
+ rte_be16_t tun_proto; /* Only valid for NFP_FL_TUNNEL_GENEVE */
+} __rte_packed;
+
int nfp_flower_cmsg_mac_repr(struct nfp_app_fw_flower *app_fw_flower);
int nfp_flower_cmsg_repr_reify(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_flower_representor *repr);
@@ -583,5 +674,7 @@ int nfp_flower_cmsg_flow_delete(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *flow);
int nfp_flower_cmsg_flow_add(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *flow);
+int nfp_flower_cmsg_tun_neigh_v4_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_flower_cmsg_tun_neigh_v4 *payload);
#endif /* _NFP_CMSG_H_ */
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index bbd9dba..f71f8b1 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1772,6 +1772,91 @@ struct nfp_mask_id_entry {
tc_hl->reserved = 0;
}
+__rte_unused static void
+nfp_flow_pre_tun_v4_process(struct nfp_fl_act_pre_tun *pre_tun,
+ rte_be32_t ipv4_dst)
+{
+ pre_tun->head.jump_id = NFP_FL_ACTION_OPCODE_PRE_TUNNEL;
+ pre_tun->head.len_lw = sizeof(struct nfp_fl_act_pre_tun) >> NFP_FL_LW_SIZ;
+ pre_tun->ipv4_dst = ipv4_dst;
+}
+
+__rte_unused static void
+nfp_flow_set_tun_process(struct nfp_fl_act_set_tun *set_tun,
+ enum nfp_flower_tun_type tun_type,
+ uint64_t tun_id,
+ uint8_t ttl,
+ uint8_t tos)
+{
+ /* Currently only support one pre-tunnel, so index is always 0. */
+ uint8_t pretun_idx = 0;
+ uint32_t tun_type_index;
+
+ tun_type_index = ((tun_type << 4) & 0xf0) | (pretun_idx & 0x07);
+
+ set_tun->head.jump_id = NFP_FL_ACTION_OPCODE_SET_TUNNEL;
+ set_tun->head.len_lw = sizeof(struct nfp_fl_act_set_tun) >> NFP_FL_LW_SIZ;
+ set_tun->tun_type_index = rte_cpu_to_be_32(tun_type_index);
+ set_tun->tun_id = rte_cpu_to_be_64(tun_id);
+ set_tun->ttl = ttl;
+ set_tun->tos = tos;
+}
+
+__rte_unused static int
+nfp_flower_add_tun_neigh_v4_encap(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun,
+ const struct rte_ether_hdr *eth,
+ const struct rte_flow_item_ipv4 *ipv4)
+{
+ struct nfp_fl_tun *tmp;
+ struct nfp_flow_priv *priv;
+ struct nfp_flower_in_port *port;
+ struct nfp_flower_cmsg_tun_neigh_v4 payload;
+
+ tun->payload.v6_flag = 0;
+ tun->payload.dst.dst_ipv4 = ipv4->hdr.dst_addr;
+ tun->payload.src.src_ipv4 = ipv4->hdr.src_addr;
+ memcpy(tun->payload.dst_addr, eth->dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+ memcpy(tun->payload.src_addr, eth->src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ tun->ref_cnt = 1;
+ priv = app_fw_flower->flow_priv;
+ LIST_FOREACH(tmp, &priv->nn_list, next) {
+ if (memcmp(&tmp->payload, &tun->payload, sizeof(struct nfp_fl_tun_entry)) == 0) {
+ tmp->ref_cnt++;
+ return 0;
+ }
+ }
+
+ LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+
+ port = (struct nfp_flower_in_port *)((char *)nfp_flow_meta +
+ sizeof(struct nfp_fl_rule_metadata) +
+ sizeof(struct nfp_flower_meta_tci));
+
+ memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v4));
+ payload.dst_ipv4 = ipv4->hdr.dst_addr;
+ payload.src_ipv4 = ipv4->hdr.src_addr;
+ memcpy(payload.common.dst_mac, eth->dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+ memcpy(payload.common.src_mac, eth->src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+ payload.common.port_id = port->in_port;
+
+ return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
+}
+
+__rte_unused static int
+nfp_flower_del_tun_neigh_v4(struct nfp_app_fw_flower *app_fw_flower,
+ rte_be32_t ipv4)
+{
+ struct nfp_flower_cmsg_tun_neigh_v4 payload;
+
+ memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v4));
+ payload.dst_ipv4 = ipv4;
+
+ return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
+}
+
static int
nfp_flow_compile_action(__rte_unused struct nfp_flower_representor *representor,
const struct rte_flow_action actions[],
@@ -2487,6 +2572,9 @@ struct nfp_mask_id_entry {
goto free_mask_table;
}
+ /* neighbor next list */
+ LIST_INIT(&priv->nn_list);
+
return 0;
free_mask_table:
diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h
index 0ad89e5..892dbc0 100644
--- a/drivers/net/nfp/nfp_flow.h
+++ b/drivers/net/nfp/nfp_flow.h
@@ -90,6 +90,11 @@ enum nfp_flower_tun_type {
NFP_FL_TUN_GENEVE = 4,
};
+enum nfp_flow_type {
+	NFP_FLOW_COMMON,
+	NFP_FLOW_ENCAP,
+};
+
struct nfp_fl_key_ls {
uint32_t key_layer_two;
uint8_t key_layer;
@@ -118,6 +123,24 @@ struct nfp_fl_payload {
char *action_data;
};
+struct nfp_fl_tun {
+ LIST_ENTRY(nfp_fl_tun) next;
+ uint8_t ref_cnt;
+ struct nfp_fl_tun_entry {
+ uint8_t v6_flag;
+ uint8_t dst_addr[RTE_ETHER_ADDR_LEN];
+ uint8_t src_addr[RTE_ETHER_ADDR_LEN];
+ union {
+ rte_be32_t dst_ipv4;
+ uint8_t dst_ipv6[16];
+ } dst;
+ union {
+ rte_be32_t src_ipv4;
+ uint8_t src_ipv6[16];
+ } src;
+ } payload;
+};
+
#define CIRC_CNT(head, tail, size) (((head) - (tail)) & ((size) - 1))
#define CIRC_SPACE(head, tail, size) CIRC_CNT((tail), ((head) + 1), (size))
struct circ_buf {
@@ -161,13 +184,17 @@ struct nfp_flow_priv {
struct nfp_fl_stats_id stats_ids; /**< The stats id ring. */
struct nfp_fl_stats *stats; /**< Store stats of flow. */
rte_spinlock_t stats_lock; /** < Lock the update of 'stats' field. */
+ /* neighbor next */
+	LIST_HEAD(, nfp_fl_tun) nn_list; /**< Store nn entry */
};
struct rte_flow {
struct nfp_fl_payload payload;
+	struct nfp_fl_tun tun;
size_t length;
uint32_t hash_key;
bool install_flag;
+	enum nfp_flow_type type;
};
int nfp_flow_priv_init(struct nfp_pf_dev *pf_dev);
--
1.8.3.1
* [PATCH v2 04/25] net/nfp: prepare for IPv6 tunnel encap flow action
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
` (2 preceding siblings ...)
2022-10-22 8:24 ` [PATCH v2 03/25] net/nfp: prepare for IPv4 tunnel encap flow action Chaoyong He
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 05/25] net/nfp: support IPv4 VXLAN " Chaoyong He
` (22 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the related data structures and functions to prepare for
the encap action of IPv6 tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_cmsg.c | 29 +++++++++
drivers/net/nfp/flower/nfp_flower_cmsg.h | 40 ++++++++++++
drivers/net/nfp/nfp_flow.c | 105 ++++++++++++++++++++++++++++++-
3 files changed, 173 insertions(+), 1 deletion(-)
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c
index 7021d1f..8983178 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.c
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c
@@ -275,3 +275,32 @@
return 0;
}
+
+int
+nfp_flower_cmsg_tun_neigh_v6_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_flower_cmsg_tun_neigh_v6 *payload)
+{
+ uint16_t cnt;
+ size_t msg_len;
+ struct rte_mbuf *mbuf;
+ struct nfp_flower_cmsg_tun_neigh_v6 *msg;
+
+ mbuf = rte_pktmbuf_alloc(app_fw_flower->ctrl_pktmbuf_pool);
+ if (mbuf == NULL) {
+ PMD_DRV_LOG(DEBUG, "Failed to alloc mbuf for v6 tun neigh");
+ return -ENOMEM;
+ }
+
+ msg_len = sizeof(struct nfp_flower_cmsg_tun_neigh_v6);
+ msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V6, msg_len);
+ memcpy(msg, payload, msg_len);
+
+ cnt = nfp_flower_ctrl_vnic_xmit(app_fw_flower, mbuf);
+ if (cnt == 0) {
+ PMD_DRV_LOG(ERR, "Send cmsg through ctrl vnic failed.");
+ rte_pktmbuf_free(mbuf);
+ return -EIO;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index e44e311..d1e0562 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -160,6 +160,42 @@ struct nfp_flower_cmsg_tun_neigh_v4 {
};
/*
+ * NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V6
+ * Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
+ * -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +---------------------------------------------------------------+
+ * 0 | DST_IPV6 [0] |
+ * +---------------------------------------------------------------+
+ * 1 | DST_IPV6 [1] |
+ * +---------------------------------------------------------------+
+ * 2 | DST_IPV6 [2] |
+ * +---------------------------------------------------------------+
+ * 3 | DST_IPV6 [3] |
+ * +---------------------------------------------------------------+
+ * 4 | SRC_IPV6 [0] |
+ * +---------------------------------------------------------------+
+ * 5 | SRC_IPV6 [1] |
+ * +---------------------------------------------------------------+
+ * 6 | SRC_IPV6 [2] |
+ * +---------------------------------------------------------------+
+ * 7 | SRC_IPV6 [3] |
+ * +---------------------------------------------------------------+
+ * 8 | DST_MAC_B5_B4_B3_B2 |
+ * +-------------------------------+-------------------------------+
+ * 9 | DST_MAC_B1_B0 | SRC_MAC_B5_B4 |
+ * +-------------------------------+-------------------------------+
+ * 10 | SRC_MAC_B3_B2_B1_B0 |
+ * +---------------+---------------+---------------+---------------+
+ * 11 | Egress Port (NFP internal) |
+ * +---------------------------------------------------------------+
+ */
+struct nfp_flower_cmsg_tun_neigh_v6 {
+ uint8_t dst_ipv6[16];
+ uint8_t src_ipv6[16];
+ struct nfp_flower_tun_neigh common;
+};
+
+/*
* NFP_FLOWER_CMSG_TYPE_FLOW_STATS
* Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
* -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
@@ -629,6 +665,8 @@ struct nfp_fl_act_pre_tun {
};
};
+#define NFP_FL_PRE_TUN_IPV6 (1 << 0)
+
/*
* Set tunnel
* 3 2 1
@@ -676,5 +714,7 @@ int nfp_flower_cmsg_flow_add(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *flow);
int nfp_flower_cmsg_tun_neigh_v4_rule(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_flower_cmsg_tun_neigh_v4 *payload);
+int nfp_flower_cmsg_tun_neigh_v6_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_flower_cmsg_tun_neigh_v6 *payload);
#endif /* _NFP_CMSG_H_ */
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index f71f8b1..e1b892f 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1782,6 +1782,16 @@ struct nfp_mask_id_entry {
}
__rte_unused static void
+nfp_flow_pre_tun_v6_process(struct nfp_fl_act_pre_tun *pre_tun,
+ const uint8_t ipv6_dst[])
+{
+ pre_tun->head.jump_id = NFP_FL_ACTION_OPCODE_PRE_TUNNEL;
+ pre_tun->head.len_lw = sizeof(struct nfp_fl_act_pre_tun) >> NFP_FL_LW_SIZ;
+ pre_tun->flags = rte_cpu_to_be_16(NFP_FL_PRE_TUN_IPV6);
+ memcpy(pre_tun->ipv6_dst, ipv6_dst, sizeof(pre_tun->ipv6_dst));
+}
+
+__rte_unused static void
nfp_flow_set_tun_process(struct nfp_fl_act_set_tun *set_tun,
enum nfp_flower_tun_type tun_type,
uint64_t tun_id,
@@ -1845,7 +1855,7 @@ struct nfp_mask_id_entry {
return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
}
-__rte_unused static int
+static int
nfp_flower_del_tun_neigh_v4(struct nfp_app_fw_flower *app_fw_flower,
rte_be32_t ipv4)
{
@@ -1857,6 +1867,99 @@ struct nfp_mask_id_entry {
return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
}
+__rte_unused static int
+nfp_flower_add_tun_neigh_v6_encap(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun,
+ const struct rte_ether_hdr *eth,
+ const struct rte_flow_item_ipv6 *ipv6)
+{
+ struct nfp_fl_tun *tmp;
+ struct nfp_flow_priv *priv;
+ struct nfp_flower_in_port *port;
+ struct nfp_flower_cmsg_tun_neigh_v6 payload;
+
+ tun->payload.v6_flag = 1;
+ memcpy(tun->payload.dst.dst_ipv6, ipv6->hdr.dst_addr, sizeof(tun->payload.dst.dst_ipv6));
+ memcpy(tun->payload.src.src_ipv6, ipv6->hdr.src_addr, sizeof(tun->payload.src.src_ipv6));
+ memcpy(tun->payload.dst_addr, eth->dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+ memcpy(tun->payload.src_addr, eth->src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ tun->ref_cnt = 1;
+ priv = app_fw_flower->flow_priv;
+ LIST_FOREACH(tmp, &priv->nn_list, next) {
+ if (memcmp(&tmp->payload, &tun->payload, sizeof(struct nfp_fl_tun_entry)) == 0) {
+ tmp->ref_cnt++;
+ return 0;
+ }
+ }
+
+ LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+
+ port = (struct nfp_flower_in_port *)((char *)nfp_flow_meta +
+ sizeof(struct nfp_fl_rule_metadata) +
+ sizeof(struct nfp_flower_meta_tci));
+
+ memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v6));
+ memcpy(payload.dst_ipv6, ipv6->hdr.dst_addr, sizeof(payload.dst_ipv6));
+ memcpy(payload.src_ipv6, ipv6->hdr.src_addr, sizeof(payload.src_ipv6));
+ memcpy(payload.common.dst_mac, eth->dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+ memcpy(payload.common.src_mac, eth->src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+ payload.common.port_id = port->in_port;
+
+ return nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &payload);
+}
+
+static int
+nfp_flower_del_tun_neigh_v6(struct nfp_app_fw_flower *app_fw_flower,
+ uint8_t *ipv6)
+{
+ struct nfp_flower_cmsg_tun_neigh_v6 payload;
+
+ memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v6));
+ memcpy(payload.dst_ipv6, ipv6, sizeof(payload.dst_ipv6));
+
+ return nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &payload);
+}
+
+__rte_unused static int
+nfp_flower_del_tun_neigh(struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow)
+{
+ int ret;
+ bool flag = false;
+ struct nfp_fl_tun *tmp;
+ struct nfp_fl_tun *tun;
+
+ tun = &nfp_flow->tun;
+ LIST_FOREACH(tmp, &app_fw_flower->flow_priv->nn_list, next) {
+ ret = memcmp(&tmp->payload, &tun->payload, sizeof(struct nfp_fl_tun_entry));
+ if (ret == 0) {
+ tmp->ref_cnt--;
+ flag = true;
+ break;
+ }
+ }
+
+ if (!flag) {
+ PMD_DRV_LOG(DEBUG, "Can't find nn entry in the nn list");
+ return -EINVAL;
+ }
+
+ if (tmp->ref_cnt == 0) {
+ LIST_REMOVE(tmp, next);
+ if (tmp->payload.v6_flag != 0) {
+ return nfp_flower_del_tun_neigh_v6(app_fw_flower,
+ tmp->payload.dst.dst_ipv6);
+ } else {
+ return nfp_flower_del_tun_neigh_v4(app_fw_flower,
+ tmp->payload.dst.dst_ipv4);
+ }
+ }
+
+ return 0;
+}
+
static int
nfp_flow_compile_action(__rte_unused struct nfp_flower_representor *representor,
const struct rte_flow_action actions[],
--
1.8.3.1
* [PATCH v2 05/25] net/nfp: support IPv4 VXLAN encap flow action
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
` (3 preceding siblings ...)
2022-10-22 8:24 ` [PATCH v2 04/25] net/nfp: prepare for IPv6 " Chaoyong He
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 06/25] net/nfp: support IPv6 " Chaoyong He
` (21 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the offload support of encap action for IPv4 VXLAN tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 1 +
drivers/net/nfp/nfp_flow.c | 132 +++++++++++++++++++++++++++++++++++++--
2 files changed, 128 insertions(+), 5 deletions(-)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index faaa7da..ff97787 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -56,3 +56,4 @@ set_mac_src = Y
set_tp_dst = Y
set_tp_src = Y
set_ttl = Y
+vxlan_encap = Y
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index e1b892f..d2e779c 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -10,8 +10,10 @@
#include <rte_malloc.h>
#include "nfp_common.h"
+#include "nfp_ctrl.h"
#include "nfp_flow.h"
#include "nfp_logs.h"
+#include "nfp_rxtx.h"
#include "flower/nfp_flower.h"
#include "flower/nfp_flower_cmsg.h"
#include "flower/nfp_flower_ctrl.h"
@@ -19,6 +21,17 @@
#include "nfpcore/nfp_mip.h"
#include "nfpcore/nfp_rtsym.h"
+/*
+ * Maximum number of items in struct rte_flow_action_vxlan_encap.
+ * ETH / IPv4(6) / UDP / VXLAN / END
+ */
+#define ACTION_VXLAN_ENCAP_ITEMS_NUM 5
+
+struct vxlan_data {
+ struct rte_flow_action_vxlan_encap conf;
+ struct rte_flow_item items[ACTION_VXLAN_ENCAP_ITEMS_NUM];
+};
+
/* Static initializer for a list of subsequent item types */
#define NEXT_ITEM(...) \
((const enum rte_flow_item_type []){ \
@@ -724,6 +737,11 @@ struct nfp_mask_id_entry {
tc_hl_flag = true;
}
break;
+ case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP detected");
+ key_ls->act_size += sizeof(struct nfp_fl_act_pre_tun);
+ key_ls->act_size += sizeof(struct nfp_fl_act_set_tun);
+ break;
default:
PMD_DRV_LOG(ERR, "Action type %d not supported.", action->type);
return -ENOTSUP;
@@ -1772,7 +1790,7 @@ struct nfp_mask_id_entry {
tc_hl->reserved = 0;
}
-__rte_unused static void
+static void
nfp_flow_pre_tun_v4_process(struct nfp_fl_act_pre_tun *pre_tun,
rte_be32_t ipv4_dst)
{
@@ -1791,7 +1809,7 @@ struct nfp_mask_id_entry {
memcpy(pre_tun->ipv6_dst, ipv6_dst, sizeof(pre_tun->ipv6_dst));
}
-__rte_unused static void
+static void
nfp_flow_set_tun_process(struct nfp_fl_act_set_tun *set_tun,
enum nfp_flower_tun_type tun_type,
uint64_t tun_id,
@@ -1812,7 +1830,7 @@ struct nfp_mask_id_entry {
set_tun->tos = tos;
}
-__rte_unused static int
+static int
nfp_flower_add_tun_neigh_v4_encap(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_fl_rule_metadata *nfp_flow_meta,
struct nfp_fl_tun *tun,
@@ -1922,7 +1940,7 @@ struct nfp_mask_id_entry {
return nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &payload);
}
-__rte_unused static int
+static int
nfp_flower_del_tun_neigh(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *nfp_flow)
{
@@ -1961,7 +1979,81 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_compile_action(__rte_unused struct nfp_flower_representor *representor,
+nfp_flow_action_vxlan_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct vxlan_data *vxlan_data,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ struct nfp_fl_act_pre_tun *pre_tun;
+ struct nfp_fl_act_set_tun *set_tun;
+ const struct rte_flow_item_eth *eth;
+ const struct rte_flow_item_ipv4 *ipv4;
+ const struct rte_flow_item_vxlan *vxlan;
+ size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
+
+ eth = (const struct rte_flow_item_eth *)vxlan_data->items[0].spec;
+ ipv4 = (const struct rte_flow_item_ipv4 *)vxlan_data->items[1].spec;
+ vxlan = (const struct rte_flow_item_vxlan *)vxlan_data->items[3].spec;
+
+ pre_tun = (struct nfp_fl_act_pre_tun *)actions;
+ memset(pre_tun, 0, act_pre_size);
+ nfp_flow_pre_tun_v4_process(pre_tun, ipv4->hdr.dst_addr);
+
+ set_tun = (struct nfp_fl_act_set_tun *)(act_data + act_pre_size);
+ memset(set_tun, 0, act_set_size);
+ nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_VXLAN, vxlan->hdr.vx_vni,
+ ipv4->hdr.time_to_live, ipv4->hdr.type_of_service);
+ set_tun->tun_flags = vxlan->hdr.vx_flags;
+
+ /* Send the tunnel neighbor cmsg to fw */
+ return nfp_flower_add_tun_neigh_v4_encap(app_fw_flower, nfp_flow_meta,
+			tun, &eth->hdr, ipv4);
+}
+
+static int
+nfp_flow_action_vxlan_encap(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct rte_flow_action *action,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ size_t act_len;
+ size_t act_pre_size;
+ const struct vxlan_data *vxlan_data;
+
+ vxlan_data = action->conf;
+ if (vxlan_data->items[0].type != RTE_FLOW_ITEM_TYPE_ETH ||
+ vxlan_data->items[1].type != RTE_FLOW_ITEM_TYPE_IPV4 ||
+ vxlan_data->items[2].type != RTE_FLOW_ITEM_TYPE_UDP ||
+ vxlan_data->items[3].type != RTE_FLOW_ITEM_TYPE_VXLAN ||
+ vxlan_data->items[4].type != RTE_FLOW_ITEM_TYPE_END) {
+		PMD_DRV_LOG(ERR, "Not a valid vxlan action conf.");
+ return -EINVAL;
+ }
+
+ /*
+ * Pre_tunnel action must be the first on the action list.
+ * If other actions already exist, they need to be pushed forward.
+ */
+ act_len = act_data - actions;
+ if (act_len != 0) {
+ act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ memmove(actions + act_pre_size, actions, act_len);
+ }
+
+ if (vxlan_data->items[1].type == RTE_FLOW_ITEM_TYPE_IPV4)
+ return nfp_flow_action_vxlan_encap_v4(app_fw_flower, act_data,
+ actions, vxlan_data, nfp_flow_meta, tun);
+
+ return 0;
+}
+
+static int
+nfp_flow_compile_action(struct nfp_flower_representor *representor,
const struct rte_flow_action actions[],
struct rte_flow *nfp_flow)
{
@@ -2118,6 +2210,20 @@ struct nfp_mask_id_entry {
tc_hl_flag = true;
}
break;
+ case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
+ PMD_DRV_LOG(DEBUG, "Process RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP");
+ ret = nfp_flow_action_vxlan_encap(representor->app_fw_flower,
+ position, action_data, action, nfp_flow_meta,
+ &nfp_flow->tun);
+ if (ret != 0) {
+				PMD_DRV_LOG(ERR, "Failed to process"
+						" RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP");
+ return ret;
+ }
+ position += sizeof(struct nfp_fl_act_pre_tun);
+ position += sizeof(struct nfp_fl_act_set_tun);
+ nfp_flow->type = NFP_FLOW_ENCAP;
+ break;
default:
PMD_DRV_LOG(ERR, "Unsupported action type: %d", action->type);
return -ENOTSUP;
@@ -2410,6 +2516,22 @@ struct nfp_mask_id_entry {
goto exit;
}
+ switch (nfp_flow->type) {
+ case NFP_FLOW_COMMON:
+ break;
+ case NFP_FLOW_ENCAP:
+ /* Delete the entry from nn table */
+ ret = nfp_flower_del_tun_neigh(app_fw_flower, nfp_flow);
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "Invalid nfp flow type %d.", nfp_flow->type);
+ ret = -EINVAL;
+ break;
+ }
+
+ if (ret != 0)
+ goto exit;
+
/* Delete the flow from hardware */
if (nfp_flow->install_flag) {
ret = nfp_flower_cmsg_flow_delete(app_fw_flower, nfp_flow);
--
1.8.3.1
* [PATCH v2 06/25] net/nfp: support IPv6 VXLAN encap flow action
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
` (4 preceding siblings ...)
2022-10-22 8:24 ` [PATCH v2 05/25] net/nfp: support IPv4 VXLAN " Chaoyong He
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 07/25] net/nfp: prepare for IPv4 UDP tunnel decap " Chaoyong He
` (20 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the offload support of encap action for IPv6 VXLAN tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfp_flow.c | 48 +++++++++++++++++++++++++++++++++++++++++-----
1 file changed, 43 insertions(+), 5 deletions(-)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index d2e779c..9ee02b0 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1799,7 +1799,7 @@ struct nfp_mask_id_entry {
pre_tun->ipv4_dst = ipv4_dst;
}
-__rte_unused static void
+static void
nfp_flow_pre_tun_v6_process(struct nfp_fl_act_pre_tun *pre_tun,
const uint8_t ipv6_dst[])
{
@@ -1885,7 +1885,7 @@ struct nfp_mask_id_entry {
return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
}
-__rte_unused static int
+static int
nfp_flower_add_tun_neigh_v6_encap(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_fl_rule_metadata *nfp_flow_meta,
struct nfp_fl_tun *tun,
@@ -2014,6 +2014,42 @@ struct nfp_mask_id_entry {
}
static int
+nfp_flow_action_vxlan_encap_v6(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct vxlan_data *vxlan_data,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ struct nfp_fl_act_pre_tun *pre_tun;
+ struct nfp_fl_act_set_tun *set_tun;
+ const struct rte_flow_item_eth *eth;
+ const struct rte_flow_item_ipv6 *ipv6;
+ const struct rte_flow_item_vxlan *vxlan;
+ size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
+
+ eth = (const struct rte_flow_item_eth *)vxlan_data->items[0].spec;
+ ipv6 = (const struct rte_flow_item_ipv6 *)vxlan_data->items[1].spec;
+ vxlan = (const struct rte_flow_item_vxlan *)vxlan_data->items[3].spec;
+
+ pre_tun = (struct nfp_fl_act_pre_tun *)actions;
+ memset(pre_tun, 0, act_pre_size);
+ nfp_flow_pre_tun_v6_process(pre_tun, ipv6->hdr.dst_addr);
+
+ set_tun = (struct nfp_fl_act_set_tun *)(act_data + act_pre_size);
+ memset(set_tun, 0, act_set_size);
+ nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_VXLAN, vxlan->hdr.vx_vni,
+ ipv6->hdr.hop_limits,
+ (ipv6->hdr.vtc_flow >> RTE_IPV6_HDR_TC_SHIFT) & 0xff);
+ set_tun->tun_flags = vxlan->hdr.vx_flags;
+
+ /* Send the tunnel neighbor cmsg to fw */
+ return nfp_flower_add_tun_neigh_v6_encap(app_fw_flower, nfp_flow_meta,
+			tun, &eth->hdr, ipv6);
+}
+
+static int
nfp_flow_action_vxlan_encap(struct nfp_app_fw_flower *app_fw_flower,
char *act_data,
char *actions,
@@ -2027,7 +2063,8 @@ struct nfp_mask_id_entry {
vxlan_data = action->conf;
if (vxlan_data->items[0].type != RTE_FLOW_ITEM_TYPE_ETH ||
- vxlan_data->items[1].type != RTE_FLOW_ITEM_TYPE_IPV4 ||
+ (vxlan_data->items[1].type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+ vxlan_data->items[1].type != RTE_FLOW_ITEM_TYPE_IPV6) ||
vxlan_data->items[2].type != RTE_FLOW_ITEM_TYPE_UDP ||
vxlan_data->items[3].type != RTE_FLOW_ITEM_TYPE_VXLAN ||
vxlan_data->items[4].type != RTE_FLOW_ITEM_TYPE_END) {
@@ -2048,8 +2085,9 @@ struct nfp_mask_id_entry {
if (vxlan_data->items[1].type == RTE_FLOW_ITEM_TYPE_IPV4)
return nfp_flow_action_vxlan_encap_v4(app_fw_flower, act_data,
actions, vxlan_data, nfp_flow_meta, tun);
-
- return 0;
+ else
+ return nfp_flow_action_vxlan_encap_v6(app_fw_flower, act_data,
+ actions, vxlan_data, nfp_flow_meta, tun);
}
static int
--
1.8.3.1
* [PATCH v2 07/25] net/nfp: prepare for IPv4 UDP tunnel decap flow action
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
` (5 preceding siblings ...)
2022-10-22 8:24 ` [PATCH v2 06/25] net/nfp: support IPv6 " Chaoyong He
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 08/25] net/nfp: prepare for IPv6 " Chaoyong He
` (19 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the related data structures and functions to prepare for
the decap action of IPv4 UDP tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_cmsg.c | 118 ++++++++
drivers/net/nfp/flower/nfp_flower_cmsg.h | 94 +++++++
drivers/net/nfp/nfp_flow.c | 461 ++++++++++++++++++++++++++++++-
drivers/net/nfp/nfp_flow.h | 17 ++
4 files changed, 675 insertions(+), 15 deletions(-)
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c
index 8983178..f18f3de 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.c
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c
@@ -304,3 +304,121 @@
return 0;
}
+
+int
+nfp_flower_cmsg_tun_off_v4(struct nfp_app_fw_flower *app_fw_flower)
+{
+ uint16_t cnt;
+ uint32_t count = 0;
+ struct rte_mbuf *mbuf;
+ struct nfp_flow_priv *priv;
+ struct nfp_ipv4_addr_entry *entry;
+ struct nfp_flower_cmsg_tun_ipv4_addr *msg;
+
+ mbuf = rte_pktmbuf_alloc(app_fw_flower->ctrl_pktmbuf_pool);
+ if (mbuf == NULL) {
+ PMD_DRV_LOG(DEBUG, "Failed to alloc mbuf for v4 tun addr");
+ return -ENOMEM;
+ }
+
+ msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_TUN_IPS, sizeof(*msg));
+
+ priv = app_fw_flower->flow_priv;
+ rte_spinlock_lock(&priv->ipv4_off_lock);
+ LIST_FOREACH(entry, &priv->ipv4_off_list, next) {
+ if (count >= NFP_FL_IPV4_ADDRS_MAX) {
+ rte_spinlock_unlock(&priv->ipv4_off_lock);
+ PMD_DRV_LOG(ERR, "IPv4 offload exceeds limit.");
+ return -ERANGE;
+ }
+ msg->ipv4_addr[count] = entry->ipv4_addr;
+ count++;
+ }
+ msg->count = rte_cpu_to_be_32(count);
+ rte_spinlock_unlock(&priv->ipv4_off_lock);
+
+ cnt = nfp_flower_ctrl_vnic_xmit(app_fw_flower, mbuf);
+ if (cnt == 0) {
+ PMD_DRV_LOG(ERR, "Send cmsg through ctrl vnic failed.");
+ rte_pktmbuf_free(mbuf);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int
+nfp_flower_cmsg_pre_tunnel_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ uint16_t mac_idx,
+ bool is_del)
+{
+ uint16_t cnt;
+ struct rte_mbuf *mbuf;
+ struct nfp_flower_meta_tci *meta_tci;
+ struct nfp_flower_cmsg_pre_tun_rule *msg;
+
+ mbuf = rte_pktmbuf_alloc(app_fw_flower->ctrl_pktmbuf_pool);
+ if (mbuf == NULL) {
+ PMD_DRV_LOG(DEBUG, "Failed to alloc mbuf for pre tunnel rule");
+ return -ENOMEM;
+ }
+
+ msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_PRE_TUN_RULE, sizeof(*msg));
+
+ meta_tci = (struct nfp_flower_meta_tci *)((char *)nfp_flow_meta +
+ sizeof(struct nfp_fl_rule_metadata));
+ if (meta_tci->tci)
+ msg->vlan_tci = meta_tci->tci;
+ else
+ msg->vlan_tci = 0xffff;
+
+ if (is_del)
+ msg->flags = rte_cpu_to_be_32(NFP_TUN_PRE_TUN_RULE_DEL);
+
+ msg->port_idx = rte_cpu_to_be_16(mac_idx);
+ msg->host_ctx_id = nfp_flow_meta->host_ctx_id;
+
+ cnt = nfp_flower_ctrl_vnic_xmit(app_fw_flower, mbuf);
+ if (cnt == 0) {
+ PMD_DRV_LOG(ERR, "Send cmsg through ctrl vnic failed.");
+ rte_pktmbuf_free(mbuf);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int
+nfp_flower_cmsg_tun_mac_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_ether_addr *mac,
+ uint16_t mac_idx,
+ bool is_del)
+{
+ uint16_t cnt;
+ struct rte_mbuf *mbuf;
+ struct nfp_flower_cmsg_tun_mac *msg;
+
+ mbuf = rte_pktmbuf_alloc(app_fw_flower->ctrl_pktmbuf_pool);
+ if (mbuf == NULL) {
+ PMD_DRV_LOG(DEBUG, "Failed to alloc mbuf for tunnel mac");
+ return -ENOMEM;
+ }
+
+ msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_TUN_MAC, sizeof(*msg));
+
+ msg->count = rte_cpu_to_be_16(1);
+ msg->index = rte_cpu_to_be_16(mac_idx);
+ rte_ether_addr_copy(mac, &msg->addr);
+ if (is_del)
+ msg->flags = rte_cpu_to_be_16(NFP_TUN_MAC_OFFLOAD_DEL_FLAG);
+
+ cnt = nfp_flower_ctrl_vnic_xmit(app_fw_flower, mbuf);
+ if (cnt == 0) {
+ PMD_DRV_LOG(ERR, "Send cmsg through ctrl vnic failed.");
+ rte_pktmbuf_free(mbuf);
+ return -EIO;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index d1e0562..0933dac 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -195,6 +195,91 @@ struct nfp_flower_cmsg_tun_neigh_v6 {
struct nfp_flower_tun_neigh common;
};
+#define NFP_TUN_PRE_TUN_RULE_DEL (1 << 0)
+#define NFP_TUN_PRE_TUN_IDX_BIT (1 << 3)
+#define NFP_TUN_PRE_TUN_IPV6_BIT (1 << 7)
+
+/*
+ * NFP_FLOWER_CMSG_TYPE_PRE_TUN_RULE
+ * Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
+ * -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +---------------------------------------------------------------+
+ * 0 | FLAGS |
+ * +---------------------------------------------------------------+
+ * 1 | MAC_IDX | VLAN_ID |
+ * +---------------------------------------------------------------+
+ * 2 | HOST_CTX |
+ * +---------------------------------------------------------------+
+ */
+struct nfp_flower_cmsg_pre_tun_rule {
+ rte_be32_t flags;
+ rte_be16_t port_idx;
+ rte_be16_t vlan_tci;
+ rte_be32_t host_ctx_id;
+};
+
+#define NFP_TUN_MAC_OFFLOAD_DEL_FLAG 0x2
+
+/*
+ * NFP_FLOWER_CMSG_TYPE_TUN_MAC
+ * Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
+ * -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * Word +-----------------------+---+-+-+---------------+---------------+
+ * 0 | spare |NBI|D|F| Amount of MAC’s in this msg |
+ * +---------------+-------+---+-+-+---------------+---------------+
+ * 1 | Index 0 | MAC[0] | MAC[1] |
+ * +---------------+---------------+---------------+---------------+
+ * 2 | MAC[2] | MAC[3] | MAC[4] | MAC[5] |
+ * +---------------+---------------+---------------+---------------+
+ * 3 | Index 1 | MAC[0] | MAC[1] |
+ * +---------------+---------------+---------------+---------------+
+ * 4 | MAC[2] | MAC[3] | MAC[4] | MAC[5] |
+ * +---------------+---------------+---------------+---------------+
+ * ...
+ * +---------------+---------------+---------------+---------------+
+ * 2N-1 | Index N | MAC[0] | MAC[1] |
+ * +---------------+---------------+---------------+---------------+
+ * 2N | MAC[2] | MAC[3] | MAC[4] | MAC[5] |
+ * +---------------+---------------+---------------+---------------+
+ *
+ * F: Flush bit. Set if entire table must be flushed. Rest of info in cmsg
+ * will be ignored. Not implemented.
+ * D: Delete bit. Set if entry must be deleted instead of added
+ * NBI: Network Block Interface. Set to 0
+ * The amount of MACs per control message is limited only by the packet
+ * buffer size. A 2048B buffer can fit 253 MAC addresses and a 10240B buffer
+ * 1277 MAC addresses.
+ */
+struct nfp_flower_cmsg_tun_mac {
+ rte_be16_t flags;
+ rte_be16_t count; /**< Should always be 1 */
+ rte_be16_t index;
+ struct rte_ether_addr addr;
+};
+
+#define NFP_FL_IPV4_ADDRS_MAX 32
+
+/*
+ * NFP_FLOWER_CMSG_TYPE_TUN_IPS
+ * Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
+ * -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +---------------------------------------------------------------+
+ * 0 | Number of IP Addresses |
+ * +---------------------------------------------------------------+
+ * 1 | IP Address #1 |
+ * +---------------------------------------------------------------+
+ * 2 | IP Address #2 |
+ * +---------------------------------------------------------------+
+ * | ... |
+ * +---------------------------------------------------------------+
+ * 32 | IP Address #32 |
+ * +---------------------------------------------------------------+
+ */
+struct nfp_flower_cmsg_tun_ipv4_addr {
+ rte_be32_t count;
+ rte_be32_t ipv4_addr[NFP_FL_IPV4_ADDRS_MAX];
+};
+
/*
* NFP_FLOWER_CMSG_TYPE_FLOW_STATS
* Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
@@ -716,5 +801,14 @@ int nfp_flower_cmsg_tun_neigh_v4_rule(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_flower_cmsg_tun_neigh_v4 *payload);
int nfp_flower_cmsg_tun_neigh_v6_rule(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_flower_cmsg_tun_neigh_v6 *payload);
+int nfp_flower_cmsg_tun_off_v4(struct nfp_app_fw_flower *app_fw_flower);
+int nfp_flower_cmsg_pre_tunnel_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ uint16_t mac_idx,
+ bool is_del);
+int nfp_flower_cmsg_tun_mac_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_ether_addr *mac,
+ uint16_t mac_idx,
+ bool is_del);
#endif /* _NFP_CMSG_H_ */
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 9ee02b0..c088d24 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -47,7 +47,8 @@ struct nfp_flow_item_proc {
/* Size in bytes for @p mask_support and @p mask_default. */
const unsigned int mask_sz;
/* Merge a pattern item into a flow rule handle. */
- int (*merge)(struct rte_flow *nfp_flow,
+ int (*merge)(struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -63,6 +64,12 @@ struct nfp_mask_id_entry {
uint8_t mask_id;
};
+struct nfp_pre_tun_entry {
+ uint16_t mac_index;
+ uint16_t ref_cnt;
+ uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
+} __rte_aligned(32);
+
static inline struct nfp_flow_priv *
nfp_flow_dev_to_priv(struct rte_eth_dev *dev)
{
@@ -406,6 +413,83 @@ struct nfp_mask_id_entry {
return 0;
}
+__rte_unused static int
+nfp_tun_add_ipv4_off(struct nfp_app_fw_flower *app_fw_flower,
+ rte_be32_t ipv4)
+{
+ struct nfp_flow_priv *priv;
+ struct nfp_ipv4_addr_entry *entry;
+ struct nfp_ipv4_addr_entry *tmp_entry;
+
+ priv = app_fw_flower->flow_priv;
+
+ rte_spinlock_lock(&priv->ipv4_off_lock);
+ LIST_FOREACH(entry, &priv->ipv4_off_list, next) {
+ if (entry->ipv4_addr == ipv4) {
+ entry->ref_count++;
+ rte_spinlock_unlock(&priv->ipv4_off_lock);
+ return 0;
+ }
+ }
+ rte_spinlock_unlock(&priv->ipv4_off_lock);
+
+ tmp_entry = rte_zmalloc("nfp_ipv4_off", sizeof(struct nfp_ipv4_addr_entry), 0);
+ if (tmp_entry == NULL) {
+ PMD_DRV_LOG(ERR, "Mem error when offloading IP address.");
+ return -ENOMEM;
+ }
+
+ tmp_entry->ipv4_addr = ipv4;
+ tmp_entry->ref_count = 1;
+
+ rte_spinlock_lock(&priv->ipv4_off_lock);
+ LIST_INSERT_HEAD(&priv->ipv4_off_list, tmp_entry, next);
+ rte_spinlock_unlock(&priv->ipv4_off_lock);
+
+ return nfp_flower_cmsg_tun_off_v4(app_fw_flower);
+}
+
+static int
+nfp_tun_del_ipv4_off(struct nfp_app_fw_flower *app_fw_flower,
+ rte_be32_t ipv4)
+{
+ struct nfp_flow_priv *priv;
+ struct nfp_ipv4_addr_entry *entry;
+
+ priv = app_fw_flower->flow_priv;
+
+ rte_spinlock_lock(&priv->ipv4_off_lock);
+ LIST_FOREACH(entry, &priv->ipv4_off_list, next) {
+ if (entry->ipv4_addr == ipv4) {
+ entry->ref_count--;
+ if (entry->ref_count == 0) {
+ LIST_REMOVE(entry, next);
+ rte_free(entry);
+ rte_spinlock_unlock(&priv->ipv4_off_lock);
+ return nfp_flower_cmsg_tun_off_v4(app_fw_flower);
+ }
+ break;
+ }
+ }
+ rte_spinlock_unlock(&priv->ipv4_off_lock);
+
+ return 0;
+}
+
+static int
+nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr,
+ struct rte_flow *nfp_flow)
+{
+ int ret;
+ struct nfp_flower_ipv4_udp_tun *udp4;
+
+ udp4 = (struct nfp_flower_ipv4_udp_tun *)(nfp_flow->payload.mask_data -
+ sizeof(struct nfp_flower_ipv4_udp_tun));
+ ret = nfp_tun_del_ipv4_off(repr->app_fw_flower, udp4->ipv4.dst);
+
+ return ret;
+}
+
static void
nfp_flower_compile_meta_tci(char *mbuf_off, struct nfp_fl_key_ls *key_layer)
{
@@ -635,6 +719,9 @@ struct nfp_mask_id_entry {
case RTE_FLOW_ACTION_TYPE_COUNT:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_COUNT detected");
break;
+ case RTE_FLOW_ACTION_TYPE_JUMP:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_JUMP detected");
+ break;
case RTE_FLOW_ACTION_TYPE_PORT_ID:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_PORT_ID detected");
key_ls->act_size += sizeof(struct nfp_fl_act_output);
@@ -786,7 +873,8 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_merge_eth(__rte_unused struct rte_flow *nfp_flow,
+nfp_flow_merge_eth(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ __rte_unused struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -823,7 +911,8 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_merge_vlan(struct rte_flow *nfp_flow,
+nfp_flow_merge_vlan(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
__rte_unused char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -853,7 +942,8 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_merge_ipv4(struct rte_flow *nfp_flow,
+nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -914,7 +1004,8 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_merge_ipv6(struct rte_flow *nfp_flow,
+nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -979,7 +1070,8 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_merge_tcp(struct rte_flow *nfp_flow,
+nfp_flow_merge_tcp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -1052,7 +1144,8 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_merge_udp(struct rte_flow *nfp_flow,
+nfp_flow_merge_udp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -1100,7 +1193,8 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_merge_sctp(struct rte_flow *nfp_flow,
+nfp_flow_merge_sctp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -1142,7 +1236,8 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_merge_vxlan(struct rte_flow *nfp_flow,
+nfp_flow_merge_vxlan(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -1391,7 +1486,8 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_compile_item_proc(const struct rte_flow_item items[],
+nfp_flow_compile_item_proc(struct nfp_flower_representor *repr,
+ const struct rte_flow_item items[],
struct rte_flow *nfp_flow,
char **mbuf_off_exact,
char **mbuf_off_mask,
@@ -1402,6 +1498,7 @@ struct nfp_mask_id_entry {
bool continue_flag = true;
const struct rte_flow_item *item;
const struct nfp_flow_item_proc *proc_list;
+ struct nfp_app_fw_flower *app_fw_flower = repr->app_fw_flower;
proc_list = nfp_flow_item_proc_list;
for (item = items; item->type != RTE_FLOW_ITEM_TYPE_END && continue_flag; ++item) {
@@ -1437,14 +1534,14 @@ struct nfp_mask_id_entry {
break;
}
- ret = proc->merge(nfp_flow, mbuf_off_exact, item,
+ ret = proc->merge(app_fw_flower, nfp_flow, mbuf_off_exact, item,
proc, false, is_outer_layer);
if (ret != 0) {
PMD_DRV_LOG(ERR, "nfp flow item %d exact merge failed", item->type);
break;
}
- ret = proc->merge(nfp_flow, mbuf_off_mask, item,
+ ret = proc->merge(app_fw_flower, nfp_flow, mbuf_off_mask, item,
proc, true, is_outer_layer);
if (ret != 0) {
PMD_DRV_LOG(ERR, "nfp flow item %d mask merge failed", item->type);
@@ -1458,7 +1555,7 @@ struct nfp_mask_id_entry {
}
static int
-nfp_flow_compile_items(__rte_unused struct nfp_flower_representor *representor,
+nfp_flow_compile_items(struct nfp_flower_representor *representor,
const struct rte_flow_item items[],
struct rte_flow *nfp_flow)
{
@@ -1489,7 +1586,7 @@ struct nfp_mask_id_entry {
is_outer_layer = false;
/* Go over items */
- ret = nfp_flow_compile_item_proc(loop_item, nfp_flow,
+ ret = nfp_flow_compile_item_proc(representor, loop_item, nfp_flow,
&mbuf_off_exact, &mbuf_off_mask, is_outer_layer);
if (ret != 0) {
PMD_DRV_LOG(ERR, "nfp flow item compile failed.");
@@ -1498,7 +1595,7 @@ struct nfp_mask_id_entry {
/* Go over inner items */
if (is_tun_flow) {
- ret = nfp_flow_compile_item_proc(items, nfp_flow,
+ ret = nfp_flow_compile_item_proc(representor, items, nfp_flow,
&mbuf_off_exact, &mbuf_off_mask, true);
if (ret != 0) {
PMD_DRV_LOG(ERR, "nfp flow outer item compile failed.");
@@ -1873,6 +1970,59 @@ struct nfp_mask_id_entry {
return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
}
+__rte_unused static int
+nfp_flower_add_tun_neigh_v4_decap(struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow)
+{
+ struct nfp_fl_tun *tmp;
+ struct nfp_fl_tun *tun;
+ struct nfp_flow_priv *priv;
+ struct nfp_flower_ipv4 *ipv4;
+ struct nfp_flower_mac_mpls *eth;
+ struct nfp_flower_in_port *port;
+ struct nfp_flower_meta_tci *meta_tci;
+ struct nfp_flower_cmsg_tun_neigh_v4 payload;
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ port = (struct nfp_flower_in_port *)(meta_tci + 1);
+ eth = (struct nfp_flower_mac_mpls *)(port + 1);
+
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+ ipv4 = (struct nfp_flower_ipv4 *)((char *)eth +
+ sizeof(struct nfp_flower_mac_mpls) +
+ sizeof(struct nfp_flower_tp_ports));
+ else
+ ipv4 = (struct nfp_flower_ipv4 *)((char *)eth +
+ sizeof(struct nfp_flower_mac_mpls));
+
+ tun = &nfp_flow->tun;
+ tun->payload.v6_flag = 0;
+ tun->payload.dst.dst_ipv4 = ipv4->ipv4_src;
+ tun->payload.src.src_ipv4 = ipv4->ipv4_dst;
+ memcpy(tun->payload.dst_addr, eth->mac_src, RTE_ETHER_ADDR_LEN);
+ memcpy(tun->payload.src_addr, eth->mac_dst, RTE_ETHER_ADDR_LEN);
+
+ tun->ref_cnt = 1;
+ priv = app_fw_flower->flow_priv;
+ LIST_FOREACH(tmp, &priv->nn_list, next) {
+ if (memcmp(&tmp->payload, &tun->payload, sizeof(struct nfp_fl_tun_entry)) == 0) {
+ tmp->ref_cnt++;
+ return 0;
+ }
+ }
+
+ LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+
+ memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v4));
+ payload.dst_ipv4 = ipv4->ipv4_src;
+ payload.src_ipv4 = ipv4->ipv4_dst;
+ memcpy(payload.common.dst_mac, eth->mac_src, RTE_ETHER_ADDR_LEN);
+ memcpy(payload.common.src_mac, eth->mac_dst, RTE_ETHER_ADDR_LEN);
+ payload.common.port_id = port->in_port;
+
+ return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
+}
+
static int
nfp_flower_del_tun_neigh_v4(struct nfp_app_fw_flower *app_fw_flower,
rte_be32_t ipv4)
@@ -2090,6 +2240,200 @@ struct nfp_mask_id_entry {
actions, vxlan_data, nfp_flow_meta, tun);
}
+static struct nfp_pre_tun_entry *
+nfp_pre_tun_table_search(struct nfp_flow_priv *priv,
+ char *hash_data,
+ uint32_t hash_len)
+{
+ int index;
+ uint32_t hash_key;
+ struct nfp_pre_tun_entry *mac_index;
+
+ hash_key = rte_jhash(hash_data, hash_len, priv->hash_seed);
+ index = rte_hash_lookup_data(priv->pre_tun_table, &hash_key, (void **)&mac_index);
+ if (index < 0) {
+ PMD_DRV_LOG(DEBUG, "Data NOT found in the hash table");
+ return NULL;
+ }
+
+ return mac_index;
+}
+
+static bool
+nfp_pre_tun_table_add(struct nfp_flow_priv *priv,
+ char *hash_data,
+ uint32_t hash_len)
+{
+ int ret;
+ uint32_t hash_key;
+
+ hash_key = rte_jhash(hash_data, hash_len, priv->hash_seed);
+ ret = rte_hash_add_key_data(priv->pre_tun_table, &hash_key, hash_data);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Add to pre tunnel table failed");
+ return false;
+ }
+
+ return true;
+}
+
+static bool
+nfp_pre_tun_table_delete(struct nfp_flow_priv *priv,
+ char *hash_data,
+ uint32_t hash_len)
+{
+ int ret;
+ uint32_t hash_key;
+
+ hash_key = rte_jhash(hash_data, hash_len, priv->hash_seed);
+ ret = rte_hash_del_key(priv->pre_tun_table, &hash_key);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "Delete from pre tunnel table failed");
+ return false;
+ }
+
+ return true;
+}
+
+__rte_unused static int
+nfp_pre_tun_table_check_add(struct nfp_flower_representor *repr,
+ uint16_t *index)
+{
+ uint16_t i;
+ uint32_t entry_size;
+ uint16_t mac_index = 1;
+ struct nfp_flow_priv *priv;
+ struct nfp_pre_tun_entry *entry;
+ struct nfp_pre_tun_entry *find_entry;
+
+ priv = repr->app_fw_flower->flow_priv;
+ if (priv->pre_tun_cnt >= NFP_TUN_PRE_TUN_RULE_LIMIT) {
+ PMD_DRV_LOG(ERR, "Pre tunnel table is full");
+ return -EINVAL;
+ }
+
+ entry_size = sizeof(struct nfp_pre_tun_entry);
+ entry = rte_zmalloc("nfp_pre_tun", entry_size, 0);
+ if (entry == NULL) {
+ PMD_DRV_LOG(ERR, "Memory alloc failed for pre tunnel table");
+ return -ENOMEM;
+ }
+
+ entry->ref_cnt = 1U;
+ memcpy(entry->mac_addr, repr->mac_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ /* 0 is considered a failed match */
+ for (i = 1; i < NFP_TUN_PRE_TUN_RULE_LIMIT; i++) {
+ if (priv->pre_tun_bitmap[i] == 0)
+ continue;
+ entry->mac_index = i;
+ find_entry = nfp_pre_tun_table_search(priv, (char *)entry, entry_size);
+ if (find_entry != NULL) {
+ find_entry->ref_cnt++;
+ *index = find_entry->mac_index;
+ rte_free(entry);
+ return 0;
+ }
+ }
+
+ for (i = 1; i < NFP_TUN_PRE_TUN_RULE_LIMIT; i++) {
+ if (priv->pre_tun_bitmap[i] == 0) {
+ priv->pre_tun_bitmap[i] = 1U;
+ mac_index = i;
+ break;
+ }
+ }
+
+ entry->mac_index = mac_index;
+ if (!nfp_pre_tun_table_add(priv, (char *)entry, entry_size)) {
+ rte_free(entry);
+ return -EINVAL;
+ }
+
+ *index = entry->mac_index;
+ priv->pre_tun_cnt++;
+ return 0;
+}
+
+static int
+nfp_pre_tun_table_check_del(struct nfp_flower_representor *repr,
+ struct rte_flow *nfp_flow)
+{
+ uint16_t i;
+ int ret = 0;
+ uint32_t entry_size;
+ uint16_t nfp_mac_idx;
+ struct nfp_flow_priv *priv;
+ struct nfp_pre_tun_entry *entry;
+ struct nfp_pre_tun_entry *find_entry;
+ struct nfp_fl_rule_metadata *nfp_flow_meta;
+
+ priv = repr->app_fw_flower->flow_priv;
+ if (priv->pre_tun_cnt == 1)
+ return 0;
+
+ entry_size = sizeof(struct nfp_pre_tun_entry);
+ entry = rte_zmalloc("nfp_pre_tun", entry_size, 0);
+ if (entry == NULL) {
+ PMD_DRV_LOG(ERR, "Memory alloc failed for pre tunnel table");
+ return -ENOMEM;
+ }
+
+ entry->ref_cnt = 1U;
+ memcpy(entry->mac_addr, repr->mac_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ /* 0 is considered a failed match */
+ for (i = 1; i < NFP_TUN_PRE_TUN_RULE_LIMIT; i++) {
+ if (priv->pre_tun_bitmap[i] == 0)
+ continue;
+ entry->mac_index = i;
+ find_entry = nfp_pre_tun_table_search(priv, (char *)entry, entry_size);
+ if (find_entry != NULL) {
+ find_entry->ref_cnt--;
+ if (find_entry->ref_cnt != 0)
+ goto free_entry;
+ priv->pre_tun_bitmap[i] = 0;
+ break;
+ }
+ }
+
+ nfp_flow_meta = nfp_flow->payload.meta;
+ nfp_mac_idx = (find_entry->mac_index << 8) |
+ NFP_FLOWER_CMSG_PORT_TYPE_OTHER_PORT |
+ NFP_TUN_PRE_TUN_IDX_BIT;
+ ret = nfp_flower_cmsg_tun_mac_rule(repr->app_fw_flower, &repr->mac_addr,
+ nfp_mac_idx, true);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Send tunnel mac rule failed");
+ ret = -EINVAL;
+ goto free_entry;
+ }
+
+ ret = nfp_flower_cmsg_pre_tunnel_rule(repr->app_fw_flower, nfp_flow_meta,
+ nfp_mac_idx, true);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Send pre tunnel rule failed");
+ ret = -EINVAL;
+ goto free_entry;
+ }
+
+ find_entry->ref_cnt = 1U;
+ if (!nfp_pre_tun_table_delete(priv, (char *)find_entry, entry_size)) {
+ PMD_DRV_LOG(ERR, "Delete entry from pre tunnel table failed");
+ ret = -EINVAL;
+ goto free_entry;
+ }
+
+ rte_free(entry);
+ rte_free(find_entry);
+ priv->pre_tun_cnt--;
+
+ return ret;
+
+free_entry:
+ rte_free(entry);
+
+ return ret;
+}
+
static int
nfp_flow_compile_action(struct nfp_flower_representor *representor,
const struct rte_flow_action actions[],
@@ -2125,6 +2469,9 @@ struct nfp_mask_id_entry {
case RTE_FLOW_ACTION_TYPE_COUNT:
PMD_DRV_LOG(DEBUG, "Process RTE_FLOW_ACTION_TYPE_COUNT");
break;
+ case RTE_FLOW_ACTION_TYPE_JUMP:
+ PMD_DRV_LOG(DEBUG, "Process RTE_FLOW_ACTION_TYPE_JUMP");
+ break;
case RTE_FLOW_ACTION_TYPE_PORT_ID:
PMD_DRV_LOG(DEBUG, "Process RTE_FLOW_ACTION_TYPE_PORT_ID");
ret = nfp_flow_action_output(position, action, nfp_flow_meta);
@@ -2561,6 +2908,15 @@ struct nfp_mask_id_entry {
/* Delete the entry from nn table */
ret = nfp_flower_del_tun_neigh(app_fw_flower, nfp_flow);
break;
+ case NFP_FLOW_DECAP:
+ /* Delete the entry from nn table */
+ ret = nfp_flower_del_tun_neigh(app_fw_flower, nfp_flow);
+ if (ret != 0)
+ goto exit;
+
+ /* Delete the entry in pre tunnel table */
+ ret = nfp_pre_tun_table_check_del(representor, nfp_flow);
+ break;
default:
PMD_DRV_LOG(ERR, "Invalid nfp flow type %d.", nfp_flow->type);
ret = -EINVAL;
@@ -2570,6 +2926,10 @@ struct nfp_mask_id_entry {
if (ret != 0)
goto exit;
+ /* Delete the ip off */
+ if (nfp_flow_is_tunnel(nfp_flow))
+ nfp_tun_check_ip_off_del(representor, nfp_flow);
+
/* Delete the flow from hardware */
if (nfp_flow->install_flag) {
ret = nfp_flower_cmsg_flow_delete(app_fw_flower, nfp_flow);
@@ -2703,6 +3063,49 @@ struct nfp_mask_id_entry {
return 0;
}
+static int
+nfp_flow_tunnel_decap_set(__rte_unused struct rte_eth_dev *dev,
+ struct rte_flow_tunnel *tunnel,
+ struct rte_flow_action **pmd_actions,
+ uint32_t *num_of_actions,
+ __rte_unused struct rte_flow_error *err)
+{
+ struct rte_flow_action *nfp_action;
+
+ nfp_action = rte_zmalloc("nfp_tun_action", sizeof(struct rte_flow_action), 0);
+ if (nfp_action == NULL) {
+ PMD_DRV_LOG(ERR, "Alloc memory for nfp tunnel action failed.");
+ return -ENOMEM;
+ }
+
+ switch (tunnel->type) {
+ default:
+ *pmd_actions = NULL;
+ *num_of_actions = 0;
+ rte_free(nfp_action);
+ break;
+ }
+
+ return 0;
+}
+
+static int
+nfp_flow_tunnel_action_decap_release(__rte_unused struct rte_eth_dev *dev,
+ struct rte_flow_action *pmd_actions,
+ uint32_t num_of_actions,
+ __rte_unused struct rte_flow_error *err)
+{
+ uint32_t i;
+ struct rte_flow_action *nfp_action;
+
+ for (i = 0; i < num_of_actions; i++) {
+ nfp_action = &pmd_actions[i];
+ rte_free(nfp_action);
+ }
+
+ return 0;
+}
+
static const struct rte_flow_ops nfp_flow_ops = {
.validate = nfp_flow_validate,
.create = nfp_flow_create,
@@ -2711,6 +3114,8 @@ struct nfp_mask_id_entry {
.query = nfp_flow_query,
.tunnel_match = nfp_flow_tunnel_match,
.tunnel_item_release = nfp_flow_tunnel_item_release,
+ .tunnel_decap_set = nfp_flow_tunnel_decap_set,
+ .tunnel_action_decap_release = nfp_flow_tunnel_action_decap_release,
};
int
@@ -2755,6 +3160,15 @@ struct nfp_mask_id_entry {
.extra_flag = RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY,
};
+ struct rte_hash_parameters pre_tun_hash_params = {
+ .name = "pre_tunnel_table",
+ .entries = 32,
+ .hash_func = rte_jhash,
+ .socket_id = rte_socket_id(),
+ .key_len = sizeof(uint32_t),
+ .extra_flag = RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY,
+ };
+
ctx_count = nfp_rtsym_read_le(pf_dev->sym_tbl,
"CONFIG_FC_HOST_CTX_COUNT", &ret);
if (ret < 0) {
@@ -2835,11 +3249,27 @@ struct nfp_mask_id_entry {
goto free_mask_table;
}
+ /* pre tunnel table */
+ priv->pre_tun_cnt = 1;
+ pre_tun_hash_params.hash_func_init_val = priv->hash_seed;
+ priv->pre_tun_table = rte_hash_create(&pre_tun_hash_params);
+ if (priv->pre_tun_table == NULL) {
+ PMD_INIT_LOG(ERR, "Pre tunnel table creation failed");
+ ret = -ENOMEM;
+ goto free_flow_table;
+ }
+
+ /* ipv4 off list */
+ rte_spinlock_init(&priv->ipv4_off_lock);
+ LIST_INIT(&priv->ipv4_off_list);
+
/* neighbor next list */
LIST_INIT(&priv->nn_list);
return 0;
+free_flow_table:
+ rte_hash_free(priv->flow_table);
free_mask_table:
rte_free(priv->mask_table);
free_stats:
@@ -2863,6 +3293,7 @@ struct nfp_mask_id_entry {
app_fw_flower = NFP_PRIV_TO_APP_FW_FLOWER(pf_dev->app_fw_priv);
priv = app_fw_flower->flow_priv;
+ rte_hash_free(priv->pre_tun_table);
rte_hash_free(priv->flow_table);
rte_hash_free(priv->mask_table);
rte_free(priv->stats);
diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h
index 892dbc0..f536da2 100644
--- a/drivers/net/nfp/nfp_flow.h
+++ b/drivers/net/nfp/nfp_flow.h
@@ -6,6 +6,7 @@
#ifndef _NFP_FLOW_H_
#define _NFP_FLOW_H_
+#include <sys/queue.h>
#include <rte_bitops.h>
#include <ethdev_driver.h>
@@ -93,6 +94,7 @@ enum nfp_flower_tun_type {
enum nfp_flow_type {
NFP_FLOW_COMMON,
NFP_FLOW_ENCAP,
+ NFP_FLOW_DECAP,
};
struct nfp_fl_key_ls {
@@ -169,6 +171,14 @@ struct nfp_fl_stats {
uint64_t bytes;
};
+struct nfp_ipv4_addr_entry {
+ LIST_ENTRY(nfp_ipv4_addr_entry) next;
+ rte_be32_t ipv4_addr;
+ int ref_count;
+};
+
+#define NFP_TUN_PRE_TUN_RULE_LIMIT 32
+
struct nfp_flow_priv {
uint32_t hash_seed; /**< Hash seed for hash tables in this structure. */
uint64_t flower_version; /**< Flow version, always increase. */
@@ -184,6 +194,13 @@ struct nfp_flow_priv {
struct nfp_fl_stats_id stats_ids; /**< The stats id ring. */
struct nfp_fl_stats *stats; /**< Store stats of flow. */
rte_spinlock_t stats_lock; /** < Lock the update of 'stats' field. */
+ /* pre tunnel rule */
+ uint16_t pre_tun_cnt; /**< The size of pre tunnel rule */
+ uint8_t pre_tun_bitmap[NFP_TUN_PRE_TUN_RULE_LIMIT]; /**< Bitmap of pre tunnel rule */
+ struct rte_hash *pre_tun_table; /**< Hash table to store pre tunnel rule */
+ /* IPv4 off */
+ LIST_HEAD(, nfp_ipv4_addr_entry) ipv4_off_list; /**< Store ipv4 off */
+ rte_spinlock_t ipv4_off_lock; /**< Lock the ipv4 off list */
/* neighbor next */
LIST_HEAD(, nfp_fl_tun)nn_list; /**< Store nn entry */
};
--
1.8.3.1
^ permalink raw reply [flat|nested] 88+ messages in thread
* [PATCH v2 08/25] net/nfp: prepare for IPv6 UDP tunnel decap flow action
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
` (6 preceding siblings ...)
2022-10-22 8:24 ` [PATCH v2 07/25] net/nfp: prepare for IPv4 UDP tunnel decap " Chaoyong He
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 09/25] net/nfp: support IPv4 VXLAN " Chaoyong He
` (18 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the related data structures and functions, preparing for
the decap action of IPv6 UDP tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_cmsg.c | 42 +++++++++
drivers/net/nfp/flower/nfp_flower_cmsg.h | 24 +++++
drivers/net/nfp/nfp_flow.c | 145 ++++++++++++++++++++++++++++++-
drivers/net/nfp/nfp_flow.h | 9 ++
4 files changed, 217 insertions(+), 3 deletions(-)
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c
index f18f3de..76815cf 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.c
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c
@@ -348,6 +348,48 @@
}
int
+nfp_flower_cmsg_tun_off_v6(struct nfp_app_fw_flower *app_fw_flower)
+{
+ uint16_t cnt;
+ uint32_t count = 0;
+ struct rte_mbuf *mbuf;
+ struct nfp_flow_priv *priv;
+ struct nfp_ipv6_addr_entry *entry;
+ struct nfp_flower_cmsg_tun_ipv6_addr *msg;
+
+ mbuf = rte_pktmbuf_alloc(app_fw_flower->ctrl_pktmbuf_pool);
+ if (mbuf == NULL) {
+ PMD_DRV_LOG(DEBUG, "Failed to alloc mbuf for v6 tun addr");
+ return -ENOMEM;
+ }
+
+ msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_TUN_IPS_V6, sizeof(*msg));
+
+ priv = app_fw_flower->flow_priv;
+ rte_spinlock_lock(&priv->ipv6_off_lock);
+ LIST_FOREACH(entry, &priv->ipv6_off_list, next) {
+ if (count >= NFP_FL_IPV6_ADDRS_MAX) {
+ rte_spinlock_unlock(&priv->ipv6_off_lock);
+ PMD_DRV_LOG(ERR, "IPv6 offload exceeds limit.");
+ return -ERANGE;
+ }
+ memcpy(&msg->ipv6_addr[count * 16], entry->ipv6_addr, 16UL);
+ count++;
+ }
+ msg->count = rte_cpu_to_be_32(count);
+ rte_spinlock_unlock(&priv->ipv6_off_lock);
+
+ cnt = nfp_flower_ctrl_vnic_xmit(app_fw_flower, mbuf);
+ if (cnt == 0) {
+ PMD_DRV_LOG(ERR, "Send cmsg through ctrl vnic failed.");
+ rte_pktmbuf_free(mbuf);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int
nfp_flower_cmsg_pre_tunnel_rule(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_fl_rule_metadata *nfp_flow_meta,
uint16_t mac_idx,
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index 0933dac..61f2f83 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -280,6 +280,29 @@ struct nfp_flower_cmsg_tun_ipv4_addr {
rte_be32_t ipv4_addr[NFP_FL_IPV4_ADDRS_MAX];
};
+#define NFP_FL_IPV6_ADDRS_MAX 4
+
+/*
+ * NFP_FLOWER_CMSG_TYPE_TUN_IP_V6
+ * Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
+ * -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +---------------------------------------------------------------+
+ * 0 | Number of IP Addresses |
+ * +---------------------------------------------------------------+
+ * 1 | IP Address1 #1 |
+ * +---------------------------------------------------------------+
+ * 2 | IP Address1 #2 |
+ * +---------------------------------------------------------------+
+ * | ... |
+ * +---------------------------------------------------------------+
+ * 16 | IP Address4 #4 |
+ * +---------------------------------------------------------------+
+ */
+struct nfp_flower_cmsg_tun_ipv6_addr {
+ rte_be32_t count;
+ uint8_t ipv6_addr[NFP_FL_IPV6_ADDRS_MAX * 16];
+};
+
/*
* NFP_FLOWER_CMSG_TYPE_FLOW_STATS
* Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
@@ -802,6 +825,7 @@ int nfp_flower_cmsg_tun_neigh_v4_rule(struct nfp_app_fw_flower *app_fw_flower,
int nfp_flower_cmsg_tun_neigh_v6_rule(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_flower_cmsg_tun_neigh_v6 *payload);
int nfp_flower_cmsg_tun_off_v4(struct nfp_app_fw_flower *app_fw_flower);
+int nfp_flower_cmsg_tun_off_v6(struct nfp_app_fw_flower *app_fw_flower);
int nfp_flower_cmsg_pre_tunnel_rule(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_fl_rule_metadata *nfp_flow_meta,
uint16_t mac_idx,
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index c088d24..ad484b9 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -476,16 +476,95 @@ struct nfp_pre_tun_entry {
return 0;
}
+__rte_unused static int
+nfp_tun_add_ipv6_off(struct nfp_app_fw_flower *app_fw_flower,
+ uint8_t ipv6[])
+{
+ struct nfp_flow_priv *priv;
+ struct nfp_ipv6_addr_entry *entry;
+ struct nfp_ipv6_addr_entry *tmp_entry;
+
+ priv = app_fw_flower->flow_priv;
+
+ rte_spinlock_lock(&priv->ipv6_off_lock);
+ LIST_FOREACH(entry, &priv->ipv6_off_list, next) {
+ if (!memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr))) {
+ entry->ref_count++;
+ rte_spinlock_unlock(&priv->ipv6_off_lock);
+ return 0;
+ }
+ }
+ rte_spinlock_unlock(&priv->ipv6_off_lock);
+
+ tmp_entry = rte_zmalloc("nfp_ipv6_off", sizeof(struct nfp_ipv6_addr_entry), 0);
+ if (tmp_entry == NULL) {
+ PMD_DRV_LOG(ERR, "Mem error when offloading IP6 address.");
+ return -ENOMEM;
+ }
+ memcpy(tmp_entry->ipv6_addr, ipv6, sizeof(tmp_entry->ipv6_addr));
+ tmp_entry->ref_count = 1;
+
+ rte_spinlock_lock(&priv->ipv6_off_lock);
+ LIST_INSERT_HEAD(&priv->ipv6_off_list, tmp_entry, next);
+ rte_spinlock_unlock(&priv->ipv6_off_lock);
+
+ return nfp_flower_cmsg_tun_off_v6(app_fw_flower);
+}
+
+static int
+nfp_tun_del_ipv6_off(struct nfp_app_fw_flower *app_fw_flower,
+ uint8_t ipv6[])
+{
+ struct nfp_flow_priv *priv;
+ struct nfp_ipv6_addr_entry *entry;
+
+ priv = app_fw_flower->flow_priv;
+
+ rte_spinlock_lock(&priv->ipv6_off_lock);
+ LIST_FOREACH(entry, &priv->ipv6_off_list, next) {
+ if (!memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr))) {
+ entry->ref_count--;
+ if (entry->ref_count == 0) {
+ LIST_REMOVE(entry, next);
+ rte_free(entry);
+ rte_spinlock_unlock(&priv->ipv6_off_lock);
+ return nfp_flower_cmsg_tun_off_v6(app_fw_flower);
+ }
+ break;
+ }
+ }
+ rte_spinlock_unlock(&priv->ipv6_off_lock);
+
+ return 0;
+}
+
static int
nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr,
struct rte_flow *nfp_flow)
{
int ret;
+ uint32_t key_layer2 = 0;
struct nfp_flower_ipv4_udp_tun *udp4;
+ struct nfp_flower_ipv6_udp_tun *udp6;
+ struct nfp_flower_meta_tci *meta_tci;
+ struct nfp_flower_ext_meta *ext_meta = NULL;
- udp4 = (struct nfp_flower_ipv4_udp_tun *)(nfp_flow->payload.mask_data -
- sizeof(struct nfp_flower_ipv4_udp_tun));
- ret = nfp_tun_del_ipv4_off(repr->app_fw_flower, udp4->ipv4.dst);
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
+
+ if (ext_meta != NULL)
+ key_layer2 = rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2);
+
+ if (key_layer2 & NFP_FLOWER_LAYER2_TUN_IPV6) {
+ udp6 = (struct nfp_flower_ipv6_udp_tun *)(nfp_flow->payload.mask_data -
+ sizeof(struct nfp_flower_ipv6_udp_tun));
+ ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, udp6->ipv6.ipv6_dst);
+ } else {
+ udp4 = (struct nfp_flower_ipv4_udp_tun *)(nfp_flow->payload.mask_data -
+ sizeof(struct nfp_flower_ipv4_udp_tun));
+ ret = nfp_tun_del_ipv4_off(repr->app_fw_flower, udp4->ipv4.dst);
+ }
return ret;
}
@@ -2078,6 +2157,59 @@ struct nfp_pre_tun_entry {
return nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &payload);
}
+__rte_unused static int
+nfp_flower_add_tun_neigh_v6_decap(struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow)
+{
+ struct nfp_fl_tun *tmp;
+ struct nfp_fl_tun *tun;
+ struct nfp_flow_priv *priv;
+ struct nfp_flower_ipv6 *ipv6;
+ struct nfp_flower_mac_mpls *eth;
+ struct nfp_flower_in_port *port;
+ struct nfp_flower_meta_tci *meta_tci;
+ struct nfp_flower_cmsg_tun_neigh_v6 payload;
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ port = (struct nfp_flower_in_port *)(meta_tci + 1);
+ eth = (struct nfp_flower_mac_mpls *)(port + 1);
+
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+ ipv6 = (struct nfp_flower_ipv6 *)((char *)eth +
+ sizeof(struct nfp_flower_mac_mpls) +
+ sizeof(struct nfp_flower_tp_ports));
+ else
+ ipv6 = (struct nfp_flower_ipv6 *)((char *)eth +
+ sizeof(struct nfp_flower_mac_mpls));
+
+ tun = &nfp_flow->tun;
+ tun->payload.v6_flag = 1;
+ memcpy(tun->payload.dst.dst_ipv6, ipv6->ipv6_src, sizeof(tun->payload.dst.dst_ipv6));
+ memcpy(tun->payload.src.src_ipv6, ipv6->ipv6_dst, sizeof(tun->payload.src.src_ipv6));
+ memcpy(tun->payload.dst_addr, eth->mac_src, RTE_ETHER_ADDR_LEN);
+ memcpy(tun->payload.src_addr, eth->mac_dst, RTE_ETHER_ADDR_LEN);
+
+ tun->ref_cnt = 1;
+ priv = app_fw_flower->flow_priv;
+ LIST_FOREACH(tmp, &priv->nn_list, next) {
+ if (memcmp(&tmp->payload, &tun->payload, sizeof(struct nfp_fl_tun_entry)) == 0) {
+ tmp->ref_cnt++;
+ return 0;
+ }
+ }
+
+ LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+
+ memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v6));
+ memcpy(payload.dst_ipv6, ipv6->ipv6_src, sizeof(payload.dst_ipv6));
+ memcpy(payload.src_ipv6, ipv6->ipv6_dst, sizeof(payload.src_ipv6));
+ memcpy(payload.common.dst_mac, eth->mac_src, RTE_ETHER_ADDR_LEN);
+ memcpy(payload.common.src_mac, eth->mac_dst, RTE_ETHER_ADDR_LEN);
+ payload.common.port_id = port->in_port;
+
+ return nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &payload);
+}
+
static int
nfp_flower_del_tun_neigh_v6(struct nfp_app_fw_flower *app_fw_flower,
uint8_t *ipv6)
@@ -2401,6 +2533,9 @@ struct nfp_pre_tun_entry {
nfp_mac_idx = (find_entry->mac_index << 8) |
NFP_FLOWER_CMSG_PORT_TYPE_OTHER_PORT |
NFP_TUN_PRE_TUN_IDX_BIT;
+ if (nfp_flow->tun.payload.v6_flag != 0)
+ nfp_mac_idx |= NFP_TUN_PRE_TUN_IPV6_BIT;
+
ret = nfp_flower_cmsg_tun_mac_rule(repr->app_fw_flower, &repr->mac_addr,
nfp_mac_idx, true);
if (ret != 0) {
@@ -3263,6 +3398,10 @@ struct nfp_pre_tun_entry {
rte_spinlock_init(&priv->ipv4_off_lock);
LIST_INIT(&priv->ipv4_off_list);
+ /* ipv6 off list */
+ rte_spinlock_init(&priv->ipv6_off_lock);
+ LIST_INIT(&priv->ipv6_off_list);
+
/* neighbor next list */
LIST_INIT(&priv->nn_list);
diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h
index f536da2..a6994e0 100644
--- a/drivers/net/nfp/nfp_flow.h
+++ b/drivers/net/nfp/nfp_flow.h
@@ -177,6 +177,12 @@ struct nfp_ipv4_addr_entry {
int ref_count;
};
+struct nfp_ipv6_addr_entry {
+ LIST_ENTRY(nfp_ipv6_addr_entry) next;
+ uint8_t ipv6_addr[16];
+ int ref_count;
+};
+
#define NFP_TUN_PRE_TUN_RULE_LIMIT 32
struct nfp_flow_priv {
@@ -201,6 +207,9 @@ struct nfp_flow_priv {
/* IPv4 off */
LIST_HEAD(, nfp_ipv4_addr_entry) ipv4_off_list; /**< Store ipv4 off */
rte_spinlock_t ipv4_off_lock; /**< Lock the ipv4 off list */
+ /* IPv6 off */
+ LIST_HEAD(, nfp_ipv6_addr_entry) ipv6_off_list; /**< Store ipv6 off */
+ rte_spinlock_t ipv6_off_lock; /**< Lock the ipv6 off list */
/* neighbor next */
LIST_HEAD(, nfp_fl_tun)nn_list; /**< Store nn entry */
};
--
1.8.3.1
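The hunk above keeps tunnel-neighbour entries reference counted: if an entry with an identical payload already sits on `nn_list`, only its `ref_cnt` is bumped, otherwise the new entry is inserted at the head. A minimal sketch of the same pattern, using BSD `sys/queue.h` lists and illustrative type names rather than the driver's own:

```c
#include <assert.h>
#include <string.h>
#include <sys/queue.h>

/* Illustrative stand-in for struct nfp_fl_tun; only the fields the
 * dedup logic touches are kept. */
struct tun_entry {
	LIST_ENTRY(tun_entry) next;
	char payload[16];
	int ref_cnt;
};

static LIST_HEAD(tun_head, tun_entry) nn_list = LIST_HEAD_INITIALIZER(nn_list);

/* Same pattern as the hunk: bump ref_cnt on an existing entry with an
 * identical payload, otherwise insert the new entry at the list head. */
static struct tun_entry *
tun_add(struct tun_entry *tun)
{
	struct tun_entry *tmp;

	tun->ref_cnt = 1;
	LIST_FOREACH(tmp, &nn_list, next) {
		if (memcmp(tmp->payload, tun->payload, sizeof(tun->payload)) == 0) {
			tmp->ref_cnt++;
			return tmp;
		}
	}
	LIST_INSERT_HEAD(&nn_list, tun, next);
	return tun;
}

/* Add two entries with equal payloads; the second is deduplicated. */
static int
tun_dedup_demo(void)
{
	static struct tun_entry a, b;

	memcpy(a.payload, "neigh-A", sizeof("neigh-A"));
	memcpy(b.payload, "neigh-A", sizeof("neigh-A"));
	tun_add(&a);
	tun_add(&b);
	return a.ref_cnt;
}
```

Deleting a flow later only has to decrement `ref_cnt` and send the neighbour-delete cmsg once the count drops to zero, which is why the insert path must never add a duplicate.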
^ permalink raw reply [flat|nested] 88+ messages in thread
* [PATCH v2 09/25] net/nfp: support IPv4 VXLAN decap flow action
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
` (7 preceding siblings ...)
2022-10-22 8:24 ` [PATCH v2 08/25] net/nfp: prepare for IPv6 " Chaoyong He
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 10/25] net/nfp: support IPv6 " Chaoyong He
` (17 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the offload support of decap action for IPv4 VXLAN tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 1 +
drivers/net/nfp/nfp_flow.c | 75 +++++++++++++++++++++++++++++++++++++---
2 files changed, 71 insertions(+), 5 deletions(-)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index ff97787..6788520 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -56,4 +56,5 @@ set_mac_src = Y
set_tp_dst = Y
set_tp_src = Y
set_ttl = Y
+vxlan_decap = Y
vxlan_encap = Y
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index ad484b9..ce61446 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -413,7 +413,7 @@ struct nfp_pre_tun_entry {
return 0;
}
-__rte_unused static int
+static int
nfp_tun_add_ipv4_off(struct nfp_app_fw_flower *app_fw_flower,
rte_be32_t ipv4)
{
@@ -908,6 +908,9 @@ struct nfp_pre_tun_entry {
key_ls->act_size += sizeof(struct nfp_fl_act_pre_tun);
key_ls->act_size += sizeof(struct nfp_fl_act_set_tun);
break;
+ case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_VXLAN_DECAP detected");
+ break;
default:
PMD_DRV_LOG(ERR, "Action type %d not supported.", action->type);
return -ENOTSUP;
@@ -1315,7 +1318,7 @@ struct nfp_pre_tun_entry {
}
static int
-nfp_flow_merge_vxlan(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+nfp_flow_merge_vxlan(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
@@ -1323,6 +1326,7 @@ struct nfp_pre_tun_entry {
bool is_mask,
__rte_unused bool is_outer_layer)
{
+ int ret = 0;
const struct rte_vxlan_hdr *hdr;
struct nfp_flower_ipv4_udp_tun *tun4;
struct nfp_flower_ipv6_udp_tun *tun6;
@@ -1351,6 +1355,8 @@ struct nfp_pre_tun_entry {
} else {
tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
tun4->tun_id = hdr->vx_vni;
+ if (!is_mask)
+ ret = nfp_tun_add_ipv4_off(app_fw_flower, tun4->ipv4.dst);
}
vxlan_end:
@@ -1360,7 +1366,7 @@ struct nfp_pre_tun_entry {
else
*mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
- return 0;
+ return ret;
}
/* Graph of supported items and associated process function */
@@ -2049,7 +2055,7 @@ struct nfp_pre_tun_entry {
return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
}
-__rte_unused static int
+static int
nfp_flower_add_tun_neigh_v4_decap(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *nfp_flow)
{
@@ -2427,7 +2433,7 @@ struct nfp_pre_tun_entry {
return true;
}
-__rte_unused static int
+static int
nfp_pre_tun_table_check_add(struct nfp_flower_representor *repr,
uint16_t *index)
{
@@ -2570,6 +2576,49 @@ struct nfp_pre_tun_entry {
}
static int
+nfp_flow_action_tunnel_decap(struct nfp_flower_representor *repr,
+ __rte_unused const struct rte_flow_action *action,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct rte_flow *nfp_flow)
+{
+ int ret;
+ uint16_t nfp_mac_idx = 0;
+ struct nfp_flower_meta_tci *meta_tci;
+ struct nfp_app_fw_flower *app_fw_flower;
+
+ ret = nfp_pre_tun_table_check_add(repr, &nfp_mac_idx);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Pre tunnel table add failed");
+ return -EINVAL;
+ }
+
+ nfp_mac_idx = (nfp_mac_idx << 8) |
+ NFP_FLOWER_CMSG_PORT_TYPE_OTHER_PORT |
+ NFP_TUN_PRE_TUN_IDX_BIT;
+
+ app_fw_flower = repr->app_fw_flower;
+ ret = nfp_flower_cmsg_tun_mac_rule(app_fw_flower, &repr->mac_addr,
+ nfp_mac_idx, false);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Send tunnel mac rule failed");
+ return -EINVAL;
+ }
+
+ ret = nfp_flower_cmsg_pre_tunnel_rule(app_fw_flower, nfp_flow_meta,
+ nfp_mac_idx, false);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Send pre tunnel rule failed");
+ return -EINVAL;
+ }
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4)
+ return nfp_flower_add_tun_neigh_v4_decap(app_fw_flower, nfp_flow);
+ else
+ return -ENOTSUP;
+}
+
+static int
nfp_flow_compile_action(struct nfp_flower_representor *representor,
const struct rte_flow_action actions[],
struct rte_flow *nfp_flow)
@@ -2744,6 +2793,17 @@ struct nfp_pre_tun_entry {
position += sizeof(struct nfp_fl_act_set_tun);
nfp_flow->type = NFP_FLOW_ENCAP;
break;
+ case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
+ PMD_DRV_LOG(DEBUG, "process RTE_FLOW_ACTION_TYPE_TUNNEL_DECAP");
+ ret = nfp_flow_action_tunnel_decap(representor, action,
+ nfp_flow_meta, nfp_flow);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to process tunnel decap");
+ return ret;
+ }
+ nfp_flow->type = NFP_FLOW_DECAP;
+ nfp_flow->install_flag = false;
+ break;
default:
PMD_DRV_LOG(ERR, "Unsupported action type: %d", action->type);
return -ENOTSUP;
@@ -3214,6 +3274,11 @@ struct nfp_pre_tun_entry {
}
switch (tunnel->type) {
+ case RTE_FLOW_ITEM_TYPE_VXLAN:
+ nfp_action->type = RTE_FLOW_ACTION_TYPE_VXLAN_DECAP;
+ *pmd_actions = nfp_action;
+ *num_of_actions = 1;
+ break;
default:
*pmd_actions = NULL;
*num_of_actions = 0;
--
1.8.3.1
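The decap path above builds `nfp_mac_idx` by shifting the pre-tunnel table index into the upper byte and OR-ing port-type and flag bits into the lower byte. A sketch of that packing; the constant values below are placeholders for illustration only, the real ones live in the nfp driver headers:

```c
#include <assert.h>
#include <stdint.h>

/* Placeholder bit values for illustration only; the real constants
 * (NFP_FLOWER_CMSG_PORT_TYPE_OTHER_PORT, NFP_TUN_PRE_TUN_IDX_BIT,
 * NFP_TUN_PRE_TUN_IPV6_BIT) are defined by the driver and may differ. */
#define CMSG_PORT_TYPE_OTHER_PORT 0x4u
#define PRE_TUN_IDX_BIT           (1u << 3)
#define PRE_TUN_IPV6_BIT          (1u << 7)

/* Pack a pre-tunnel table index as the decap action does: the table
 * index occupies the upper byte, port-type and flag bits the lower. */
static uint16_t
pack_pre_tun_idx(uint16_t table_idx, int is_ipv6)
{
	uint16_t idx;

	idx = (uint16_t)((table_idx << 8) |
			CMSG_PORT_TYPE_OTHER_PORT | PRE_TUN_IDX_BIT);
	if (is_ipv6)
		idx |= PRE_TUN_IPV6_BIT;

	return idx;
}
```

The packed index is then handed to both the tunnel-MAC rule and the pre-tunnel rule cmsg, so firmware can associate the two with the same table slot.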
* [PATCH v2 10/25] net/nfp: support IPv6 VXLAN decap flow action
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
` (8 preceding siblings ...)
2022-10-22 8:24 ` [PATCH v2 09/25] net/nfp: support IPv4 VXLAN " Chaoyong He
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 11/25] net/nfp: support IPv4 GENEVE encap " Chaoyong He
` (16 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the offload support of decap action for IPv6 VXLAN tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfp_flow.c | 18 ++++++++++++++----
1 file changed, 14 insertions(+), 4 deletions(-)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index ce61446..1abb02b 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -476,7 +476,7 @@ struct nfp_pre_tun_entry {
return 0;
}
-__rte_unused static int
+static int
nfp_tun_add_ipv6_off(struct nfp_app_fw_flower *app_fw_flower,
uint8_t ipv6[])
{
@@ -1352,6 +1352,8 @@ struct nfp_pre_tun_entry {
NFP_FLOWER_LAYER2_TUN_IPV6)) {
tun6 = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
tun6->tun_id = hdr->vx_vni;
+ if (!is_mask)
+ ret = nfp_tun_add_ipv6_off(app_fw_flower, tun6->ipv6.ipv6_dst);
} else {
tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
tun4->tun_id = hdr->vx_vni;
@@ -2163,7 +2165,7 @@ struct nfp_pre_tun_entry {
return nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &payload);
}
-__rte_unused static int
+static int
nfp_flower_add_tun_neigh_v6_decap(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *nfp_flow)
{
@@ -2577,7 +2579,7 @@ struct nfp_pre_tun_entry {
static int
nfp_flow_action_tunnel_decap(struct nfp_flower_representor *repr,
- __rte_unused const struct rte_flow_action *action,
+ const struct rte_flow_action *action,
struct nfp_fl_rule_metadata *nfp_flow_meta,
struct rte_flow *nfp_flow)
{
@@ -2595,6 +2597,8 @@ struct nfp_pre_tun_entry {
nfp_mac_idx = (nfp_mac_idx << 8) |
NFP_FLOWER_CMSG_PORT_TYPE_OTHER_PORT |
NFP_TUN_PRE_TUN_IDX_BIT;
+ if (action->conf != NULL)
+ nfp_mac_idx |= NFP_TUN_PRE_TUN_IPV6_BIT;
app_fw_flower = repr->app_fw_flower;
ret = nfp_flower_cmsg_tun_mac_rule(app_fw_flower, &repr->mac_addr,
@@ -2615,7 +2619,7 @@ struct nfp_pre_tun_entry {
if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4)
return nfp_flower_add_tun_neigh_v4_decap(app_fw_flower, nfp_flow);
else
- return -ENOTSUP;
+ return nfp_flower_add_tun_neigh_v6_decap(app_fw_flower, nfp_flow);
}
static int
@@ -2803,6 +2807,8 @@ struct nfp_pre_tun_entry {
}
nfp_flow->type = NFP_FLOW_DECAP;
nfp_flow->install_flag = false;
+ if (action->conf != NULL)
+ nfp_flow->tun.payload.v6_flag = 1;
break;
default:
PMD_DRV_LOG(ERR, "Unsupported action type: %d", action->type);
@@ -3273,6 +3279,9 @@ struct nfp_pre_tun_entry {
return -ENOMEM;
}
+ if (tunnel->is_ipv6)
+ nfp_action->conf = (void *)~0;
+
switch (tunnel->type) {
case RTE_FLOW_ITEM_TYPE_VXLAN:
nfp_action->type = RTE_FLOW_ACTION_TYPE_VXLAN_DECAP;
@@ -3300,6 +3309,7 @@ struct nfp_pre_tun_entry {
for (i = 0; i < num_of_actions; i++) {
nfp_action = &pmd_actions[i];
+ nfp_action->conf = NULL;
rte_free(nfp_action);
}
--
1.8.3.1
* [PATCH v2 11/25] net/nfp: support IPv4 GENEVE encap flow action
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
` (9 preceding siblings ...)
2022-10-22 8:24 ` [PATCH v2 10/25] net/nfp: support IPv6 " Chaoyong He
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 12/25] net/nfp: support IPv6 " Chaoyong He
` (15 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the offload support of encap action for IPv4 GENEVE tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 1 +
drivers/net/nfp/nfp_flow.c | 107 +++++++++++++++++++++++++++++++++++++++
2 files changed, 108 insertions(+)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index 6788520..95c97c2 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -44,6 +44,7 @@ of_pop_vlan = Y
of_push_vlan = Y
of_set_vlan_pcp = Y
of_set_vlan_vid = Y
+raw_encap = Y
port_id = Y
set_ipv4_dscp = Y
set_ipv4_dst = Y
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 1abb02b..faa313e 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -38,6 +38,12 @@ struct vxlan_data {
__VA_ARGS__, RTE_FLOW_ITEM_TYPE_END, \
})
+/* Data length of various conf of raw encap action */
+#define GENEVE_V4_LEN (sizeof(struct rte_ether_hdr) + \
+ sizeof(struct rte_ipv4_hdr) + \
+ sizeof(struct rte_udp_hdr) + \
+ sizeof(struct rte_flow_item_geneve))
+
/* Process structure associated with a flow item */
struct nfp_flow_item_proc {
/* Bit-mask for fields supported by this PMD. */
@@ -908,6 +914,11 @@ struct nfp_pre_tun_entry {
key_ls->act_size += sizeof(struct nfp_fl_act_pre_tun);
key_ls->act_size += sizeof(struct nfp_fl_act_set_tun);
break;
+ case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_RAW_ENCAP detected");
+ key_ls->act_size += sizeof(struct nfp_fl_act_pre_tun);
+ key_ls->act_size += sizeof(struct nfp_fl_act_set_tun);
+ break;
case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_VXLAN_DECAP detected");
break;
@@ -2623,6 +2634,88 @@ struct nfp_pre_tun_entry {
}
static int
+nfp_flow_action_geneve_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct rte_flow_action_raw_encap *raw_encap,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ uint64_t tun_id;
+ const struct rte_ether_hdr *eth;
+ const struct rte_flow_item_udp *udp;
+ const struct rte_flow_item_ipv4 *ipv4;
+ const struct rte_flow_item_geneve *geneve;
+ struct nfp_fl_act_pre_tun *pre_tun;
+ struct nfp_fl_act_set_tun *set_tun;
+ size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
+
+ eth = (const struct rte_ether_hdr *)raw_encap->data;
+ ipv4 = (const struct rte_flow_item_ipv4 *)(eth + 1);
+ udp = (const struct rte_flow_item_udp *)(ipv4 + 1);
+ geneve = (const struct rte_flow_item_geneve *)(udp + 1);
+
+ pre_tun = (struct nfp_fl_act_pre_tun *)actions;
+ memset(pre_tun, 0, act_pre_size);
+ nfp_flow_pre_tun_v4_process(pre_tun, ipv4->hdr.dst_addr);
+
+ set_tun = (struct nfp_fl_act_set_tun *)(act_data + act_pre_size);
+ memset(set_tun, 0, act_set_size);
+ tun_id = (geneve->vni[0] << 16) | (geneve->vni[1] << 8) | geneve->vni[2];
+ nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GENEVE, tun_id,
+ ipv4->hdr.time_to_live, ipv4->hdr.type_of_service);
+ set_tun->tun_proto = geneve->protocol;
+
+ /* Send the tunnel neighbor cmsg to fw */
+ return nfp_flower_add_tun_neigh_v4_encap(app_fw_flower, nfp_flow_meta,
+ tun, eth, ipv4);
+}
+
+static int
+nfp_flow_action_raw_encap(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct rte_flow_action *action,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ int ret;
+ size_t act_len;
+ size_t act_pre_size;
+ const struct rte_flow_action_raw_encap *raw_encap;
+
+ raw_encap = action->conf;
+ if (raw_encap->data == NULL) {
+ PMD_DRV_LOG(ERR, "The raw encap action conf is NULL.");
+ return -EINVAL;
+ }
+
+ /* The pre-tunnel action must be the first in the action list.
+ * If other actions are already present, they need to be
+ * pushed forward to make room for it.
+ */
+ act_len = act_data - actions;
+ if (act_len != 0) {
+ act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ memmove(actions + act_pre_size, actions, act_len);
+ }
+
+ switch (raw_encap->size) {
+ case GENEVE_V4_LEN:
+ ret = nfp_flow_action_geneve_encap_v4(app_fw_flower, act_data,
+ actions, raw_encap, nfp_flow_meta, tun);
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "Not a valid raw encap action conf.");
+ ret = -EINVAL;
+ break;
+ }
+
+ return ret;
+}
+
+static int
nfp_flow_compile_action(struct nfp_flower_representor *representor,
const struct rte_flow_action actions[],
struct rte_flow *nfp_flow)
@@ -2797,6 +2890,20 @@ struct nfp_pre_tun_entry {
position += sizeof(struct nfp_fl_act_set_tun);
nfp_flow->type = NFP_FLOW_ENCAP;
break;
+ case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+ PMD_DRV_LOG(DEBUG, "Process RTE_FLOW_ACTION_TYPE_RAW_ENCAP");
+ ret = nfp_flow_action_raw_encap(representor->app_fw_flower,
+ position, action_data, action, nfp_flow_meta,
+ &nfp_flow->tun);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to process"
+ " RTE_FLOW_ACTION_TYPE_RAW_ENCAP");
+ return ret;
+ }
+ position += sizeof(struct nfp_fl_act_pre_tun);
+ position += sizeof(struct nfp_fl_act_set_tun);
+ nfp_flow->type = NFP_FLOW_ENCAP;
+ break;
case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
PMD_DRV_LOG(DEBUG, "process RTE_FLOW_ACTION_TYPE_TUNNEL_DECAP");
ret = nfp_flow_action_tunnel_decap(representor, action,
--
1.8.3.1
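The raw-encap parser above treats `raw_encap->data` as a flat buffer of stacked headers, selects the tunnel type by total size (`GENEVE_V4_LEN`), walks it with `hdr + 1` pointer steps, and packs the 3-byte VNI into a tunnel id. A self-contained sketch of that walk, using byte-exact stand-in structs instead of the `rte_*` types:

```c
#include <assert.h>
#include <stdint.h>

/* Byte-exact stand-ins for the rte_* headers (all byte arrays, so no
 * padding or alignment surprises): 14 + 20 + 8 + 8 = 50 bytes total. */
struct eth_hdr    { uint8_t dst[6], src[6], type[2]; };
struct ipv4_hdr   { uint8_t vihl, tos, len[2], id[2], frag[2],
                            ttl, proto, csum[2], src[4], dst[4]; };
struct udp_hdr    { uint8_t sport[2], dport[2], len[2], csum[2]; };
struct geneve_hdr { uint8_t flags[2], proto[2], vni[3], rsvd; };

#define GENEVE_V4_LEN (sizeof(struct eth_hdr) + sizeof(struct ipv4_hdr) + \
		sizeof(struct udp_hdr) + sizeof(struct geneve_hdr))

/* Walk the flat raw_encap buffer with the same `hdr + 1` pointer steps
 * as the action parser, then pack the 24-bit VNI into a tunnel id. */
static uint32_t
parse_geneve_vni(const void *data)
{
	const struct eth_hdr *eth = data;
	const struct ipv4_hdr *ip4 = (const struct ipv4_hdr *)(eth + 1);
	const struct udp_hdr *udp = (const struct udp_hdr *)(ip4 + 1);
	const struct geneve_hdr *gnv = (const struct geneve_hdr *)(udp + 1);

	return ((uint32_t)gnv->vni[0] << 16) |
			((uint32_t)gnv->vni[1] << 8) | gnv->vni[2];
}

/* Build a 50-byte buffer carrying VNI 0x123456 and parse it back. */
static uint32_t
geneve_demo(void)
{
	static uint8_t buf[GENEVE_V4_LEN];
	struct geneve_hdr *gnv;

	gnv = (struct geneve_hdr *)(buf + sizeof(struct eth_hdr) +
			sizeof(struct ipv4_hdr) + sizeof(struct udp_hdr));
	gnv->vni[0] = 0x12;
	gnv->vni[1] = 0x34;
	gnv->vni[2] = 0x56;

	return parse_geneve_vni(buf);
}
```

Dispatching on `raw_encap->size` is what lets one `RAW_ENCAP` handler serve both the IPv4 and (in the next patch) IPv6 GENEVE layouts.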
* [PATCH v2 12/25] net/nfp: support IPv6 GENEVE encap flow action
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
` (10 preceding siblings ...)
2022-10-22 8:24 ` [PATCH v2 11/25] net/nfp: support IPv4 GENEVE encap " Chaoyong He
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 13/25] net/nfp: support IPv4 GENEVE flow item Chaoyong He
` (14 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the offload support of encap action for IPv6 GENEVE tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfp_flow.c | 49 ++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 49 insertions(+)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index faa313e..ef19231 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -43,6 +43,10 @@ struct vxlan_data {
sizeof(struct rte_ipv4_hdr) + \
sizeof(struct rte_udp_hdr) + \
sizeof(struct rte_flow_item_geneve))
+#define GENEVE_V6_LEN (sizeof(struct rte_ether_hdr) + \
+ sizeof(struct rte_ipv6_hdr) + \
+ sizeof(struct rte_udp_hdr) + \
+ sizeof(struct rte_flow_item_geneve))
/* Process structure associated with a flow item */
struct nfp_flow_item_proc {
@@ -2673,6 +2677,47 @@ struct nfp_pre_tun_entry {
}
static int
+nfp_flow_action_geneve_encap_v6(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct rte_flow_action_raw_encap *raw_encap,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ uint8_t tos;
+ uint64_t tun_id;
+ const struct rte_ether_hdr *eth;
+ const struct rte_flow_item_udp *udp;
+ const struct rte_flow_item_ipv6 *ipv6;
+ const struct rte_flow_item_geneve *geneve;
+ struct nfp_fl_act_pre_tun *pre_tun;
+ struct nfp_fl_act_set_tun *set_tun;
+ size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
+
+ eth = (const struct rte_ether_hdr *)raw_encap->data;
+ ipv6 = (const struct rte_flow_item_ipv6 *)(eth + 1);
+ udp = (const struct rte_flow_item_udp *)(ipv6 + 1);
+ geneve = (const struct rte_flow_item_geneve *)(udp + 1);
+
+ pre_tun = (struct nfp_fl_act_pre_tun *)actions;
+ memset(pre_tun, 0, act_pre_size);
+ nfp_flow_pre_tun_v6_process(pre_tun, ipv6->hdr.dst_addr);
+
+ set_tun = (struct nfp_fl_act_set_tun *)(act_data + act_pre_size);
+ memset(set_tun, 0, act_set_size);
+ tos = (ipv6->hdr.vtc_flow >> RTE_IPV6_HDR_TC_SHIFT) & 0xff;
+ tun_id = (geneve->vni[0] << 16) | (geneve->vni[1] << 8) | geneve->vni[2];
+ nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GENEVE, tun_id,
+ ipv6->hdr.hop_limits, tos);
+ set_tun->tun_proto = geneve->protocol;
+
+ /* Send the tunnel neighbor cmsg to fw */
+ return nfp_flower_add_tun_neigh_v6_encap(app_fw_flower, nfp_flow_meta,
+ tun, eth, ipv6);
+}
+
+static int
nfp_flow_action_raw_encap(struct nfp_app_fw_flower *app_fw_flower,
char *act_data,
char *actions,
@@ -2706,6 +2751,10 @@ struct nfp_pre_tun_entry {
ret = nfp_flow_action_geneve_encap_v4(app_fw_flower, act_data,
actions, raw_encap, nfp_flow_meta, tun);
break;
+ case GENEVE_V6_LEN:
+ ret = nfp_flow_action_geneve_encap_v6(app_fw_flower, act_data,
+ actions, raw_encap, nfp_flow_meta, tun);
+ break;
default:
+ PMD_DRV_LOG(ERR, "Not a valid raw encap action conf.");
ret = -EINVAL;
--
1.8.3.1
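The IPv6 path above recovers the traffic class from `vtc_flow` with a 20-bit shift (`RTE_IPV6_HDR_TC_SHIFT` in DPDK). A small sketch of that field layout; host byte order is assumed here for simplicity, whereas the on-wire field is big-endian:

```c
#include <assert.h>
#include <stdint.h>

/* vtc_flow packs version (4 bits), traffic class (8 bits) and flow
 * label (20 bits); 20 matches DPDK's RTE_IPV6_HDR_TC_SHIFT. */
#define IPV6_HDR_TC_SHIFT 20

static uint32_t
make_vtc_flow(uint8_t tc, uint32_t flow_label)
{
	return (6u << 28) | ((uint32_t)tc << IPV6_HDR_TC_SHIFT) |
			(flow_label & 0xfffffu);
}

/* Recover the traffic class the way the GENEVE v6 encap path does. */
static uint8_t
ipv6_tc(uint32_t vtc_flow)
{
	return (uint8_t)((vtc_flow >> IPV6_HDR_TC_SHIFT) & 0xff);
}
```

The extracted class is then passed as the `tos` argument of `nfp_flow_set_tun_process()`, mirroring `type_of_service` on the IPv4 side.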
^ permalink raw reply [flat|nested] 88+ messages in thread
* [PATCH v2 13/25] net/nfp: support IPv4 GENEVE flow item
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
` (11 preceding siblings ...)
2022-10-22 8:24 ` [PATCH v2 12/25] net/nfp: support IPv6 " Chaoyong He
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 14/25] net/nfp: support IPv6 " Chaoyong He
` (13 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the corresponding logic to support the offload of the
IPv4 GENEVE item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 1 +
drivers/net/nfp/nfp_flow.c | 75 ++++++++++++++++++++++++++++++++++++++--
2 files changed, 74 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index 95c97c2..14fb6e0 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -28,6 +28,7 @@ Usage doc = Y
[rte_flow items]
eth = Y
+geneve = Y
ipv4 = Y
ipv6 = Y
port_id = Y
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index ef19231..a6adfa5 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -769,6 +769,23 @@ struct nfp_pre_tun_entry {
return -EINVAL;
}
break;
+ case RTE_FLOW_ITEM_TYPE_GENEVE:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_GENEVE detected");
+ /* Clear IPv4 bits */
+ key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV4;
+ key_ls->tun_type = NFP_FL_TUN_GENEVE;
+ key_ls->key_layer |= NFP_FLOWER_LAYER_EXT_META;
+ key_ls->key_layer_two |= NFP_FLOWER_LAYER2_GENEVE;
+ key_ls->key_size += sizeof(struct nfp_flower_ext_meta);
+ if (outer_ip4_flag) {
+ key_ls->key_size += sizeof(struct nfp_flower_ipv4_udp_tun);
+ /*
+ * The outer l3 layer information is
+ * in `struct nfp_flower_ipv4_udp_tun`
+ */
+ key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
+ }
+ break;
default:
PMD_DRV_LOG(ERR, "Item type %d not supported.", item->type);
return -ENOTSUP;
@@ -960,12 +977,22 @@ struct nfp_pre_tun_entry {
static bool
nfp_flow_is_tunnel(struct rte_flow *nfp_flow)
{
+ uint32_t key_layer2;
+ struct nfp_flower_ext_meta *ext_meta;
struct nfp_flower_meta_tci *meta_tci;
meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_VXLAN)
return true;
+ if (!(meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META))
+ return false;
+
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
+ key_layer2 = rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2);
+ if (key_layer2 & NFP_FLOWER_LAYER2_GENEVE)
+ return true;
+
return false;
}
@@ -1386,6 +1413,39 @@ struct nfp_pre_tun_entry {
return ret;
}
+static int
+nfp_flow_merge_geneve(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ __rte_unused struct rte_flow *nfp_flow,
+ char **mbuf_off,
+ const struct rte_flow_item *item,
+ const struct nfp_flow_item_proc *proc,
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
+{
+ struct nfp_flower_ipv4_udp_tun *tun4;
+ const struct rte_flow_item_geneve *spec;
+ const struct rte_flow_item_geneve *mask;
+ const struct rte_flow_item_geneve *geneve;
+
+ spec = item->spec;
+ if (spec == NULL) {
+ PMD_DRV_LOG(DEBUG, "nfp flow merge geneve: no item->spec!");
+ goto geneve_end;
+ }
+
+ mask = item->mask ? item->mask : proc->mask_default;
+ geneve = is_mask ? mask : spec;
+
+ tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
+ tun4->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
+ (geneve->vni[1] << 8) | (geneve->vni[2]));
+
+geneve_end:
+ *mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
+
+ return 0;
+}
+
/* Graph of supported items and associated process function */
static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
[RTE_FLOW_ITEM_TYPE_END] = {
@@ -1474,7 +1534,8 @@ struct nfp_pre_tun_entry {
.merge = nfp_flow_merge_tcp,
},
[RTE_FLOW_ITEM_TYPE_UDP] = {
- .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VXLAN),
+ .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VXLAN,
+ RTE_FLOW_ITEM_TYPE_GENEVE),
.mask_support = &(const struct rte_flow_item_udp){
.hdr = {
.src_port = RTE_BE16(0xffff),
@@ -1507,6 +1568,15 @@ struct nfp_pre_tun_entry {
.mask_sz = sizeof(struct rte_flow_item_vxlan),
.merge = nfp_flow_merge_vxlan,
},
+ [RTE_FLOW_ITEM_TYPE_GENEVE] = {
+ .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH),
+ .mask_support = &(const struct rte_flow_item_geneve){
+ .vni = "\xff\xff\xff",
+ },
+ .mask_default = &rte_flow_item_geneve_mask,
+ .mask_sz = sizeof(struct rte_flow_item_geneve),
+ .merge = nfp_flow_merge_geneve,
+ },
};
static int
@@ -1563,7 +1633,8 @@ struct nfp_pre_tun_entry {
static bool
nfp_flow_is_tun_item(const struct rte_flow_item *item)
{
- if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
+ if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN ||
+ item->type == RTE_FLOW_ITEM_TYPE_GENEVE)
return true;
return false;
--
1.8.3.1
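The item-merge callbacks above run twice per flow, once with the spec and once with the mask, so the hardware effectively compares `packet & mask` against `spec & mask`. With the full-ones default VNI mask (`"\xff\xff\xff"`) that is an exact 24-bit match; a zero mask wildcards the VNI. A sketch of that matching semantic:

```c
#include <assert.h>
#include <stdint.h>

/* Masked match over the 3-byte GENEVE VNI: a byte matches when the
 * masked packet value equals the masked spec value. */
static int
vni_match(const uint8_t pkt[3], const uint8_t spec[3], const uint8_t mask[3])
{
	int i;

	for (i = 0; i < 3; i++) {
		if ((pkt[i] & mask[i]) != (spec[i] & mask[i]))
			return 0;
	}

	return 1;
}
```

This is why the merge function writes `tun_id` for both passes: the spec pass fills the value half of the key, the mask pass fills the mask half.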
^ permalink raw reply [flat|nested] 88+ messages in thread
* [PATCH v2 14/25] net/nfp: support IPv6 GENEVE flow item
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
` (12 preceding siblings ...)
2022-10-22 8:24 ` [PATCH v2 13/25] net/nfp: support IPv4 GENEVE flow item Chaoyong He
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 15/25] net/nfp: support IPv4 GENEVE decap flow action Chaoyong He
` (12 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the corresponding logic to support the offload of the
IPv6 GENEVE item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfp_flow.c | 43 +++++++++++++++++++++++++++++++++++++------
1 file changed, 37 insertions(+), 6 deletions(-)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index a6adfa5..48a47a6 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -771,8 +771,9 @@ struct nfp_pre_tun_entry {
break;
case RTE_FLOW_ITEM_TYPE_GENEVE:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_GENEVE detected");
- /* Clear IPv4 bits */
+ /* Clear IPv4 and IPv6 bits */
key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV4;
+ key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV6;
key_ls->tun_type = NFP_FL_TUN_GENEVE;
key_ls->key_layer |= NFP_FLOWER_LAYER_EXT_META;
key_ls->key_layer_two |= NFP_FLOWER_LAYER2_GENEVE;
@@ -784,6 +785,17 @@ struct nfp_pre_tun_entry {
* in `struct nfp_flower_ipv4_udp_tun`
*/
key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
+ } else if (outer_ip6_flag) {
+ key_ls->key_layer_two |= NFP_FLOWER_LAYER2_TUN_IPV6;
+ key_ls->key_size += sizeof(struct nfp_flower_ipv6_udp_tun);
+ /*
+ * The outer l3 layer information is
+ * in `struct nfp_flower_ipv6_udp_tun`
+ */
+ key_ls->key_size -= sizeof(struct nfp_flower_ipv6);
+ } else {
+ PMD_DRV_LOG(ERR, "No outer IP layer for GENEVE tunnel.");
+ return -EINVAL;
}
break;
default:
@@ -1415,7 +1427,7 @@ struct nfp_pre_tun_entry {
static int
nfp_flow_merge_geneve(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
- __rte_unused struct rte_flow *nfp_flow,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -1423,9 +1435,16 @@ struct nfp_pre_tun_entry {
__rte_unused bool is_outer_layer)
{
struct nfp_flower_ipv4_udp_tun *tun4;
+ struct nfp_flower_ipv6_udp_tun *tun6;
+ struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item_geneve *spec;
const struct rte_flow_item_geneve *mask;
const struct rte_flow_item_geneve *geneve;
+ struct nfp_flower_ext_meta *ext_meta = NULL;
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
spec = item->spec;
if (spec == NULL) {
@@ -1436,12 +1455,24 @@ struct nfp_pre_tun_entry {
mask = item->mask ? item->mask : proc->mask_default;
geneve = is_mask ? mask : spec;
- tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
- tun4->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
- (geneve->vni[1] << 8) | (geneve->vni[2]));
+ if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_TUN_IPV6)) {
+ tun6 = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
+ tun6->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
+ (geneve->vni[1] << 8) | (geneve->vni[2]));
+ } else {
+ tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
+ tun4->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
+ (geneve->vni[1] << 8) | (geneve->vni[2]));
+ }
geneve_end:
- *mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
+ if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_TUN_IPV6)) {
+ *mbuf_off += sizeof(struct nfp_flower_ipv6_udp_tun);
+ } else {
+ *mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
+ }
return 0;
}
--
1.8.3.1
* [PATCH v2 15/25] net/nfp: support IPv4 GENEVE decap flow action
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
` (13 preceding siblings ...)
2022-10-22 8:24 ` [PATCH v2 14/25] net/nfp: support IPv6 " Chaoyong He
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 16/25] net/nfp: support IPv6 " Chaoyong He
` (11 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the offload support of decap action for IPv4 GENEVE tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 1 +
drivers/net/nfp/nfp_flow.c | 16 ++++++++++++++--
2 files changed, 15 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index 14fb6e0..c017be1 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -45,6 +45,7 @@ of_pop_vlan = Y
of_push_vlan = Y
of_set_vlan_pcp = Y
of_set_vlan_vid = Y
+raw_decap = Y
raw_encap = Y
port_id = Y
set_ipv4_dscp = Y
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 48a47a6..223f2e7 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -955,6 +955,9 @@ struct nfp_pre_tun_entry {
case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_VXLAN_DECAP detected");
break;
+ case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_RAW_DECAP detected");
+ break;
default:
PMD_DRV_LOG(ERR, "Action type %d not supported.", action->type);
return -ENOTSUP;
@@ -1426,7 +1429,7 @@ struct nfp_pre_tun_entry {
}
static int
-nfp_flow_merge_geneve(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+nfp_flow_merge_geneve(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
@@ -1434,6 +1437,7 @@ struct nfp_pre_tun_entry {
bool is_mask,
__rte_unused bool is_outer_layer)
{
+ int ret = 0;
struct nfp_flower_ipv4_udp_tun *tun4;
struct nfp_flower_ipv6_udp_tun *tun6;
struct nfp_flower_meta_tci *meta_tci;
@@ -1464,6 +1468,8 @@ struct nfp_pre_tun_entry {
tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
tun4->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
(geneve->vni[1] << 8) | (geneve->vni[2]));
+ if (!is_mask)
+ ret = nfp_tun_add_ipv4_off(app_fw_flower, tun4->ipv4.dst);
}
geneve_end:
@@ -1474,7 +1480,7 @@ struct nfp_pre_tun_entry {
*mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
}
- return 0;
+ return ret;
}
/* Graph of supported items and associated process function */
@@ -3056,6 +3062,7 @@ struct nfp_pre_tun_entry {
nfp_flow->type = NFP_FLOW_ENCAP;
break;
case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
+ case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
PMD_DRV_LOG(DEBUG, "process RTE_FLOW_ACTION_TYPE_TUNNEL_DECAP");
ret = nfp_flow_action_tunnel_decap(representor, action,
nfp_flow_meta, nfp_flow);
@@ -3546,6 +3553,11 @@ struct nfp_pre_tun_entry {
*pmd_actions = nfp_action;
*num_of_actions = 1;
break;
+ case RTE_FLOW_ITEM_TYPE_GENEVE:
+ nfp_action->type = RTE_FLOW_ACTION_TYPE_RAW_DECAP;
+ *pmd_actions = nfp_action;
+ *num_of_actions = 1;
+ break;
default:
*pmd_actions = NULL;
*num_of_actions = 0;
--
1.8.3.1
* [PATCH v2 16/25] net/nfp: support IPv6 GENEVE decap flow action
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
` (14 preceding siblings ...)
2022-10-22 8:24 ` [PATCH v2 15/25] net/nfp: support IPv4 GENEVE decap flow action Chaoyong He
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 17/25] net/nfp: support IPv4 NVGRE encap " Chaoyong He
` (10 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add offload support for the decap action of IPv6 GENEVE tunnels.

Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfp_flow.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 223f2e7..c7daa14 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1464,6 +1464,8 @@ struct nfp_pre_tun_entry {
tun6 = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
tun6->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
(geneve->vni[1] << 8) | (geneve->vni[2]));
+ if (!is_mask)
+ ret = nfp_tun_add_ipv6_off(app_fw_flower, tun6->ipv6.ipv6_dst);
} else {
tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
tun4->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
--
1.8.3.1
* [PATCH v2 17/25] net/nfp: support IPv4 NVGRE encap flow action
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
` (15 preceding siblings ...)
2022-10-22 8:24 ` [PATCH v2 16/25] net/nfp: support IPv6 " Chaoyong He
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 18/25] net/nfp: support IPv6 " Chaoyong He
` (9 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add offload support for the encap action of IPv4 NVGRE tunnels.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 1 +
drivers/net/nfp/nfp_flow.c | 43 ++++++++++++++++++++++++++++++++++++++++
2 files changed, 44 insertions(+)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index c017be1..f6b658b 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -41,6 +41,7 @@ vxlan = Y
[rte_flow actions]
count = Y
drop = Y
+nvgre_encap = Y
of_pop_vlan = Y
of_push_vlan = Y
of_set_vlan_pcp = Y
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index c7daa14..00b4c6d 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -47,6 +47,10 @@ struct vxlan_data {
sizeof(struct rte_ipv6_hdr) + \
sizeof(struct rte_udp_hdr) + \
sizeof(struct rte_flow_item_geneve))
+#define NVGRE_V4_LEN (sizeof(struct rte_ether_hdr) + \
+ sizeof(struct rte_ipv4_hdr) + \
+ sizeof(struct rte_flow_item_gre) + \
+ sizeof(rte_be32_t)) /* gre key */
/* Process structure associated with a flow item */
struct nfp_flow_item_proc {
@@ -2828,6 +2832,41 @@ struct nfp_pre_tun_entry {
}
static int
+nfp_flow_action_nvgre_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct rte_flow_action_raw_encap *raw_encap,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ const struct rte_ether_hdr *eth;
+ const struct rte_flow_item_ipv4 *ipv4;
+ const struct rte_flow_item_gre *gre;
+ struct nfp_fl_act_pre_tun *pre_tun;
+ struct nfp_fl_act_set_tun *set_tun;
+ size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
+
+ eth = (const struct rte_ether_hdr *)raw_encap->data;
+ ipv4 = (const struct rte_flow_item_ipv4 *)(eth + 1);
+ gre = (const struct rte_flow_item_gre *)(ipv4 + 1);
+
+ pre_tun = (struct nfp_fl_act_pre_tun *)actions;
+ memset(pre_tun, 0, act_pre_size);
+ nfp_flow_pre_tun_v4_process(pre_tun, ipv4->hdr.dst_addr);
+
+ set_tun = (struct nfp_fl_act_set_tun *)(act_data + act_pre_size);
+ memset(set_tun, 0, act_set_size);
+ nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GRE, 0,
+ ipv4->hdr.time_to_live, ipv4->hdr.type_of_service);
+ set_tun->tun_proto = gre->protocol;
+
+ /* Send the tunnel neighbor cmsg to fw */
+ return nfp_flower_add_tun_neigh_v4_encap(app_fw_flower, nfp_flow_meta,
+ tun, eth, ipv4);
+}
+
+static int
nfp_flow_action_raw_encap(struct nfp_app_fw_flower *app_fw_flower,
char *act_data,
char *actions,
@@ -2865,6 +2904,10 @@ struct nfp_pre_tun_entry {
ret = nfp_flow_action_geneve_encap_v6(app_fw_flower, act_data,
actions, raw_encap, nfp_flow_meta, tun);
break;
+ case NVGRE_V4_LEN:
+ ret = nfp_flow_action_nvgre_encap_v4(app_fw_flower, act_data,
+ actions, raw_encap, nfp_flow_meta, tun);
+ break;
default:
PMD_DRV_LOG(ERR, "Not an valid raw encap action conf.");
ret = -EINVAL;
--
1.8.3.1
* [PATCH v2 18/25] net/nfp: support IPv6 NVGRE encap flow action
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
` (16 preceding siblings ...)
2022-10-22 8:24 ` [PATCH v2 17/25] net/nfp: support IPv4 NVGRE encap " Chaoyong He
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 19/25] net/nfp: prepare for IPv4 GRE tunnel decap " Chaoyong He
` (8 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add offload support for the encap action of IPv6 NVGRE tunnels.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfp_flow.c | 45 +++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 45 insertions(+)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 00b4c6d..fd64da8 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -51,6 +51,10 @@ struct vxlan_data {
sizeof(struct rte_ipv4_hdr) + \
sizeof(struct rte_flow_item_gre) + \
sizeof(rte_be32_t)) /* gre key */
+#define NVGRE_V6_LEN (sizeof(struct rte_ether_hdr) + \
+ sizeof(struct rte_ipv6_hdr) + \
+ sizeof(struct rte_flow_item_gre) + \
+ sizeof(rte_be32_t)) /* gre key */
/* Process structure associated with a flow item */
struct nfp_flow_item_proc {
@@ -2867,6 +2871,43 @@ struct nfp_pre_tun_entry {
}
static int
+nfp_flow_action_nvgre_encap_v6(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct rte_flow_action_raw_encap *raw_encap,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ uint8_t tos;
+ const struct rte_ether_hdr *eth;
+ const struct rte_flow_item_ipv6 *ipv6;
+ const struct rte_flow_item_gre *gre;
+ struct nfp_fl_act_pre_tun *pre_tun;
+ struct nfp_fl_act_set_tun *set_tun;
+ size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
+
+ eth = (const struct rte_ether_hdr *)raw_encap->data;
+ ipv6 = (const struct rte_flow_item_ipv6 *)(eth + 1);
+ gre = (const struct rte_flow_item_gre *)(ipv6 + 1);
+
+ pre_tun = (struct nfp_fl_act_pre_tun *)actions;
+ memset(pre_tun, 0, act_pre_size);
+ nfp_flow_pre_tun_v6_process(pre_tun, ipv6->hdr.dst_addr);
+
+ set_tun = (struct nfp_fl_act_set_tun *)(act_data + act_pre_size);
+ memset(set_tun, 0, act_set_size);
+ tos = (ipv6->hdr.vtc_flow >> RTE_IPV6_HDR_TC_SHIFT) & 0xff;
+ nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GRE, 0,
+ ipv6->hdr.hop_limits, tos);
+ set_tun->tun_proto = gre->protocol;
+
+ /* Send the tunnel neighbor cmsg to fw */
+ return nfp_flower_add_tun_neigh_v6_encap(app_fw_flower, nfp_flow_meta,
+ tun, eth, ipv6);
+}
+
+static int
nfp_flow_action_raw_encap(struct nfp_app_fw_flower *app_fw_flower,
char *act_data,
char *actions,
@@ -2908,6 +2949,10 @@ struct nfp_pre_tun_entry {
ret = nfp_flow_action_nvgre_encap_v4(app_fw_flower, act_data,
actions, raw_encap, nfp_flow_meta, tun);
break;
+ case NVGRE_V6_LEN:
+ ret = nfp_flow_action_nvgre_encap_v6(app_fw_flower, act_data,
+ actions, raw_encap, nfp_flow_meta, tun);
+ break;
default:
PMD_DRV_LOG(ERR, "Not an valid raw encap action conf.");
ret = -EINVAL;
--
1.8.3.1
* [PATCH v2 19/25] net/nfp: prepare for IPv4 GRE tunnel decap flow action
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
` (17 preceding siblings ...)
2022-10-22 8:24 ` [PATCH v2 18/25] net/nfp: support IPv6 " Chaoyong He
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 20/25] net/nfp: prepare for IPv6 " Chaoyong He
` (7 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the related data structures and logic to prepare for
the decap action of IPv4 GRE tunnels.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_cmsg.h | 29 +++++++++++++++++++++++
drivers/net/nfp/nfp_flow.c | 40 +++++++++++++++++++++++++-------
drivers/net/nfp/nfp_flow.h | 3 +++
3 files changed, 63 insertions(+), 9 deletions(-)
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index 61f2f83..8bca7c2 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -575,6 +575,35 @@ struct nfp_flower_ipv6_udp_tun {
rte_be32_t tun_id;
};
+/*
+ * Flow Frame GRE TUNNEL --> Tunnel details (6W/24B)
+ * -----------------------------------------------------------------
+ * 3 2 1
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv4_addr_src |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv4_addr_dst |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | tun_flags | tos | ttl |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved | Ethertype |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Key |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ */
+struct nfp_flower_ipv4_gre_tun {
+ struct nfp_flower_tun_ipv4 ipv4;
+ rte_be16_t tun_flags;
+ struct nfp_flower_tun_ip_ext ip_ext;
+ rte_be16_t reserved1;
+ rte_be16_t ethertype;
+ rte_be32_t tun_key;
+ rte_be32_t reserved2;
+};
+
struct nfp_fl_act_head {
uint8_t jump_id;
uint8_t len_lw;
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index fd64da8..01c6e9a 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -564,6 +564,7 @@ struct nfp_pre_tun_entry {
uint32_t key_layer2 = 0;
struct nfp_flower_ipv4_udp_tun *udp4;
struct nfp_flower_ipv6_udp_tun *udp6;
+ struct nfp_flower_ipv4_gre_tun *gre4;
struct nfp_flower_meta_tci *meta_tci;
struct nfp_flower_ext_meta *ext_meta = NULL;
@@ -579,9 +580,15 @@ struct nfp_pre_tun_entry {
sizeof(struct nfp_flower_ipv6_udp_tun));
ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, udp6->ipv6.ipv6_dst);
} else {
- udp4 = (struct nfp_flower_ipv4_udp_tun *)(nfp_flow->payload.mask_data -
- sizeof(struct nfp_flower_ipv4_udp_tun));
- ret = nfp_tun_del_ipv4_off(repr->app_fw_flower, udp4->ipv4.dst);
+ if (key_layer2 & NFP_FLOWER_LAYER2_GRE) {
+ gre4 = (struct nfp_flower_ipv4_gre_tun *)(nfp_flow->payload.mask_data -
+ sizeof(struct nfp_flower_ipv4_gre_tun));
+ ret = nfp_tun_del_ipv4_off(repr->app_fw_flower, gre4->ipv4.dst);
+ } else {
+ udp4 = (struct nfp_flower_ipv4_udp_tun *)(nfp_flow->payload.mask_data -
+ sizeof(struct nfp_flower_ipv4_udp_tun));
+ ret = nfp_tun_del_ipv4_off(repr->app_fw_flower, udp4->ipv4.dst);
+ }
}
return ret;
@@ -1013,7 +1020,7 @@ struct nfp_pre_tun_entry {
ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
key_layer2 = rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2);
- if (key_layer2 & NFP_FLOWER_LAYER2_GENEVE)
+ if (key_layer2 & (NFP_FLOWER_LAYER2_GENEVE | NFP_FLOWER_LAYER2_GRE))
return true;
return false;
@@ -1102,11 +1109,15 @@ struct nfp_pre_tun_entry {
struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item_ipv4 *spec;
const struct rte_flow_item_ipv4 *mask;
+ struct nfp_flower_ext_meta *ext_meta = NULL;
struct nfp_flower_ipv4_udp_tun *ipv4_udp_tun;
+ struct nfp_flower_ipv4_gre_tun *ipv4_gre_tun;
spec = item->spec;
mask = item->mask ? item->mask : proc->mask_default;
meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
if (is_outer_layer && nfp_flow_is_tunnel(nfp_flow)) {
if (spec == NULL) {
@@ -1115,12 +1126,23 @@ struct nfp_pre_tun_entry {
}
hdr = is_mask ? &mask->hdr : &spec->hdr;
- ipv4_udp_tun = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
- ipv4_udp_tun->ip_ext.tos = hdr->type_of_service;
- ipv4_udp_tun->ip_ext.ttl = hdr->time_to_live;
- ipv4_udp_tun->ipv4.src = hdr->src_addr;
- ipv4_udp_tun->ipv4.dst = hdr->dst_addr;
+ if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_GRE)) {
+ ipv4_gre_tun = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
+
+ ipv4_gre_tun->ip_ext.tos = hdr->type_of_service;
+ ipv4_gre_tun->ip_ext.ttl = hdr->time_to_live;
+ ipv4_gre_tun->ipv4.src = hdr->src_addr;
+ ipv4_gre_tun->ipv4.dst = hdr->dst_addr;
+ } else {
+ ipv4_udp_tun = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
+
+ ipv4_udp_tun->ip_ext.tos = hdr->type_of_service;
+ ipv4_udp_tun->ip_ext.ttl = hdr->time_to_live;
+ ipv4_udp_tun->ipv4.src = hdr->src_addr;
+ ipv4_udp_tun->ipv4.dst = hdr->dst_addr;
+ }
} else {
if (spec == NULL) {
PMD_DRV_LOG(DEBUG, "nfp flow merge ipv4: no item->spec!");
diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h
index a6994e0..b0c2aaf 100644
--- a/drivers/net/nfp/nfp_flow.h
+++ b/drivers/net/nfp/nfp_flow.h
@@ -49,6 +49,9 @@
#define NFP_FL_SC_ACT_POPV 0x6A000000
#define NFP_FL_SC_ACT_NULL 0x00000000
+/* GRE Tunnel flags */
+#define NFP_FL_GRE_FLAG_KEY (1 << 2)
+
/* Action opcodes */
#define NFP_FL_ACTION_OPCODE_OUTPUT 0
#define NFP_FL_ACTION_OPCODE_PUSH_VLAN 1
--
1.8.3.1
* [PATCH v2 20/25] net/nfp: prepare for IPv6 GRE tunnel decap flow action
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
` (18 preceding siblings ...)
2022-10-22 8:24 ` [PATCH v2 19/25] net/nfp: prepare for IPv4 GRE tunnel decap " Chaoyong He
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 21/25] net/nfp: support IPv4 NVGRE flow item Chaoyong He
` (6 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the related data structures and logic to prepare for
the decap action of IPv6 GRE tunnels.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_cmsg.h | 41 ++++++++++++++++++++++++++
drivers/net/nfp/nfp_flow.c | 49 ++++++++++++++++++++++++--------
2 files changed, 78 insertions(+), 12 deletions(-)
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index 8bca7c2..a48da67 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -604,6 +604,47 @@ struct nfp_flower_ipv4_gre_tun {
rte_be32_t reserved2;
};
+/*
+ * Flow Frame GRE TUNNEL V6 --> Tunnel details (12W/48B)
+ * -----------------------------------------------------------------
+ * 3 2 1
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 31 - 0 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 63 - 32 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 95 - 64 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 127 - 96 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 31 - 0 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 63 - 32 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 95 - 64 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 127 - 96 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | tun_flags | tos | ttl |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved | Ethertype |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Key |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ */
+struct nfp_flower_ipv6_gre_tun {
+ struct nfp_flower_tun_ipv6 ipv6;
+ rte_be16_t tun_flags;
+ struct nfp_flower_tun_ip_ext ip_ext;
+ rte_be16_t reserved1;
+ rte_be16_t ethertype;
+ rte_be32_t tun_key;
+ rte_be32_t reserved2;
+};
+
struct nfp_fl_act_head {
uint8_t jump_id;
uint8_t len_lw;
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 01c6e9a..26d3854 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -565,6 +565,7 @@ struct nfp_pre_tun_entry {
struct nfp_flower_ipv4_udp_tun *udp4;
struct nfp_flower_ipv6_udp_tun *udp6;
struct nfp_flower_ipv4_gre_tun *gre4;
+ struct nfp_flower_ipv6_gre_tun *gre6;
struct nfp_flower_meta_tci *meta_tci;
struct nfp_flower_ext_meta *ext_meta = NULL;
@@ -576,9 +577,15 @@ struct nfp_pre_tun_entry {
key_layer2 = rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2);
if (key_layer2 & NFP_FLOWER_LAYER2_TUN_IPV6) {
- udp6 = (struct nfp_flower_ipv6_udp_tun *)(nfp_flow->payload.mask_data -
- sizeof(struct nfp_flower_ipv6_udp_tun));
- ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, udp6->ipv6.ipv6_dst);
+ if (key_layer2 & NFP_FLOWER_LAYER2_GRE) {
+ gre6 = (struct nfp_flower_ipv6_gre_tun *)(nfp_flow->payload.mask_data -
+ sizeof(struct nfp_flower_ipv6_gre_tun));
+ ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, gre6->ipv6.ipv6_dst);
+ } else {
+ udp6 = (struct nfp_flower_ipv6_udp_tun *)(nfp_flow->payload.mask_data -
+ sizeof(struct nfp_flower_ipv6_udp_tun));
+ ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, udp6->ipv6.ipv6_dst);
+ }
} else {
if (key_layer2 & NFP_FLOWER_LAYER2_GRE) {
gre4 = (struct nfp_flower_ipv4_gre_tun *)(nfp_flow->payload.mask_data -
@@ -1186,11 +1193,15 @@ struct nfp_pre_tun_entry {
struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item_ipv6 *spec;
const struct rte_flow_item_ipv6 *mask;
+ struct nfp_flower_ext_meta *ext_meta = NULL;
struct nfp_flower_ipv6_udp_tun *ipv6_udp_tun;
+ struct nfp_flower_ipv6_gre_tun *ipv6_gre_tun;
spec = item->spec;
mask = item->mask ? item->mask : proc->mask_default;
meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
if (is_outer_layer && nfp_flow_is_tunnel(nfp_flow)) {
if (spec == NULL) {
@@ -1199,15 +1210,29 @@ struct nfp_pre_tun_entry {
}
hdr = is_mask ? &mask->hdr : &spec->hdr;
- ipv6_udp_tun = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
-
- ipv6_udp_tun->ip_ext.tos = (hdr->vtc_flow &
- RTE_IPV6_HDR_TC_MASK) >> RTE_IPV6_HDR_TC_SHIFT;
- ipv6_udp_tun->ip_ext.ttl = hdr->hop_limits;
- memcpy(ipv6_udp_tun->ipv6.ipv6_src, hdr->src_addr,
- sizeof(ipv6_udp_tun->ipv6.ipv6_src));
- memcpy(ipv6_udp_tun->ipv6.ipv6_dst, hdr->dst_addr,
- sizeof(ipv6_udp_tun->ipv6.ipv6_dst));
+
+ if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_GRE)) {
+ ipv6_gre_tun = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off;
+
+ ipv6_gre_tun->ip_ext.tos = (hdr->vtc_flow &
+ RTE_IPV6_HDR_TC_MASK) >> RTE_IPV6_HDR_TC_SHIFT;
+ ipv6_gre_tun->ip_ext.ttl = hdr->hop_limits;
+ memcpy(ipv6_gre_tun->ipv6.ipv6_src, hdr->src_addr,
+ sizeof(ipv6_gre_tun->ipv6.ipv6_src));
+ memcpy(ipv6_gre_tun->ipv6.ipv6_dst, hdr->dst_addr,
+ sizeof(ipv6_gre_tun->ipv6.ipv6_dst));
+ } else {
+ ipv6_udp_tun = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
+
+ ipv6_udp_tun->ip_ext.tos = (hdr->vtc_flow &
+ RTE_IPV6_HDR_TC_MASK) >> RTE_IPV6_HDR_TC_SHIFT;
+ ipv6_udp_tun->ip_ext.ttl = hdr->hop_limits;
+ memcpy(ipv6_udp_tun->ipv6.ipv6_src, hdr->src_addr,
+ sizeof(ipv6_udp_tun->ipv6.ipv6_src));
+ memcpy(ipv6_udp_tun->ipv6.ipv6_dst, hdr->dst_addr,
+ sizeof(ipv6_udp_tun->ipv6.ipv6_dst));
+ }
} else {
if (spec == NULL) {
PMD_DRV_LOG(DEBUG, "nfp flow merge ipv6: no item->spec!");
--
1.8.3.1
* [PATCH v2 21/25] net/nfp: support IPv4 NVGRE flow item
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
` (19 preceding siblings ...)
2022-10-22 8:24 ` [PATCH v2 20/25] net/nfp: prepare for IPv6 " Chaoyong He
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 22/25] net/nfp: support IPv6 " Chaoyong He
` (5 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the corresponding logic to support offload of the
IPv4 NVGRE item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 2 +
drivers/net/nfp/nfp_flow.c | 99 +++++++++++++++++++++++++++++++++++++++-
2 files changed, 99 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index f6b658b..2acc809 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -29,6 +29,8 @@ Usage doc = Y
[rte_flow items]
eth = Y
geneve = Y
+gre = Y
+gre_key = Y
ipv4 = Y
ipv6 = Y
port_id = Y
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 26d3854..0a43bf3 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -820,6 +820,26 @@ struct nfp_pre_tun_entry {
return -EINVAL;
}
break;
+ case RTE_FLOW_ITEM_TYPE_GRE:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_GRE detected");
+ /* Clear IPv4 bits */
+ key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV4;
+ key_ls->tun_type = NFP_FL_TUN_GRE;
+ key_ls->key_layer |= NFP_FLOWER_LAYER_EXT_META;
+ key_ls->key_layer_two |= NFP_FLOWER_LAYER2_GRE;
+ key_ls->key_size += sizeof(struct nfp_flower_ext_meta);
+ if (outer_ip4_flag) {
+ key_ls->key_size += sizeof(struct nfp_flower_ipv4_gre_tun);
+ /*
+ * The outer l3 layer information is
+ * in `struct nfp_flower_ipv4_gre_tun`
+ */
+ key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
+ }
+ break;
+ case RTE_FLOW_ITEM_TYPE_GRE_KEY:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_GRE_KEY detected");
+ break;
default:
PMD_DRV_LOG(ERR, "Item type %d not supported.", item->type);
return -ENOTSUP;
@@ -1540,6 +1560,62 @@ struct nfp_pre_tun_entry {
return ret;
}
+static int
+nfp_flow_merge_gre(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ __rte_unused struct rte_flow *nfp_flow,
+ char **mbuf_off,
+ __rte_unused const struct rte_flow_item *item,
+ __rte_unused const struct nfp_flow_item_proc *proc,
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
+{
+ struct nfp_flower_ipv4_gre_tun *tun4;
+
+ /* NVGRE is the only supported GRE tunnel type */
+ tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
+ if (is_mask)
+ tun4->ethertype = rte_cpu_to_be_16(~0);
+ else
+ tun4->ethertype = rte_cpu_to_be_16(0x6558);
+
+ return 0;
+}
+
+static int
+nfp_flow_merge_gre_key(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ __rte_unused struct rte_flow *nfp_flow,
+ char **mbuf_off,
+ const struct rte_flow_item *item,
+ __rte_unused const struct nfp_flow_item_proc *proc,
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
+{
+ rte_be32_t tun_key;
+ const rte_be32_t *spec;
+ const rte_be32_t *mask;
+ struct nfp_flower_ipv4_gre_tun *tun4;
+
+ spec = item->spec;
+ if (spec == NULL) {
+ PMD_DRV_LOG(DEBUG, "nfp flow merge gre key: no item->spec!");
+ goto gre_key_end;
+ }
+
+ mask = item->mask ? item->mask : proc->mask_default;
+ tun_key = is_mask ? *mask : *spec;
+
+ tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
+ tun4->tun_key = tun_key;
+ tun4->tun_flags = rte_cpu_to_be_16(NFP_FL_GRE_FLAG_KEY);
+
+gre_key_end:
+ *mbuf_off += sizeof(struct nfp_flower_ipv4_gre_tun);
+
+ return 0;
+}
+
+const rte_be32_t nfp_flow_item_gre_key = 0xffffffff;
+
/* Graph of supported items and associated process function */
static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
[RTE_FLOW_ITEM_TYPE_END] = {
@@ -1580,7 +1656,8 @@ struct nfp_pre_tun_entry {
[RTE_FLOW_ITEM_TYPE_IPV4] = {
.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_TCP,
RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_SCTP),
+ RTE_FLOW_ITEM_TYPE_SCTP,
+ RTE_FLOW_ITEM_TYPE_GRE),
.mask_support = &(const struct rte_flow_item_ipv4){
.hdr = {
.type_of_service = 0xff,
@@ -1671,6 +1748,23 @@ struct nfp_pre_tun_entry {
.mask_sz = sizeof(struct rte_flow_item_geneve),
.merge = nfp_flow_merge_geneve,
},
+ [RTE_FLOW_ITEM_TYPE_GRE] = {
+ .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_GRE_KEY),
+ .mask_support = &(const struct rte_flow_item_gre){
+ .c_rsvd0_ver = RTE_BE16(0xa000),
+ .protocol = RTE_BE16(0xffff),
+ },
+ .mask_default = &rte_flow_item_gre_mask,
+ .mask_sz = sizeof(struct rte_flow_item_gre),
+ .merge = nfp_flow_merge_gre,
+ },
+ [RTE_FLOW_ITEM_TYPE_GRE_KEY] = {
+ .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH),
+ .mask_support = &nfp_flow_item_gre_key,
+ .mask_default = &nfp_flow_item_gre_key,
+ .mask_sz = sizeof(rte_be32_t),
+ .merge = nfp_flow_merge_gre_key,
+ },
};
static int
@@ -1728,7 +1822,8 @@ struct nfp_pre_tun_entry {
nfp_flow_is_tun_item(const struct rte_flow_item *item)
{
if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN ||
- item->type == RTE_FLOW_ITEM_TYPE_GENEVE)
+ item->type == RTE_FLOW_ITEM_TYPE_GENEVE ||
+ item->type == RTE_FLOW_ITEM_TYPE_GRE_KEY)
return true;
return false;
--
1.8.3.1
* [PATCH v2 22/25] net/nfp: support IPv6 NVGRE flow item
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
` (20 preceding siblings ...)
2022-10-22 8:24 ` [PATCH v2 21/25] net/nfp: support IPv4 NVGRE flow item Chaoyong He
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 23/25] net/nfp: support IPv4 NVGRE decap flow action Chaoyong He
` (4 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the corresponding logic to support offload of the
IPv6 NVGRE item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfp_flow.c | 73 +++++++++++++++++++++++++++++++++++++---------
1 file changed, 59 insertions(+), 14 deletions(-)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 0a43bf3..6f21c86 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -822,8 +822,9 @@ struct nfp_pre_tun_entry {
break;
case RTE_FLOW_ITEM_TYPE_GRE:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_GRE detected");
- /* Clear IPv4 bits */
+ /* Clear IPv4 and IPv6 bits */
key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV4;
+ key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV6;
key_ls->tun_type = NFP_FL_TUN_GRE;
key_ls->key_layer |= NFP_FLOWER_LAYER_EXT_META;
key_ls->key_layer_two |= NFP_FLOWER_LAYER2_GRE;
@@ -835,6 +836,17 @@ struct nfp_pre_tun_entry {
* in `struct nfp_flower_ipv4_gre_tun`
*/
key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
+ } else if (outer_ip6_flag) {
+ key_ls->key_layer_two |= NFP_FLOWER_LAYER2_TUN_IPV6;
+ key_ls->key_size += sizeof(struct nfp_flower_ipv6_gre_tun);
+ /*
+ * The outer l3 layer information is
+ * in `struct nfp_flower_ipv6_gre_tun`
+ */
+ key_ls->key_size -= sizeof(struct nfp_flower_ipv6);
+ } else {
+ PMD_DRV_LOG(ERR, "No outer IP layer for GRE tunnel.");
+ return -1;
}
break;
case RTE_FLOW_ITEM_TYPE_GRE_KEY:
@@ -1562,38 +1574,59 @@ struct nfp_pre_tun_entry {
static int
nfp_flow_merge_gre(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
- __rte_unused struct rte_flow *nfp_flow,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
__rte_unused const struct rte_flow_item *item,
__rte_unused const struct nfp_flow_item_proc *proc,
bool is_mask,
__rte_unused bool is_outer_layer)
{
+ struct nfp_flower_meta_tci *meta_tci;
+ struct nfp_flower_ext_meta *ext_meta;
struct nfp_flower_ipv4_gre_tun *tun4;
+ struct nfp_flower_ipv6_gre_tun *tun6;
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
/* NVGRE is the only supported GRE tunnel type */
- tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
- if (is_mask)
- tun4->ethertype = rte_cpu_to_be_16(~0);
- else
- tun4->ethertype = rte_cpu_to_be_16(0x6558);
+ if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_TUN_IPV6) {
+ tun6 = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off;
+ if (is_mask)
+ tun6->ethertype = rte_cpu_to_be_16(~0);
+ else
+ tun6->ethertype = rte_cpu_to_be_16(0x6558);
+ } else {
+ tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
+ if (is_mask)
+ tun4->ethertype = rte_cpu_to_be_16(~0);
+ else
+ tun4->ethertype = rte_cpu_to_be_16(0x6558);
+ }
return 0;
}
static int
nfp_flow_merge_gre_key(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
- __rte_unused struct rte_flow *nfp_flow,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
- __rte_unused const struct nfp_flow_item_proc *proc,
+ const struct nfp_flow_item_proc *proc,
bool is_mask,
__rte_unused bool is_outer_layer)
{
rte_be32_t tun_key;
const rte_be32_t *spec;
const rte_be32_t *mask;
+ struct nfp_flower_meta_tci *meta_tci;
+ struct nfp_flower_ext_meta *ext_meta;
struct nfp_flower_ipv4_gre_tun *tun4;
+ struct nfp_flower_ipv6_gre_tun *tun6;
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
spec = item->spec;
if (spec == NULL) {
@@ -1604,12 +1637,23 @@ struct nfp_pre_tun_entry {
mask = item->mask ? item->mask : proc->mask_default;
tun_key = is_mask ? *mask : *spec;
- tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
- tun4->tun_key = tun_key;
- tun4->tun_flags = rte_cpu_to_be_16(NFP_FL_GRE_FLAG_KEY);
+ if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_TUN_IPV6) {
+ tun6 = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off;
+ tun6->tun_key = tun_key;
+ tun6->tun_flags = rte_cpu_to_be_16(NFP_FL_GRE_FLAG_KEY);
+ } else {
+ tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
+ tun4->tun_key = tun_key;
+ tun4->tun_flags = rte_cpu_to_be_16(NFP_FL_GRE_FLAG_KEY);
+ }
gre_key_end:
- *mbuf_off += sizeof(struct nfp_flower_ipv4_gre_tun);
+ if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_TUN_IPV6)
+ *mbuf_off += sizeof(struct nfp_flower_ipv6_gre_tun);
+ else
+ *mbuf_off += sizeof(struct nfp_flower_ipv4_gre_tun);
return 0;
}
@@ -1675,7 +1719,8 @@ struct nfp_pre_tun_entry {
[RTE_FLOW_ITEM_TYPE_IPV6] = {
.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_TCP,
RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_SCTP),
+ RTE_FLOW_ITEM_TYPE_SCTP,
+ RTE_FLOW_ITEM_TYPE_GRE),
.mask_support = &(const struct rte_flow_item_ipv6){
.hdr = {
.vtc_flow = RTE_BE32(0x0ff00000),
--
1.8.3.1
* [PATCH v2 23/25] net/nfp: support IPv4 NVGRE decap flow action
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
` (21 preceding siblings ...)
2022-10-22 8:24 ` [PATCH v2 22/25] net/nfp: support IPv6 " Chaoyong He
@ 2022-10-22 8:24 ` Chaoyong He
2022-10-22 8:24 ` [PATCH v2 24/25] net/nfp: support IPv6 " Chaoyong He
` (3 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add offload support for the decap action of IPv4 NVGRE tunnels.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 1 +
drivers/net/nfp/nfp_flow.c | 8 ++++++--
2 files changed, 7 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index 2acc809..6400a7c 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -43,6 +43,7 @@ vxlan = Y
[rte_flow actions]
count = Y
drop = Y
+nvgre_decap = Y
nvgre_encap = Y
of_pop_vlan = Y
of_push_vlan = Y
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 6f21c86..c577278 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1609,7 +1609,7 @@ struct nfp_pre_tun_entry {
}
static int
-nfp_flow_merge_gre_key(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+nfp_flow_merge_gre_key(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
@@ -1617,6 +1617,7 @@ struct nfp_pre_tun_entry {
bool is_mask,
__rte_unused bool is_outer_layer)
{
+ int ret = 0;
rte_be32_t tun_key;
const rte_be32_t *spec;
const rte_be32_t *mask;
@@ -1646,6 +1647,8 @@ struct nfp_pre_tun_entry {
tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
tun4->tun_key = tun_key;
tun4->tun_flags = rte_cpu_to_be_16(NFP_FL_GRE_FLAG_KEY);
+ if (!is_mask)
+ ret = nfp_tun_add_ipv4_off(app_fw_flower, tun4->ipv4.dst);
}
gre_key_end:
@@ -1655,7 +1658,7 @@ struct nfp_pre_tun_entry {
else
*mbuf_off += sizeof(struct nfp_flower_ipv4_gre_tun);
- return 0;
+ return ret;
}
const rte_be32_t nfp_flow_item_gre_key = 0xffffffff;
@@ -3831,6 +3834,7 @@ struct nfp_pre_tun_entry {
*num_of_actions = 1;
break;
case RTE_FLOW_ITEM_TYPE_GENEVE:
+ case RTE_FLOW_ITEM_TYPE_GRE:
nfp_action->type = RTE_FLOW_ACTION_TYPE_RAW_DECAP;
*pmd_actions = nfp_action;
*num_of_actions = 1;
--
1.8.3.1
* [PATCH v2 24/25] net/nfp: support IPv6 NVGRE decap flow action
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add offload support for the decap action of IPv6 NVGRE tunnels.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfp_flow.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index c577278..4c6cfe4 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1643,6 +1643,8 @@ struct nfp_pre_tun_entry {
tun6 = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off;
tun6->tun_key = tun_key;
tun6->tun_flags = rte_cpu_to_be_16(NFP_FL_GRE_FLAG_KEY);
+ if (!is_mask)
+ ret = nfp_tun_add_ipv6_off(app_fw_flower, tun6->ipv6.ipv6_dst);
} else {
tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
tun4->tun_key = tun_key;
--
1.8.3.1
* [PATCH v2 25/25] net/nfp: support new tunnel solution
From: Chaoyong He @ 2022-10-22 8:24 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
The new version of the flower firmware application adds support for
a new tunnel solution.
It changes the structure of the tunnel neighbor table, and uses a feature
flag to indicate which tunnel solution is in use.
Add the logic to read the extra features from the firmware, and store
them in the app private structure.
Adjust the data structures and related logic so the PMD supports both
versions of the tunnel solution.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower.c | 14 ++++
drivers/net/nfp/flower/nfp_flower.h | 24 +++++++
drivers/net/nfp/flower/nfp_flower_cmsg.c | 4 ++
drivers/net/nfp/flower/nfp_flower_cmsg.h | 17 +++++
drivers/net/nfp/nfp_flow.c | 118 +++++++++++++++++++++++++------
5 files changed, 157 insertions(+), 20 deletions(-)
diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 41b0fe2..aa8199d 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -1074,6 +1074,8 @@
nfp_init_app_fw_flower(struct nfp_pf_dev *pf_dev)
{
int ret;
+ int err;
+ uint64_t ext_features;
unsigned int numa_node;
struct nfp_net_hw *pf_hw;
struct nfp_net_hw *ctrl_hw;
@@ -1115,6 +1117,18 @@
goto vnic_cleanup;
}
+ /* Read the extra features */
+ ext_features = nfp_rtsym_read_le(pf_dev->sym_tbl, "_abi_flower_extra_features",
+ &err);
+ if (err != 0) {
+ PMD_INIT_LOG(ERR, "Couldn't read extra features from fw");
+ ret = -EIO;
+ goto pf_cpp_area_cleanup;
+ }
+
+ /* Store the extra features */
+ app_fw_flower->ext_features = ext_features;
+
/* Fill in the PF vNIC and populate app struct */
app_fw_flower->pf_hw = pf_hw;
pf_hw->ctrl_bar = pf_dev->ctrl_bar;
diff --git a/drivers/net/nfp/flower/nfp_flower.h b/drivers/net/nfp/flower/nfp_flower.h
index 12a0fb5..ab8876d 100644
--- a/drivers/net/nfp/flower/nfp_flower.h
+++ b/drivers/net/nfp/flower/nfp_flower.h
@@ -6,6 +6,21 @@
#ifndef _NFP_FLOWER_H_
#define _NFP_FLOWER_H_
+/* Extra features bitmap. */
+#define NFP_FL_FEATS_GENEVE RTE_BIT64(0)
+#define NFP_FL_NBI_MTU_SETTING RTE_BIT64(1)
+#define NFP_FL_FEATS_GENEVE_OPT RTE_BIT64(2)
+#define NFP_FL_FEATS_VLAN_PCP RTE_BIT64(3)
+#define NFP_FL_FEATS_VF_RLIM RTE_BIT64(4)
+#define NFP_FL_FEATS_FLOW_MOD RTE_BIT64(5)
+#define NFP_FL_FEATS_PRE_TUN_RULES RTE_BIT64(6)
+#define NFP_FL_FEATS_IPV6_TUN RTE_BIT64(7)
+#define NFP_FL_FEATS_VLAN_QINQ RTE_BIT64(8)
+#define NFP_FL_FEATS_QOS_PPS RTE_BIT64(9)
+#define NFP_FL_FEATS_QOS_METER RTE_BIT64(10)
+#define NFP_FL_FEATS_DECAP_V2 RTE_BIT64(11)
+#define NFP_FL_FEATS_HOST_ACK RTE_BIT64(31)
+
/*
* Flower fallback and ctrl path always adds and removes
* 8 bytes of prepended data. Tx descriptors must point
@@ -55,9 +70,18 @@ struct nfp_app_fw_flower {
/* service id of ctrl vnic service */
uint32_t ctrl_vnic_id;
+ /* Flower extra features */
+ uint64_t ext_features;
+
struct nfp_flow_priv *flow_priv;
};
+static inline bool
+nfp_flower_support_decap_v2(const struct nfp_app_fw_flower *app_fw_flower)
+{
+ return app_fw_flower->ext_features & NFP_FL_FEATS_DECAP_V2;
+}
+
int nfp_init_app_fw_flower(struct nfp_pf_dev *pf_dev);
int nfp_secondary_init_app_fw_flower(struct nfp_cpp *cpp);
uint16_t nfp_flower_pf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c
index 76815cf..babdd8e 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.c
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c
@@ -263,6 +263,8 @@
}
msg_len = sizeof(struct nfp_flower_cmsg_tun_neigh_v4);
+ if (!nfp_flower_support_decap_v2(app_fw_flower))
+ msg_len -= sizeof(struct nfp_flower_tun_neigh_ext);
msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_TUN_NEIGH, msg_len);
memcpy(msg, payload, msg_len);
@@ -292,6 +294,8 @@
}
msg_len = sizeof(struct nfp_flower_cmsg_tun_neigh_v6);
+ if (!nfp_flower_support_decap_v2(app_fw_flower))
+ msg_len -= sizeof(struct nfp_flower_tun_neigh_ext);
msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V6, msg_len);
memcpy(msg, payload, msg_len);
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index a48da67..04601cb 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -136,6 +136,21 @@ struct nfp_flower_tun_neigh {
};
/*
+ * Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
+ * -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +---------------------------------------------------------------+
+ * 0 | VLAN_TPID | VLAN_ID |
+ * +---------------------------------------------------------------+
+ * 1 | HOST_CTX |
+ * +---------------------------------------------------------------+
+ */
+struct nfp_flower_tun_neigh_ext {
+ rte_be16_t vlan_tpid;
+ rte_be16_t vlan_tci;
+ rte_be32_t host_ctx;
+};
+
+/*
* NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V4
* Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
* -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
@@ -157,6 +172,7 @@ struct nfp_flower_cmsg_tun_neigh_v4 {
rte_be32_t dst_ipv4;
rte_be32_t src_ipv4;
struct nfp_flower_tun_neigh common;
+ struct nfp_flower_tun_neigh_ext ext;
};
/*
@@ -193,6 +209,7 @@ struct nfp_flower_cmsg_tun_neigh_v6 {
uint8_t dst_ipv6[16];
uint8_t src_ipv6[16];
struct nfp_flower_tun_neigh common;
+ struct nfp_flower_tun_neigh_ext ext;
};
#define NFP_TUN_PRE_TUN_RULE_DEL (1 << 0)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 4c6cfe4..01494e7 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -2384,8 +2384,10 @@ struct nfp_pre_tun_entry {
static int
nfp_flower_add_tun_neigh_v4_decap(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
struct rte_flow *nfp_flow)
{
+ bool exists = false;
struct nfp_fl_tun *tmp;
struct nfp_fl_tun *tun;
struct nfp_flow_priv *priv;
@@ -2419,11 +2421,17 @@ struct nfp_pre_tun_entry {
LIST_FOREACH(tmp, &priv->nn_list, next) {
if (memcmp(&tmp->payload, &tun->payload, sizeof(struct nfp_fl_tun_entry)) == 0) {
tmp->ref_cnt++;
- return 0;
+ exists = true;
+ break;
}
}
- LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+ if (exists) {
+ if (!nfp_flower_support_decap_v2(app_fw_flower))
+ return 0;
+ } else {
+ LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+ }
memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v4));
payload.dst_ipv4 = ipv4->ipv4_src;
@@ -2432,6 +2440,17 @@ struct nfp_pre_tun_entry {
memcpy(payload.common.src_mac, eth->mac_dst, RTE_ETHER_ADDR_LEN);
payload.common.port_id = port->in_port;
+ if (nfp_flower_support_decap_v2(app_fw_flower)) {
+ if (meta_tci->tci != 0) {
+ payload.ext.vlan_tci = meta_tci->tci;
+ payload.ext.vlan_tpid = 0x88a8;
+ } else {
+ payload.ext.vlan_tci = 0xffff;
+ payload.ext.vlan_tpid = 0xffff;
+ }
+ payload.ext.host_ctx = nfp_flow_meta->host_ctx_id;
+ }
+
return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
}
@@ -2492,8 +2511,10 @@ struct nfp_pre_tun_entry {
static int
nfp_flower_add_tun_neigh_v6_decap(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
struct rte_flow *nfp_flow)
{
+ bool exists = false;
struct nfp_fl_tun *tmp;
struct nfp_fl_tun *tun;
struct nfp_flow_priv *priv;
@@ -2527,11 +2548,17 @@ struct nfp_pre_tun_entry {
LIST_FOREACH(tmp, &priv->nn_list, next) {
if (memcmp(&tmp->payload, &tun->payload, sizeof(struct nfp_fl_tun_entry)) == 0) {
tmp->ref_cnt++;
- return 0;
+ exists = true;
+ break;
}
}
- LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+ if (exists) {
+ if (!nfp_flower_support_decap_v2(app_fw_flower))
+ return 0;
+ } else {
+ LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+ }
memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v6));
memcpy(payload.dst_ipv6, ipv6->ipv6_src, sizeof(payload.dst_ipv6));
@@ -2540,6 +2567,17 @@ struct nfp_pre_tun_entry {
memcpy(payload.common.src_mac, eth->mac_dst, RTE_ETHER_ADDR_LEN);
payload.common.port_id = port->in_port;
+ if (nfp_flower_support_decap_v2(app_fw_flower)) {
+ if (meta_tci->tci != 0) {
+ payload.ext.vlan_tci = meta_tci->tci;
+ payload.ext.vlan_tpid = 0x88a8;
+ } else {
+ payload.ext.vlan_tci = 0xffff;
+ payload.ext.vlan_tpid = 0xffff;
+ }
+ payload.ext.host_ctx = nfp_flow_meta->host_ctx_id;
+ }
+
return nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &payload);
}
@@ -2557,12 +2595,14 @@ struct nfp_pre_tun_entry {
static int
nfp_flower_del_tun_neigh(struct nfp_app_fw_flower *app_fw_flower,
- struct rte_flow *nfp_flow)
+ struct rte_flow *nfp_flow,
+ bool decap_flag)
{
int ret;
bool flag = false;
struct nfp_fl_tun *tmp;
struct nfp_fl_tun *tun;
+ struct nfp_flower_in_port *port;
tun = &nfp_flow->tun;
LIST_FOREACH(tmp, &app_fw_flower->flow_priv->nn_list, next) {
@@ -2590,6 +2630,40 @@ struct nfp_pre_tun_entry {
}
}
+ if (!decap_flag)
+ return 0;
+
+ port = (struct nfp_flower_in_port *)(nfp_flow->payload.unmasked_data +
+ sizeof(struct nfp_fl_rule_metadata) +
+ sizeof(struct nfp_flower_meta_tci));
+
+ if (tmp->payload.v6_flag != 0) {
+ struct nfp_flower_cmsg_tun_neigh_v6 nn_v6;
+ memset(&nn_v6, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v6));
+ memcpy(nn_v6.dst_ipv6, tmp->payload.dst.dst_ipv6, sizeof(nn_v6.dst_ipv6));
+ memcpy(nn_v6.src_ipv6, tmp->payload.src.src_ipv6, sizeof(nn_v6.src_ipv6));
+ memcpy(nn_v6.common.dst_mac, tmp->payload.dst_addr, RTE_ETHER_ADDR_LEN);
+ memcpy(nn_v6.common.src_mac, tmp->payload.src_addr, RTE_ETHER_ADDR_LEN);
+ nn_v6.common.port_id = port->in_port;
+
+ ret = nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &nn_v6);
+ } else {
+ struct nfp_flower_cmsg_tun_neigh_v4 nn_v4;
+ memset(&nn_v4, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v4));
+ nn_v4.dst_ipv4 = tmp->payload.dst.dst_ipv4;
+ nn_v4.src_ipv4 = tmp->payload.src.src_ipv4;
+ memcpy(nn_v4.common.dst_mac, tmp->payload.dst_addr, RTE_ETHER_ADDR_LEN);
+ memcpy(nn_v4.common.src_mac, tmp->payload.src_addr, RTE_ETHER_ADDR_LEN);
+ nn_v4.common.port_id = port->in_port;
+
+ ret = nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &nn_v4);
+ }
+
+ if (ret != 0) {
+ PMD_DRV_LOG(DEBUG, "Failed to send the nn entry");
+ return -EINVAL;
+ }
+
return 0;
}
@@ -2877,12 +2951,14 @@ struct nfp_pre_tun_entry {
goto free_entry;
}
- ret = nfp_flower_cmsg_pre_tunnel_rule(repr->app_fw_flower, nfp_flow_meta,
- nfp_mac_idx, true);
- if (ret != 0) {
- PMD_DRV_LOG(ERR, "Send pre tunnel rule failed");
- ret = -EINVAL;
- goto free_entry;
+ if (!nfp_flower_support_decap_v2(repr->app_fw_flower)) {
+ ret = nfp_flower_cmsg_pre_tunnel_rule(repr->app_fw_flower, nfp_flow_meta,
+ nfp_mac_idx, true);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Send pre tunnel rule failed");
+ ret = -EINVAL;
+ goto free_entry;
+ }
}
find_entry->ref_cnt = 1U;
@@ -2933,18 +3009,20 @@ struct nfp_pre_tun_entry {
return -EINVAL;
}
- ret = nfp_flower_cmsg_pre_tunnel_rule(app_fw_flower, nfp_flow_meta,
- nfp_mac_idx, false);
- if (ret != 0) {
- PMD_DRV_LOG(ERR, "Send pre tunnel rule failed");
- return -EINVAL;
+ if (!nfp_flower_support_decap_v2(app_fw_flower)) {
+ ret = nfp_flower_cmsg_pre_tunnel_rule(app_fw_flower, nfp_flow_meta,
+ nfp_mac_idx, false);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Send pre tunnel rule failed");
+ return -EINVAL;
+ }
}
meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4)
- return nfp_flower_add_tun_neigh_v4_decap(app_fw_flower, nfp_flow);
+ return nfp_flower_add_tun_neigh_v4_decap(app_fw_flower, nfp_flow_meta, nfp_flow);
else
- return nfp_flower_add_tun_neigh_v6_decap(app_fw_flower, nfp_flow);
+ return nfp_flower_add_tun_neigh_v6_decap(app_fw_flower, nfp_flow_meta, nfp_flow);
}
static int
@@ -3654,11 +3732,11 @@ struct nfp_pre_tun_entry {
break;
case NFP_FLOW_ENCAP:
/* Delete the entry from nn table */
- ret = nfp_flower_del_tun_neigh(app_fw_flower, nfp_flow);
+ ret = nfp_flower_del_tun_neigh(app_fw_flower, nfp_flow, false);
break;
case NFP_FLOW_DECAP:
/* Delete the entry from nn table */
- ret = nfp_flower_del_tun_neigh(app_fw_flower, nfp_flow);
+ ret = nfp_flower_del_tun_neigh(app_fw_flower, nfp_flow, true);
if (ret != 0)
goto exit;
--
1.8.3.1
* Re: [PATCH v2 00/25] add the extend rte_flow offload support of nfp PMD
From: Ferruh Yigit @ 2022-10-24 15:07 UTC (permalink / raw)
To: Chaoyong He; +Cc: oss-drivers, niklas.soderlund, dev
On 10/22/2022 9:24 AM, Chaoyong He wrote:
> This is the third patch series to add the support of rte_flow offload for
> nfp PMD, includes:
> Add the offload support of decap/encap of VXLAN
> Add the offload support of decap/encap of GENEVE
> Add the offload support of decap/encap of NVGRE
>
> Changes since v1
> - Delete the modificaiton of release note.
> - Modify the commit title.
> - Rebase to the lastest logic.
>
> Chaoyong He (25):
> net/nfp: support IPv4 VXLAN flow item
> net/nfp: support IPv6 VXLAN flow item
> net/nfp: prepare for IPv4 tunnel encap flow action
> net/nfp: prepare for IPv6 tunnel encap flow action
> net/nfp: support IPv4 VXLAN encap flow action
> net/nfp: support IPv6 VXLAN encap flow action
> net/nfp: prepare for IPv4 UDP tunnel decap flow action
> net/nfp: prepare for IPv6 UDP tunnel decap flow action
> net/nfp: support IPv4 VXLAN decap flow action
> net/nfp: support IPv6 VXLAN decap flow action
> net/nfp: support IPv4 GENEVE encap flow action
> net/nfp: support IPv6 GENEVE encap flow action
> net/nfp: support IPv4 GENEVE flow item
> net/nfp: support IPv6 GENEVE flow item
> net/nfp: support IPv4 GENEVE decap flow action
> net/nfp: support IPv6 GENEVE decap flow action
> net/nfp: support IPv4 NVGRE encap flow action
> net/nfp: support IPv6 NVGRE encap flow action
> net/nfp: prepare for IPv4 GRE tunnel decap flow action
> net/nfp: prepare for IPv6 GRE tunnel decap flow action
> net/nfp: support IPv4 NVGRE flow item
> net/nfp: support IPv6 NVGRE flow item
> net/nfp: support IPv4 NVGRE decap flow action
> net/nfp: support IPv6 NVGRE decap flow action
> net/nfp: support new tunnel solution
>
Hi Chaoyong,
'./devtools/check-doc-vs-code.sh' tools reports some inconsistency, can
you please fix it?
* Re: [PATCH v2 25/25] net/nfp: support new tunnel solution
From: Ferruh Yigit @ 2022-10-24 15:09 UTC (permalink / raw)
To: Chaoyong He, dev; +Cc: oss-drivers, niklas.soderlund
On 10/22/2022 9:24 AM, Chaoyong He wrote:
> The new version of flower firmware application add the support of
> a new tunnel solution.
>
> It changes the structure of tunnel neighbor, and use a feature flag
> to indicate which tunnel solution is used.
>
> Add the logic of read extra features from firmware, and store it in
> the app private structure.
>
> Adjust the data structure and related logic to make the PMD support
> both version of tunnel solutions.
>
> Signed-off-by: Chaoyong He<chaoyong.he@corigine.com>
> Reviewed-by: Niklas Söderlund<niklas.soderlund@corigine.com>
> ---
> drivers/net/nfp/flower/nfp_flower.c | 14 ++++
> drivers/net/nfp/flower/nfp_flower.h | 24 +++++++
> drivers/net/nfp/flower/nfp_flower_cmsg.c | 4 ++
> drivers/net/nfp/flower/nfp_flower_cmsg.h | 17 +++++
> drivers/net/nfp/nfp_flow.c | 118 +++++++++++++++++++++++++------
> 5 files changed, 157 insertions(+), 20 deletions(-)
>
> diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
> index 41b0fe2..aa8199d 100644
> --- a/drivers/net/nfp/flower/nfp_flower.c
> +++ b/drivers/net/nfp/flower/nfp_flower.c
> @@ -1074,6 +1074,8 @@
> nfp_init_app_fw_flower(struct nfp_pf_dev *pf_dev)
> {
> int ret;
> + int err;
> + uint64_t ext_features;
> unsigned int numa_node;
> struct nfp_net_hw *pf_hw;
> struct nfp_net_hw *ctrl_hw;
> @@ -1115,6 +1117,18 @@
> goto vnic_cleanup;
> }
>
> + /* Read the extra features */
> + ext_features = nfp_rtsym_read_le(pf_dev->sym_tbl, "_abi_flower_extra_features",
> + &err);
> + if (err != 0) {
> + PMD_INIT_LOG(ERR, "Couldn't read extra features from fw");
> + ret = -EIO;
> + goto pf_cpp_area_cleanup;
> + }
Hi Chaoyong,
It looks like there are two flavor of the flower firmware application,
one with 'extra_features' other without it.
Does this worth documenting in the driver documentation and the release
notes?
* RE: [PATCH v2 25/25] net/nfp: support new tunnel solution
From: Chaoyong He @ 2022-10-25 1:44 UTC (permalink / raw)
To: Ferruh Yigit, dev; +Cc: oss-drivers, Niklas Soderlund
> On 10/22/2022 9:24 AM, Chaoyong He wrote:
> > The new version of flower firmware application add the support of a
> > new tunnel solution.
> >
> > It changes the structure of tunnel neighbor, and use a feature flag to
> > indicate which tunnel solution is used.
> >
> > Add the logic of read extra features from firmware, and store it in
> > the app private structure.
> >
> > Adjust the data structure and related logic to make the PMD support
> > both version of tunnel solutions.
> >
> > Signed-off-by: Chaoyong He<chaoyong.he@corigine.com>
> > Reviewed-by: Niklas Söderlund<niklas.soderlund@corigine.com>
> > ---
> > drivers/net/nfp/flower/nfp_flower.c | 14 ++++
> > drivers/net/nfp/flower/nfp_flower.h | 24 +++++++
> > drivers/net/nfp/flower/nfp_flower_cmsg.c | 4 ++
> > drivers/net/nfp/flower/nfp_flower_cmsg.h | 17 +++++
> > drivers/net/nfp/nfp_flow.c | 118 +++++++++++++++++++++++++-
> -----
> > 5 files changed, 157 insertions(+), 20 deletions(-)
> >
> > diff --git a/drivers/net/nfp/flower/nfp_flower.c
> > b/drivers/net/nfp/flower/nfp_flower.c
> > index 41b0fe2..aa8199d 100644
> > --- a/drivers/net/nfp/flower/nfp_flower.c
> > +++ b/drivers/net/nfp/flower/nfp_flower.c
> > @@ -1074,6 +1074,8 @@
> > nfp_init_app_fw_flower(struct nfp_pf_dev *pf_dev)
> > {
> > int ret;
> > + int err;
> > + uint64_t ext_features;
> > unsigned int numa_node;
> > struct nfp_net_hw *pf_hw;
> > struct nfp_net_hw *ctrl_hw;
> > @@ -1115,6 +1117,18 @@
> > goto vnic_cleanup;
> > }
> >
> > + /* Read the extra features */
> > + ext_features = nfp_rtsym_read_le(pf_dev->sym_tbl,
> "_abi_flower_extra_features",
> > + &err);
> > + if (err != 0) {
> > + PMD_INIT_LOG(ERR, "Couldn't read extra features from fw");
> > + ret = -EIO;
> > + goto pf_cpp_area_cleanup;
> > + }
>
> Hi Chaoyong,
>
> It looks like there are two flavor of the flower firmware application, one with
> 'extra_features' other without it.
> Does this worth documenting in the driver documentation and the release
> notes?
Actually, they are just two different methods the flower firmware application
uses to process the tunnel decap action.
The old version of the flower firmware application needs the 'tunnel neighbor'
and 'pre-tunnel' tables to get the information required to decap the tunnel
packet.
The new version of the flower firmware application extends the 'tunnel
neighbor' table and no longer needs the 'pre-tunnel' table when decapping
the tunnel packet.
The app which uses rte_flow knows nothing about this difference.
So, should we still explain this in the documentation and the release notes?
I'm not quite sure how much detail we should expose in these documents.
* RE: [PATCH v2 00/25] add the extend rte_flow offload support of nfp PMD
From: Chaoyong He @ 2022-10-25 3:17 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: oss-drivers, Niklas Soderlund, dev
> On 10/22/2022 9:24 AM, Chaoyong He wrote:
> > This is the third patch series to add the support of rte_flow offload
> > for nfp PMD, includes:
> > Add the offload support of decap/encap of VXLAN Add the offload
> > support of decap/encap of GENEVE Add the offload support of
> > decap/encap of NVGRE
> >
> > Changes since v1
> > - Delete the modificaiton of release note.
> > - Modify the commit title.
> > - Rebase to the lastest logic.
> >
> > Chaoyong He (25):
> > net/nfp: support IPv4 VXLAN flow item
> > net/nfp: support IPv6 VXLAN flow item
> > net/nfp: prepare for IPv4 tunnel encap flow action
> > net/nfp: prepare for IPv6 tunnel encap flow action
> > net/nfp: support IPv4 VXLAN encap flow action
> > net/nfp: support IPv6 VXLAN encap flow action
> > net/nfp: prepare for IPv4 UDP tunnel decap flow action
> > net/nfp: prepare for IPv6 UDP tunnel decap flow action
> > net/nfp: support IPv4 VXLAN decap flow action
> > net/nfp: support IPv6 VXLAN decap flow action
> > net/nfp: support IPv4 GENEVE encap flow action
> > net/nfp: support IPv6 GENEVE encap flow action
> > net/nfp: support IPv4 GENEVE flow item
> > net/nfp: support IPv6 GENEVE flow item
> > net/nfp: support IPv4 GENEVE decap flow action
> > net/nfp: support IPv6 GENEVE decap flow action
> > net/nfp: support IPv4 NVGRE encap flow action
> > net/nfp: support IPv6 NVGRE encap flow action
> > net/nfp: prepare for IPv4 GRE tunnel decap flow action
> > net/nfp: prepare for IPv6 GRE tunnel decap flow action
> > net/nfp: support IPv4 NVGRE flow item
> > net/nfp: support IPv6 NVGRE flow item
> > net/nfp: support IPv4 NVGRE decap flow action
> > net/nfp: support IPv6 NVGRE decap flow action
> > net/nfp: support new tunnel solution
> >
>
> Hi Chaoyong,
>
> './devtools/check-doc-vs-code.sh' tools reports some inconsistency, can you
> please fix it?
Sorry, I can't quite understand the logic of this script.
I'm quite sure I have listed the items and actions we support in the file 'doc/guides/nics/features/nfp.ini'.
And it seems to report an inconsistency for every NIC which supports rte_flow?
[-- Attachment #2: full.log --]
[-- Type: application/octet-stream, Size: 32162 bytes --]
rte_flow doc out of sync for bnxt
item (typeof(item->type))bnxt_last)
item bnxt_end) {
item case any:
item case eth:
item case gre:
item case ipv4:
item case ipv6:
item case tcp:
item case udp:
item case vlan:
item case vxlan:
item case port_id: {
item case port_representor: {
item case represented_port: {
item if (tunnel->type !
item } else if (item->type
item any
item eth
item gre
item icmp
item icmp6
item ipv4
item ipv6
item port_id
item port_representor
item represented_port
item tcp
item udp
item vlan
item vxlan
action (typeof(action_item->type))bnxt_last)
action (typeof(action_item->type))bnxt_end) {
action bnxt_vxlan_decap) {
action case rss:
action action_item.type
action case count:
action case drop:
action case mark:
action case port_id: {
action case port_representor: {
action case represented_port: {
action case rss:
action case vf:
action count
action dec_ttl
action drop
action jump
action mark
action of_pop_vlan
action of_push_vlan
action of_set_vlan_pcp
action of_set_vlan_vid
action pf
action port_id
action port_representor
action represented_port
action rss
action sample
action set_ipv4_dst
action set_ipv4_src
action set_tp_dst
action set_tp_src
action vf
action vxlan_decap
action vxlan_encap
rte_flow doc out of sync for cnxk
item any
item arp_eth_ipv4
item e_tag
item esp
item eth
item geneve
item gre
item gre_key
item gtpc
item gtpu
item higig2
item icmp
item ipv4
item ipv6
item ipv6_ext
item ipv6_frag_ext
item mark
item mpls
item nvgre
item raw
item sctp
item tcp
item udp
item vlan
item vxlan
item vxlan_gpe
action meter) {
action actions->type !
action case count:
action case drop:
action case flag:
action case mark:
action case meter:
action case of_pop_vlan:
action case of_push_vlan:
action case of_set_vlan_pcp:
action case of_set_vlan_vid:
action case pf:
action case port_id:
action case queue:
action case represented_port:
action case rss:
action case security:
action case vf:
action drop)
action * flag vs rte_flow_action_type_mark
action * mark.
action if (action->type !
action if (action_fate_green
action if (action_fate_red
action if (action_fate_yellow
action count
action drop
action flag
action mark
action meter
action of_pop_vlan
action of_push_vlan
action of_set_vlan_pcp
action of_set_vlan_vid
action pf
action port_id
action queue
action represented_port
action rss
action security
action vf
rte_flow doc out of sync for cxgbe
item eth);
item ipv4);
item ipv6);
item tcp);
item "no eth "
item "no eth found");
item "no ipv4 "
item "no ipv6 "
item "no tcp or "
item "udp found");
item udp);
item i->type !
item eth
item ipv4
item ipv6
item tcp
item udp
item vlan
action case count:
action case drop:
action case mac_swap:
action case of_pop_vlan:
action case of_push_vlan:
action case of_set_vlan_pcp:
action case of_set_vlan_vid:
action case queue:
action case set_ipv4_dst:
action case set_ipv4_src:
action case set_ipv6_dst:
action case set_ipv6_src:
action case set_mac_dst:
action case set_mac_src:
action case set_tp_dst:
action case set_tp_src:
action case mac_swap:
action case of_pop_vlan:
action case of_push_vlan:
action case of_set_vlan_pcp:
action case of_set_vlan_vid:
action case set_ipv4_dst:
action case set_ipv4_src:
action case set_ipv6_dst:
action case set_ipv6_src:
action case set_mac_dst:
action case set_mac_src:
action case set_tp_dst:
action case set_tp_src:
action if (action->type !
action count
action drop
action mac_swap
action of_pop_vlan
action of_push_vlan
action of_set_vlan_pcp
action of_set_vlan_vid
action queue
action set_ipv4_dst
action set_ipv4_src
action set_ipv6_dst
action set_ipv6_src
action set_mac_dst
action set_mac_src
action set_tp_dst
action set_tp_src
rte_flow doc out of sync for dpaa2
item eth);
item eth);
item eth);
item ipv4)) {
item ipv6)) {
item case eth:
item case gre:
item case icmp:
item case ipv4:
item case ipv6:
item case raw:
item case sctp:
item case tcp:
item case udp:
item case vlan:
item eth)) {
item gre)) {
item icmp)) {
item proto.type
item sctp)) {
item tcp)) {
item udp)) {
item vlan)) {
item case eth:
item case gre:
item case icmp:
item case ipv4:
item case ipv6:
item case raw:
item case sctp:
item case tcp:
item case udp:
item case vlan:
item eth,
item gre,
item icmp,
item if (proto.type
item if (type
item ipv4,
item ipv6,
item proto.type
item sctp,
item tcp,
item udp,
item vlan,
item #define dpaa2_flow_item_type_generic_ip (meta + 1)
item eth
item gre
item icmp
item ipv4
item ipv6
item meta
item raw
item sctp
item tcp
item udp
item vlan
action flow->action
action case port_id:
action case queue:
action case represented_port:
action case rss:
action case port_id:
action case queue:
action case represented_port:
action case rss:
action if (action->type
action port_id,
action queue,
action represented_port,
action rss
action } else if (action->type
action drop
action port_id
action queue
action represented_port
action rss
rte_flow doc out of sync for e1000
item item->type !
item item->type
item } else if (item->type
item eth
item ipv4
item ipv6
item raw
item sctp
item tcp
item udp
action act->type !
action if (act->type
action drop
action queue
action rss
rte_flow doc out of sync for enic
item eth,
item geneve,
item ipv4,
item ipv6,
item udp,
item vlan,
item vxlan,
item if (item->type
item case gtp:
item case gtpc:
item case gtpu:
item case ipv4:
item case ipv6:
item if (item->type
item ecpri
item eth
item geneve
item geneve_opt
item gtp
item gtpc
item gtpu
item ipv4
item ipv6
item raw
item sctp
item tcp
item udp
item vlan
item vxlan
action case count:
action case count: {
action case drop: {
action case flag: {
action case jump: {
action case mark: {
action case of_pop_vlan: {
action case of_push_vlan: {
action case of_set_vlan_pcp: {
action case of_set_vlan_vid: {
action case passthru: {
action case port_id: {
action case port_representor: {
action case queue: {
action case represented_port: {
action case rss: {
action case vxlan_decap: {
action case vxlan_encap: {
action count,
action drop,
action flag,
action jump,
action mark,
action of_pop_vlan,
action of_push_vlan,
action of_set_vlan_pcp,
action of_set_vlan_vid,
action passthru,
action port_id,
action queue,
action rss,
action vxlan_decap,
action vxlan_encap,
action count
action drop
action flag
action jump
action mark
action of_pop_vlan
action of_push_vlan
action of_set_vlan_pcp
action of_set_vlan_vid
action passthru
action port_id
action port_representor
action queue
action represented_port
action rss
action vxlan_decap
action vxlan_encap
rte_flow doc out of sync for hinic
item item->type !
item if (item->type
item any
item eth
item icmp
item icmp6
item ipv4
item ipv6
item tcp
item udp
item vxlan
action act->type !
action if (act->type
action drop
action queue
rte_flow doc out of sync for hns3
item if (pattern->type
item case eth:
item case geneve:
item case ipv4:
item case ipv6:
item case nvgre:
item case sctp:
item case tcp:
item case udp:
item case vlan:
item case vxlan:
item case vxlan_gpe:
item eth,
item geneve,
item icmp
item icmp,
item if (item->type
item if (type
item ipv4,
item ipv6
item ipv6,
item nvgre,
item sctp,
item tcp,
item udp,
item vlan
item vlan,
item vxlan,
item vxlan_gpe
item eth
item geneve
item icmp
item ipv4
item ipv6
item nvgre
item sctp
item tcp
item udp
item vlan
item vxlan
item vxlan_gpe
action case count:
action case drop:
action case flag:
action case mark:
action case queue:
action case rss:
action if (actions->type
action if (action->type !
action count
action drop
action flag
action mark
action queue
action rss
rte_flow doc out of sync for i40e
item bit_ull(ah))
item bit_ull(esp) | \
item bit_ull(gtpc))
item bit_ull(ipv6) | \
item bit_ull(ipv6))
item bit_ull(ipv6_frag_ext))
item bit_ull(l2tpv3oip) |\
item bit_ull(sctp) | \
item bit_ull(udp) | \
item bit_ull(vlan))
item if (l3
item } else if (l3
item if (l3
item if (next_type
item } else if (l3
item else if (item_type
item else if (l3
item if (item_type
item } else if (l3
item case esp:
item case eth:
item case gre:
item case gtpc:
item case gtpu:
item case ipv4:
item case ipv6:
item case l2tpv3oip:
item case mpls:
item case nvgre:
item case raw:
item case sctp:
item case tcp:
item case udp:
item case vlan:
item case vxlan:
item if (item->last && item->type !
item if (last_item_type
item case esp:
item case gtpc:
item case gtpu:
item case l2tpv3oip:
item esp,
item eth,
item gre,
item gtpc,
item gtpu,
item if (pattern_hdrs && last_item_type !
item ipv4,
item ipv6,
item l2tpv3oip,
item mpls,
item nvgre,
item raw,
item sctp,
item tcp,
item udp,
item vlan,
item vxlan,
item #define i40e_hash_eth_next_allow (bit_ull(ipv4) | \
item #define i40e_hash_gtpu_next_allow (bit_ull(ipv4) | \
item #define i40e_hash_ip_next_allow (bit_ull(tcp) | \
item #define i40e_hash_udp_next_allow (bit_ull(gtpu) | \
item #define i40e_hash_void_next_allow bit_ull(eth)
item ah
item esp
item eth
item gre
item gtpc
item gtpu
item ipv4
item ipv6
item ipv6_frag_ext
item l2tpv3oip
item mpls
item nvgre
item raw
item sctp
item tcp
item udp
item vlan
item vxlan
action case rss:
action case drop:
action case flag:
action case mark:
action case passthru:
action case queue:
action case rss:
action if ((actions + i)->type
action drop
action flag
action mark
action passthru
action pf
action queue
action rss
action vf
rte_flow doc out of sync for iavf
item ipv6_frag_ext)) {
item else if (l3
item if (l3
item case ah:
item case ecpri:
item case esp:
item case eth:
item case gre:
item case gtp_psc:
item case gtpu:
item case ipv4:
item case ipv6:
item case ipv6_frag_ext:
item case l2tpv2:
item case l2tpv3oip:
item case pfcp:
item case ppp:
item case raw:
item case raw: {
item case sctp:
item case tcp:
item case udp:
item case vlan:
item if (item->last && !(item_type
item ah,
item arp_eth_ipv4,
item ecpri,
item esp,
item eth,
item gre,
item gtp_psc,
item gtpc,
item gtpu,
item icmp,
item icmp6,
item ipv4,
item ipv6,
item ipv6_frag_ext,
item l2tpv2,
item l2tpv3oip,
item pfcp,
item ppp,
item raw,
item sctp,
item tcp,
item udp,
item vlan,
item ah
item arp_eth_ipv4
item ecpri
item esp
item eth
item gre
item gtp_psc
item gtpc
item gtpu
item icmp
item icmp6
item ipv4
item ipv6
item ipv6_frag_ext
item l2tpv2
item l2tpv3oip
item pfcp
item ppp
item raw
item sctp
item tcp
item udp
item vlan
action case count:
action case drop:
action case mark:
action case passthru:
action case port_representor:
action case queue:
action case rss:
action if (act->type !
action count
action drop
action mark
action passthru
action port_representor
action queue
action rss
action security
rte_flow doc out of sync for ice
item ipv6_frag_ext)) {
item l4
item if (l3
item if (l4
item l4
item l4
item } else if (l3
item item->type
item case ah:
item case any:
item case esp:
item case eth:
item case gtp_psc:
item case gtpu:
item case ipv4:
item case ipv6:
item case ipv6_frag_ext:
item case l2tpv3oip:
item case nvgre:
item case pfcp:
item case pppoe_proto_id:
item case pppoed:
item case pppoes:
item case raw:
item case raw: {
item case sctp:
item case tcp:
item case udp:
item case vlan:
item case vxlan:
item if (item->last && !(item_type
item ah,
item any,
item arp_eth_ipv4,
item esp,
item eth,
item gtp_psc,
item gtpu,
item icmp,
item icmp6,
item ipv4,
item ipv6,
item ipv6_frag_ext,
item l2tpv3oip,
item nvgre,
item pfcp,
item pppoe_proto_id,
item pppoed,
item pppoes,
item raw,
item sctp,
item tcp,
item udp,
item vlan,
item vxlan,
item ah
item any
item arp_eth_ipv4
item esp
item eth
item gtp_psc
item gtpu
item icmp
item icmp6
item ipv4
item ipv6
item ipv6_frag_ext
item l2tpv3oip
item nvgre
item pfcp
item pppoe_proto_id
item pppoed
item pppoes
item raw
item sctp
item tcp
item udp
item vlan
item vxlan
action case count:
action case drop:
action case mark:
action case passthru:
action case port_representor:
action case queue:
action case represented_port:
action case rss:
action if (act->type !
action count
action drop
action mark
action passthru
action port_representor
action queue
action represented_port
action rss
rte_flow doc out of sync for igc
item eth
item ipv4
item ipv6
item tcp
item udp
action case queue:
action case rss:
action queue
action rss
rte_flow doc out of sync for ipn3ke
item eth),
item ipv4,
item mpls),
item nvgre),
item tcp),
item udp),
item udp,
item vlan),
item case eth:
item case ipv4:
item case mpls:
item case nvgre:
item case tcp:
item case udp:
item case vlan:
item case vxlan:
item * eth
item * ipv4
item * mpls
item * nvgre
item * tcp
item * udp
item * vlan
item * vxlan
item eth
item ipv4
item mpls
item nvgre
item tcp
item udp
item vlan
item vxlan
action case drop:
action case mark:
action drop
action mark
rte_flow doc out of sync for ixgbe
item item->type
item item->type !
item item->type !
item item->type !
item (item->type !
item if (item->type
item if (next->type !
item while (item->type !
item item->type
item if ((item->type !
item e_tag
item eth
item fuzzy
item ipv4
item ipv6
item nvgre
item raw
item sctp
item tcp
item udp
item vlan
item vxlan
action act->type !
action act->type !
action * special case for flow action type security
action if ((act->type !
action * special case for flow action type security.
action drop
action mark
action pf
action queue
action rss
action security
action vf
rte_flow doc out of sync for mlx4
item ipv4),
item tcp),
item .type
item .type
item if ((item->type !
item eth
item ipv4
item tcp
item udp
item vlan
action .type
action case drop:
action case queue:
action case rss:
action drop
action queue
action rss
rte_flow doc out of sync for mlx5
item mlx5_vlan;
item .type
item mlx5_tag,
item mlx5_tunnel) {
item mlx5_tag,
item mlx5_tag;
item mlx5_tx_queue,
item mlx5_tunnel;
item *all_ports
item if (items->type
item node->type
item eth : rte_flow_item_type_end;
item vlan : rte_flow_item_type_end;
item mlx5_tag;
item item->type
item type
item case conntrack:
item case ecpri:
item case esp:
item case eth:
item case flex:
item case geneve:
item case geneve_opt:
item case gre:
item case gre_key:
item case gre_option:
item case gtp:
item case gtp_psc:
item case icmp6:
item case icmp:
item case integrity:
item case ipv4:
item case ipv6:
item case ipv6_frag_ext:
item case mark:
item case meta:
item case mlx5_tag:
item case mlx5_tunnel:
item case mlx5_tx_queue:
item case mpls:
item case nvgre:
item case port_id:
item case represented_port:
item case tag:
item case tcp:
item case udp:
item case vlan:
item case vxlan:
item case vxlan_gpe:
item if (item && item->type
item if (tunnel_item->type
item if (type
item mlx5_assert(prev_item->type
item item->type !
item * represented_port and is meaningless without them.
item case esp:
item case eth:
item case geneve:
item case gre:
item case gre_key:
item case gtp:
item case icmp6:
item case icmp:
item case ipv4:
item case ipv6:
item case ipv6_frag_ext:
item case mpls:
item case nvgre:
item case tcp:
item case udp:
item case vlan:
item case vxlan:
item case vxlan_gpe:
item dport
item esw_mgr_port
item if (item->type
item if (item->type !
item item.type
item mlx5_assert(item->type
item mlx5_end
item mlx5_tag,
item mlx5_tunnel,
item mlx5_tx_queue,
item mlx5_vlan,
item conntrack
item ecpri
item esp
item eth
item flex
item geneve
item geneve_opt
item gre
item gre_key
item gre_option
item gtp
item gtp_psc
item icmp
item icmp6
item integrity
item ipv4
item ipv6
item ipv6_frag_ext
item mark
item meta
item mpls
item nvgre
item port_id
item represented_port
item tag
item tcp
item udp
item vlan
item vxlan
item vxlan_gpe
action mlx5_count;
action mlx5_tag;
action of_set_vlan_pcp);
action of_set_vlan_vid);
action .type
action conntrack;
action inc_tcp_ack ?
action inc_tcp_seq ?
action mlx5_age;
action mlx5_copy_mreg,
action sample,
action set_ipv4_src ?
action set_ipv6_src ?
action set_mac_src ?
action set_tp_src ?
action set_ttl ?
action mlx5_jump;
action mlx5_tunnel_set) {
action .type
action if (act->type
action jump,
action mlx5_copy_mreg,
action mlx5_default_miss,
action mlx5_mark,
action mlx5_tag,
action mlx5_tag;
action rss;
action mlx5_tunnel_set;
action mlx5_tag;
action case drop:
action case jump:
action case mark:
action case meter:
action case modify_field:
action case nvgre_encap:
action case of_push_vlan:
action case of_set_vlan_vid:
action case port_id:
action case queue:
action case raw_decap:
action case raw_encap:
action case represented_port:
action case rss:
action case set_tag:
action case vxlan_encap:
action if (action->type !
action if (actions->type
action if (actions->type !
action if (ptr->type
action (enum rte_flow_action_type)mlx5_rss,
action (action + 1, of_set_vlan_pcp)))
action (action + 1, of_set_vlan_vid)))
action (enum rte_flow_action_type)mlx5_count;
action .type
action act_data.type
action case age:
action case conntrack:
action case count:
action case dec_tcp_ack:
action case dec_tcp_seq:
action case dec_ttl:
action case drop:
action case flag:
action case inc_tcp_ack:
action case inc_tcp_seq:
action case jump:
action case mark:
action case meter:
action case mlx5_age:
action case mlx5_copy_mreg:
action case mlx5_count:
action case mlx5_default_miss:
action case mlx5_jump:
action case mlx5_mark:
action case mlx5_rss:
action case mlx5_tag:
action case mlx5_tunnel_set:
action case modify_field:
action case nvgre_decap:
action case nvgre_encap:
action case of_pop_vlan:
action case of_push_vlan:
action case of_set_vlan_pcp:
action case of_set_vlan_vid:
action case port_id:
action case queue:
action case raw_decap:
action case raw_encap:
action case represented_port:
action case rss:
action case sample:
action case set_ipv4_dscp:
action case set_ipv4_dst:
action case set_ipv4_src:
action case set_ipv6_dscp:
action case set_ipv6_dst:
action case set_ipv6_src:
action case set_mac_dst:
action case set_mac_src:
action case set_meta:
action case set_tag:
action case set_tp_dst:
action case set_tp_src:
action case set_ttl:
action case vxlan_decap:
action case vxlan_encap:
action if (qrss->type
action } else if (qrss->type
action /* count mlx5_tag. */
action case age:
action case conntrack:
action case count:
action case dec_tcp_ack:
action case dec_tcp_seq:
action case dec_ttl:
action case flag:
action case inc_tcp_ack:
action case inc_tcp_seq:
action case mark:
action case mlx5_rss:
action case modify_field:
action case nvgre_decap:
action case nvgre_encap:
action case of_pop_vlan:
action case of_push_vlan:
action case of_set_vlan_pcp:
action case of_set_vlan_vid:
action case port_id:
action case port_id: {
action case queue:
action case raw_decap:
action case raw_encap:
action case represented_port:
action case represented_port: {
action case rss:
action case set_ipv4_dscp:
action case set_ipv4_dst:
action case set_ipv4_src:
action case set_ipv6_dscp:
action case set_ipv6_dst:
action case set_ipv6_src:
action case set_mac_dst:
action case set_mac_src:
action case set_meta:
action case set_tag:
action case set_tp_dst:
action case set_tp_src:
action case set_ttl:
action case vxlan_decap:
action case vxlan_encap:
action for (; app_actions->type !
action if (action->type !
action } else if (action->type
action * see @count
action age
action conntrack
action count
action dec_tcp_ack
action dec_tcp_seq
action dec_ttl
action drop
action flag
action inc_tcp_ack
action inc_tcp_seq
action jump
action mark
action meter
action modify_field
action nvgre_decap
action nvgre_encap
action of_pop_vlan
action of_push_vlan
action of_set_vlan_pcp
action of_set_vlan_vid
action port_id
action queue
action raw_decap
action raw_encap
action represented_port
action rss
action sample
action set_ipv4_dscp
action set_ipv4_dst
action set_ipv4_src
action set_ipv6_dscp
action set_ipv6_dst
action set_ipv6_src
action set_mac_dst
action set_mac_src
action set_meta
action set_tag
action set_tp_dst
action set_tp_src
action set_ttl
action vxlan_decap
action vxlan_encap
rte_flow doc out of sync for mvpp2
item { eth, mrvl_parse_eth },
item { ipv4, mrvl_parse_ip4 },
item { ipv6, mrvl_parse_ip6 },
item { raw, mrvl_parse_raw },
item { tcp, mrvl_parse_tcp },
item { udp, mrvl_parse_udp },
item { vlan, mrvl_parse_vlan },
item eth
item ipv4
item ipv6
item raw
item tcp
item udp
item vlan
action if (action->type
action drop
action meter
action queue
rte_flow doc out of sync for nfp
item geneve),
item gre),
item ipv4,
item ipv6),
item pmd_drv_log(debug, "eth detected");
item pmd_drv_log(debug, "geneve detected");
item pmd_drv_log(debug, "gre detected");
item pmd_drv_log(debug, "gre_key detected");
item pmd_drv_log(debug, "ipv4 detected");
item pmd_drv_log(debug, "ipv6 detected");
item pmd_drv_log(debug, "port_id detected");
item pmd_drv_log(debug, "sctp detected");
item pmd_drv_log(debug, "tcp detected");
item pmd_drv_log(debug, "udp detected");
item pmd_drv_log(debug, "vlan detected");
item pmd_drv_log(debug, "vxlan detected");
item sctp,
item udp,
item case eth:
item case geneve:
item case gre:
item case gre_key:
item case ipv4:
item case ipv6:
item case port_id:
item case sctp:
item case tcp:
item case udp:
item case vlan:
item case vxlan:
item case geneve:
item case gre:
item case vxlan:
item if (item->type
item eth
item geneve
item gre
item gre_key
item ipv4
item ipv6
item port_id
item sctp
item tcp
item udp
item vlan
item vxlan
action " of_push_vlan");
action " port_id");
action " raw_encap");
action " vxlan_encap");
action * of_set_vlan_pcp and
action * of_set_vlan_vid
action ((action + 2)->type !
action pmd_drv_log(debug, "count detected");
action pmd_drv_log(debug, "drop detected");
action pmd_drv_log(debug, "jump detected");
action pmd_drv_log(debug, "of_pop_vlan detected");
action pmd_drv_log(debug, "of_push_vlan detected");
action pmd_drv_log(debug, "of_set_vlan_pcp detected");
action pmd_drv_log(debug, "of_set_vlan_vid detected");
action pmd_drv_log(debug, "port_id detected");
action pmd_drv_log(debug, "process count");
action pmd_drv_log(debug, "process drop");
action pmd_drv_log(debug, "process jump");
action pmd_drv_log(debug, "process of_pop_vlan");
action pmd_drv_log(debug, "process of_push_vlan");
action pmd_drv_log(debug, "process port_id");
action pmd_drv_log(debug, "process raw_encap");
action pmd_drv_log(debug, "process set_ipv4_dscp");
action pmd_drv_log(debug, "process set_ipv4_dst");
action pmd_drv_log(debug, "process set_ipv4_src");
action pmd_drv_log(debug, "process set_ipv6_dscp");
action pmd_drv_log(debug, "process set_ipv6_dst");
action pmd_drv_log(debug, "process set_ipv6_src");
action pmd_drv_log(debug, "process set_mac_dst");
action pmd_drv_log(debug, "process set_mac_src");
action pmd_drv_log(debug, "process set_tp_dst");
action pmd_drv_log(debug, "process set_tp_src");
action pmd_drv_log(debug, "process set_ttl");
action pmd_drv_log(debug, "process tunnel_decap");
action pmd_drv_log(debug, "process vxlan_encap");
action pmd_drv_log(debug, "raw_decap detected");
action pmd_drv_log(debug, "raw_encap detected");
action pmd_drv_log(debug, "set_ipv4_dscp detected");
action pmd_drv_log(debug, "set_ipv4_dst detected");
action pmd_drv_log(debug, "set_ipv4_src detected");
action pmd_drv_log(debug, "set_ipv6_dscp detected");
action pmd_drv_log(debug, "set_ipv6_dst detected");
action pmd_drv_log(debug, "set_ipv6_src detected");
action pmd_drv_log(debug, "set_mac_dst detected");
action pmd_drv_log(debug, "set_mac_src detected");
action pmd_drv_log(debug, "set_tp_dst detected");
action pmd_drv_log(debug, "set_tp_src detected");
action pmd_drv_log(debug, "set_ttl detected");
action pmd_drv_log(debug, "vxlan_decap detected");
action pmd_drv_log(debug, "vxlan_encap detected");
action case count:
action case drop:
action case jump:
action case of_pop_vlan:
action case of_push_vlan:
action case of_set_vlan_pcp:
action case of_set_vlan_vid:
action case port_id:
action case raw_decap:
action case raw_encap:
action case set_ipv4_dscp:
action case set_ipv4_dst:
action case set_ipv4_src:
action case set_ipv6_dscp:
action case set_ipv6_dst:
action case set_ipv6_src:
action case set_mac_dst:
action case set_mac_src:
action case set_tp_dst:
action case set_tp_src:
action case set_ttl:
action case vxlan_decap:
action case vxlan_encap:
action if (((action + 1)->type !
action count
action drop
action nvgre_decap
action nvgre_encap
action of_pop_vlan
action of_push_vlan
action of_set_vlan_pcp
action of_set_vlan_vid
action port_id
action raw_decap
action raw_encap
action set_ipv4_dscp
action set_ipv4_dst
action set_ipv4_src
action set_ipv6_dscp
action set_ipv6_dst
action set_ipv6_src
action set_mac_dst
action set_mac_src
action set_tp_dst
action set_tp_src
action set_ttl
action vxlan_decap
action vxlan_encap
rte_flow doc out of sync for qede
item case ipv4:
item case ipv6:
item case tcp:
item case udp:
item ipv4
item ipv6
item tcp
item udp
action case drop:
action case queue:
action drop
action queue
rte_flow doc out of sync for sfc
item rte_bit64(ipv4) |
item rte_bit64(ipv6);
item rte_bit64(vlan), 0
item sfc_build_set_overflow(eth,
item sfc_build_set_overflow(ipv4,
item sfc_build_set_overflow(ipv6,
item sfc_build_set_overflow(udp,
item sfc_build_set_overflow(vlan,
item sfc_build_set_overflow(vxlan,
item case eth:
item case geneve:
item case ipv4:
item case ipv6:
item case mark:
item case nvgre:
item case udp:
item case vlan:
item case vxlan:
item item->type !
item case port_representor:
item case represented_port:
item exp_items
item ft_ctx->item.type
item if (item->type
item if (tunnel->type !
item eth
item geneve
item ipv4
item ipv6
item mark
item nvgre
item port_id
item port_representor
item pppoed
item pppoes
item represented_port
item tcp
item udp
item vlan
item vxlan
action (1ul << drop);
action (1ul << flag);
action (1ul << rss) |
action if (action->type !
action sfc_build_set_overflow(drop,
action sfc_build_set_overflow(flag,
action sfc_build_set_overflow(mark,
action sfc_build_set_overflow(queue,
action sfc_build_set_overflow(rss,
action case count:
action case drop:
action case flag:
action case jump:
action case mark:
action case queue:
action case rss:
action if (action->type
action sfc_build_set_overflow(count,
action sfc_build_set_overflow(dec_ttl,
action sfc_build_set_overflow(drop,
action sfc_build_set_overflow(flag,
action sfc_build_set_overflow(mark,
action sfc_build_set_overflow(of_dec_nw_ttl,
action sfc_build_set_overflow(of_pop_vlan,
action sfc_build_set_overflow(of_push_vlan,
action sfc_build_set_overflow(of_set_vlan_pcp,
action sfc_build_set_overflow(of_set_vlan_vid,
action sfc_build_set_overflow(pf,
action sfc_build_set_overflow(port_id,
action sfc_build_set_overflow(port_representor,
action sfc_build_set_overflow(represented_port,
action sfc_build_set_overflow(set_mac_dst,
action sfc_build_set_overflow(set_mac_src,
action sfc_build_set_overflow(vf,
action sfc_build_set_overflow(vxlan_decap,
action sfc_build_set_overflow(vxlan_encap,
action action->type !
action case count:
action case dec_ttl:
action case drop:
action case flag:
action case jump:
action case mark:
action case of_dec_nw_ttl:
action case of_pop_vlan:
action case of_push_vlan:
action case of_set_vlan_pcp:
action case of_set_vlan_vid:
action case pf:
action case port_id:
action case port_representor:
action case represented_port:
action case set_mac_dst:
action case set_mac_src:
action case vf:
action case vxlan_decap:
action case vxlan_encap:
action const uint32_t fate_actions_mask
action const uint32_t mark_actions_mask
action ft_ctx->action.type
action count
action dec_ttl
action drop
action flag
action jump
action mark
action of_dec_nw_ttl
action of_pop_vlan
action of_push_vlan
action of_set_vlan_pcp
action of_set_vlan_vid
action pf
action port_id
action port_representor
action queue
action represented_port
action rss
action set_mac_dst
action set_mac_src
action vf
action vxlan_decap
action vxlan_encap
rte_flow doc out of sync for tap
item ipv6),
item tcp),
item .type
item ipv4,
item ipv6),
item vlan,
item eth
item ipv4
item ipv6
item tcp
item udp
item vlan
action drop :
action passthru;
action { .type
action drop
action passthru
action queue
action rss
rte_flow doc out of sync for txgbe
item item->type
item item->type !
item item->type !
item item->type !
item if (item->type
item if (next->type !
item item->type !
item while (item->type !
item item->type
item e_tag
item eth
item fuzzy
item ipv4
item ipv6
item nvgre
item raw
item sctp
item tcp
item udp
item vlan
item vxlan
action act->type !
action act->type !
action * special case for flow action type security
action * special case for flow action type security.
action drop
action mark
action pf
action queue
action rss
action security
action vf
^ permalink raw reply [flat|nested] 88+ messages in thread
* RE: [PATCH v2 00/25] add the extend rte_flow offload support of nfp PMD
2022-10-25 3:17 ` Chaoyong He
@ 2022-10-25 3:29 ` Chaoyong He
0 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 3:29 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: oss-drivers, Niklas Soderlund, dev
> > On 10/22/2022 9:24 AM, Chaoyong He wrote:
> > > This is the third patch series to add the support of rte_flow
> > > offload for nfp PMD, includes:
> > > Add the offload support of decap/encap of VXLAN
> > > Add the offload support of decap/encap of GENEVE
> > > Add the offload support of decap/encap of NVGRE
> > >
> > > Changes since v1
> > > - Delete the modification of the release note.
> > > - Modify the commit title.
> > > - Rebase to the latest logic.
> > >
> > > Chaoyong He (25):
> > > net/nfp: support IPv4 VXLAN flow item
> > > net/nfp: support IPv6 VXLAN flow item
> > > net/nfp: prepare for IPv4 tunnel encap flow action
> > > net/nfp: prepare for IPv6 tunnel encap flow action
> > > net/nfp: support IPv4 VXLAN encap flow action
> > > net/nfp: support IPv6 VXLAN encap flow action
> > > net/nfp: prepare for IPv4 UDP tunnel decap flow action
> > > net/nfp: prepare for IPv6 UDP tunnel decap flow action
> > > net/nfp: support IPv4 VXLAN decap flow action
> > > net/nfp: support IPv6 VXLAN decap flow action
> > > net/nfp: support IPv4 GENEVE encap flow action
> > > net/nfp: support IPv6 GENEVE encap flow action
> > > net/nfp: support IPv4 GENEVE flow item
> > > net/nfp: support IPv6 GENEVE flow item
> > > net/nfp: support IPv4 GENEVE decap flow action
> > > net/nfp: support IPv6 GENEVE decap flow action
> > > net/nfp: support IPv4 NVGRE encap flow action
> > > net/nfp: support IPv6 NVGRE encap flow action
> > > net/nfp: prepare for IPv4 GRE tunnel decap flow action
> > > net/nfp: prepare for IPv6 GRE tunnel decap flow action
> > > net/nfp: support IPv4 NVGRE flow item
> > > net/nfp: support IPv6 NVGRE flow item
> > > net/nfp: support IPv4 NVGRE decap flow action
> > > net/nfp: support IPv6 NVGRE decap flow action
> > > net/nfp: support new tunnel solution
> > >
> >
> > Hi Chaoyong,
> >
> > The './devtools/check-doc-vs-code.sh' tool reports some inconsistencies;
> > can you please fix them?
>
> Sorry, I can't quite understand the logic of this script.
> I'm quite sure I have listed the items and actions we support in the file
> 'doc/guides/nics/features/nfp.ini'.
>
> And it seems it reports this for every NIC that supports rte_flow?
Oh, I found the reason now.
The git version on my host was too old; after upgrading it, the script now runs successfully.
I will revise the series and send out a new version of the patch; sorry for the bother.
* [PATCH v3 00/26] add the extend rte_flow offload support of nfp PMD
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
` (25 preceding siblings ...)
2022-10-24 15:07 ` [PATCH v2 00/25] add the extend rte_flow offload support of nfp PMD Ferruh Yigit
@ 2022-10-25 7:58 ` Chaoyong He
2022-10-25 7:58 ` [PATCH v3 01/26] net/nfp: fix the app stuck by CPP bridge service Chaoyong He
` (26 more replies)
26 siblings, 27 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:58 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
This is the third patch series to add rte_flow offload support for the
nfp PMD. It includes:
- Add the offload support of decap/encap of VXLAN
- Add the offload support of decap/encap of GENEVE
- Add the offload support of decap/encap of NVGRE
Changes since v2
- Fix the inconsistency in the 'nfp.ini' file.
- Modify the commit message about the new solution for the tunnel decap action.
- Add a commit which fixes the CPP bridge service getting the DPDK app stuck.
Changes since v1
- Delete the modification of the release note.
- Modify the commit title.
- Rebase to the latest logic.
Chaoyong He (26):
net/nfp: fix the app stuck by CPP bridge service
net/nfp: support IPv4 VXLAN flow item
net/nfp: support IPv6 VXLAN flow item
net/nfp: prepare for IPv4 tunnel encap flow action
net/nfp: prepare for IPv6 tunnel encap flow action
net/nfp: support IPv4 VXLAN encap flow action
net/nfp: support IPv6 VXLAN encap flow action
net/nfp: prepare for IPv4 UDP tunnel decap flow action
net/nfp: prepare for IPv6 UDP tunnel decap flow action
net/nfp: support IPv4 VXLAN decap flow action
net/nfp: support IPv6 VXLAN decap flow action
net/nfp: support IPv4 GENEVE encap flow action
net/nfp: support IPv6 GENEVE encap flow action
net/nfp: support IPv4 GENEVE flow item
net/nfp: support IPv6 GENEVE flow item
net/nfp: support IPv4 GENEVE decap flow action
net/nfp: support IPv6 GENEVE decap flow action
net/nfp: support IPv4 NVGRE encap flow action
net/nfp: support IPv6 NVGRE encap flow action
net/nfp: prepare for IPv4 GRE tunnel decap flow action
net/nfp: prepare for IPv6 GRE tunnel decap flow action
net/nfp: support IPv4 NVGRE flow item
net/nfp: support IPv6 NVGRE flow item
net/nfp: support IPv4 NVGRE decap flow action
net/nfp: support IPv6 NVGRE decap flow action
net/nfp: support new solution for tunnel decap action
doc/guides/nics/features/nfp.ini | 9 +
drivers/net/nfp/flower/nfp_flower.c | 14 +
drivers/net/nfp/flower/nfp_flower.h | 24 +
drivers/net/nfp/flower/nfp_flower_cmsg.c | 222 +++
drivers/net/nfp/flower/nfp_flower_cmsg.h | 415 +++++
drivers/net/nfp/nfp_cpp_bridge.c | 6 +
drivers/net/nfp/nfp_flow.c | 2003 +++++++++++++++++++++-
drivers/net/nfp/nfp_flow.h | 56 +
8 files changed, 2680 insertions(+), 69 deletions(-)
--
2.29.3
* [PATCH v3 01/26] net/nfp: fix the app stuck by CPP bridge service
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
@ 2022-10-25 7:58 ` Chaoyong He
2022-10-25 7:58 ` [PATCH v3 02/26] net/nfp: support IPv4 VXLAN flow item Chaoyong He
` (25 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:58 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
The CPP (Command Push Pull) bridge service is needed by some debug
tools, but if no debug tool has ever been run, the initial logic of
the CPP bridge service blocks in the accept() call, and the
DPDK app can't exit normally.
Fixes: 678648abc64c ("net/nfp: fix service stuck on application exit")
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
---
drivers/net/nfp/nfp_cpp_bridge.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index db4b781302..e650fe9930 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -379,6 +379,7 @@ nfp_cpp_bridge_service_func(void *args)
struct nfp_cpp *cpp;
struct nfp_pf_dev *pf_dev;
int sockfd, datafd, op, ret;
+ struct timeval timeout = {1, 0};
unlink("/tmp/nfp_cpp");
sockfd = socket(AF_UNIX, SOCK_STREAM, 0);
@@ -388,6 +389,8 @@ nfp_cpp_bridge_service_func(void *args)
return -EIO;
}
+ setsockopt(sockfd, SOL_SOCKET, SO_RCVTIMEO, (char *)&timeout, sizeof(timeout));
+
memset(&address, 0, sizeof(struct sockaddr));
address.sa_family = AF_UNIX;
@@ -415,6 +418,9 @@ nfp_cpp_bridge_service_func(void *args)
while (rte_service_runstate_get(pf_dev->cpp_bridge_id) != 0) {
datafd = accept(sockfd, NULL, NULL);
if (datafd < 0) {
+ if (errno == EAGAIN || errno == EWOULDBLOCK)
+ continue;
+
RTE_LOG(ERR, PMD, "%s: accept call error (%d)\n",
__func__, errno);
RTE_LOG(ERR, PMD, "%s: service failed\n", __func__);
--
2.29.3
* [PATCH v3 02/26] net/nfp: support IPv4 VXLAN flow item
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
2022-10-25 7:58 ` [PATCH v3 01/26] net/nfp: fix the app stuck by CPP bridge service Chaoyong He
@ 2022-10-25 7:58 ` Chaoyong He
2022-10-25 7:58 ` [PATCH v3 03/26] net/nfp: support IPv6 " Chaoyong He
` (24 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:58 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the corresponding data structures and logic to support
offloading of the IPv4 VXLAN item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 1 +
drivers/net/nfp/flower/nfp_flower_cmsg.h | 35 ++++
drivers/net/nfp/nfp_flow.c | 243 ++++++++++++++++++++---
3 files changed, 246 insertions(+), 33 deletions(-)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index 0184980e88..faaa7da83c 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -35,6 +35,7 @@ sctp = Y
tcp = Y
udp = Y
vlan = Y
+vxlan = Y
[rte_flow actions]
count = Y
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index 6bf8ff7d56..08e2873808 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -324,6 +324,41 @@ struct nfp_flower_ipv6 {
uint8_t ipv6_dst[16];
};
+struct nfp_flower_tun_ipv4 {
+ rte_be32_t src;
+ rte_be32_t dst;
+};
+
+struct nfp_flower_tun_ip_ext {
+ uint8_t tos;
+ uint8_t ttl;
+};
+
+/*
+ * Flow Frame IPv4 UDP TUNNEL --> Tunnel details (5W/20B)
+ * -----------------------------------------------------------------
+ * 3 2 1
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv4_addr_src |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv4_addr_dst |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved | tos | ttl |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | VNI | Reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ */
+struct nfp_flower_ipv4_udp_tun {
+ struct nfp_flower_tun_ipv4 ipv4;
+ rte_be16_t reserved1;
+ struct nfp_flower_tun_ip_ext ip_ext;
+ rte_be32_t reserved2;
+ rte_be32_t tun_id;
+};
+
struct nfp_fl_act_head {
uint8_t jump_id;
uint8_t len_lw;
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 69fc8be7ed..0e1e5ea6b2 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -38,7 +38,8 @@ struct nfp_flow_item_proc {
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask);
+ bool is_mask,
+ bool is_outer_layer);
/* List of possible subsequent items. */
const enum rte_flow_item_type *const next_item;
};
@@ -491,6 +492,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
struct nfp_fl_key_ls *key_ls)
{
struct rte_eth_dev *ethdev;
+ bool outer_ip4_flag = false;
const struct rte_flow_item *item;
struct nfp_flower_representor *representor;
const struct rte_flow_item_port_id *port_id;
@@ -526,6 +528,8 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_IPV4 detected");
key_ls->key_layer |= NFP_FLOWER_LAYER_IPV4;
key_ls->key_size += sizeof(struct nfp_flower_ipv4);
+ if (!outer_ip4_flag)
+ outer_ip4_flag = true;
break;
case RTE_FLOW_ITEM_TYPE_IPV6:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_IPV6 detected");
@@ -547,6 +551,21 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
key_ls->key_layer |= NFP_FLOWER_LAYER_TP;
key_ls->key_size += sizeof(struct nfp_flower_tp_ports);
break;
+ case RTE_FLOW_ITEM_TYPE_VXLAN:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_VXLAN detected");
+ /* Clear IPv4 bits */
+ key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV4;
+ key_ls->tun_type = NFP_FL_TUN_VXLAN;
+ key_ls->key_layer |= NFP_FLOWER_LAYER_VXLAN;
+ if (outer_ip4_flag) {
+ key_ls->key_size += sizeof(struct nfp_flower_ipv4_udp_tun);
+ /*
+ * The outer l3 layer information is
+ * in `struct nfp_flower_ipv4_udp_tun`
+ */
+ key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
+ }
+ break;
default:
PMD_DRV_LOG(ERR, "Item type %d not supported.", item->type);
return -ENOTSUP;
@@ -719,12 +738,25 @@ nfp_flow_key_layers_calculate(const struct rte_flow_item items[],
return ret;
}
+static bool
+nfp_flow_is_tunnel(struct rte_flow *nfp_flow)
+{
+ struct nfp_flower_meta_tci *meta_tci;
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_VXLAN)
+ return true;
+
+ return false;
+}
+
static int
nfp_flow_merge_eth(__rte_unused struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask)
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
{
struct nfp_flower_mac_mpls *eth;
const struct rte_flow_item_eth *spec;
@@ -760,7 +792,8 @@ nfp_flow_merge_vlan(struct rte_flow *nfp_flow,
__rte_unused char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask)
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
{
struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item_vlan *spec;
@@ -789,41 +822,58 @@ nfp_flow_merge_ipv4(struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask)
+ bool is_mask,
+ bool is_outer_layer)
{
struct nfp_flower_ipv4 *ipv4;
const struct rte_ipv4_hdr *hdr;
struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item_ipv4 *spec;
const struct rte_flow_item_ipv4 *mask;
+ struct nfp_flower_ipv4_udp_tun *ipv4_udp_tun;
spec = item->spec;
mask = item->mask ? item->mask : proc->mask_default;
meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
- if (spec == NULL) {
- PMD_DRV_LOG(DEBUG, "nfp flow merge ipv4: no item->spec!");
- goto ipv4_end;
- }
+ if (is_outer_layer && nfp_flow_is_tunnel(nfp_flow)) {
+ if (spec == NULL) {
+ PMD_DRV_LOG(DEBUG, "nfp flow merge ipv4: no item->spec!");
+ return 0;
+ }
- /*
- * reserve space for L4 info.
- * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4
- */
- if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
- *mbuf_off += sizeof(struct nfp_flower_tp_ports);
+ hdr = is_mask ? &mask->hdr : &spec->hdr;
+ ipv4_udp_tun = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
- hdr = is_mask ? &mask->hdr : &spec->hdr;
- ipv4 = (struct nfp_flower_ipv4 *)*mbuf_off;
+ ipv4_udp_tun->ip_ext.tos = hdr->type_of_service;
+ ipv4_udp_tun->ip_ext.ttl = hdr->time_to_live;
+ ipv4_udp_tun->ipv4.src = hdr->src_addr;
+ ipv4_udp_tun->ipv4.dst = hdr->dst_addr;
+ } else {
+ if (spec == NULL) {
+ PMD_DRV_LOG(DEBUG, "nfp flow merge ipv4: no item->spec!");
+ goto ipv4_end;
+ }
+
+ /*
+ * reserve space for L4 info.
+ * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4
+ */
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+ *mbuf_off += sizeof(struct nfp_flower_tp_ports);
+
+ hdr = is_mask ? &mask->hdr : &spec->hdr;
+ ipv4 = (struct nfp_flower_ipv4 *)*mbuf_off;
- ipv4->ip_ext.tos = hdr->type_of_service;
- ipv4->ip_ext.proto = hdr->next_proto_id;
- ipv4->ip_ext.ttl = hdr->time_to_live;
- ipv4->ipv4_src = hdr->src_addr;
- ipv4->ipv4_dst = hdr->dst_addr;
+ ipv4->ip_ext.tos = hdr->type_of_service;
+ ipv4->ip_ext.proto = hdr->next_proto_id;
+ ipv4->ip_ext.ttl = hdr->time_to_live;
+ ipv4->ipv4_src = hdr->src_addr;
+ ipv4->ipv4_dst = hdr->dst_addr;
ipv4_end:
- *mbuf_off += sizeof(struct nfp_flower_ipv4);
+ *mbuf_off += sizeof(struct nfp_flower_ipv4);
+ }
return 0;
}
@@ -833,7 +883,8 @@ nfp_flow_merge_ipv6(struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask)
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
{
struct nfp_flower_ipv6 *ipv6;
const struct rte_ipv6_hdr *hdr;
@@ -878,7 +929,8 @@ nfp_flow_merge_tcp(struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask)
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
{
uint8_t tcp_flags;
struct nfp_flower_tp_ports *ports;
@@ -950,7 +1002,8 @@ nfp_flow_merge_udp(struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask)
+ bool is_mask,
+ bool is_outer_layer)
{
char *ports_off;
struct nfp_flower_tp_ports *ports;
@@ -964,6 +1017,12 @@ nfp_flow_merge_udp(struct rte_flow *nfp_flow,
return 0;
}
+ /* Don't add L4 info if working on an inner layer pattern */
+ if (!is_outer_layer) {
+ PMD_DRV_LOG(INFO, "Detected inner layer UDP, skipping.");
+ return 0;
+ }
+
meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) {
ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) -
@@ -991,7 +1050,8 @@ nfp_flow_merge_sctp(struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
- bool is_mask)
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
{
char *ports_off;
struct nfp_flower_tp_ports *ports;
@@ -1027,10 +1087,42 @@ nfp_flow_merge_sctp(struct rte_flow *nfp_flow,
return 0;
}
+static int
+nfp_flow_merge_vxlan(__rte_unused struct rte_flow *nfp_flow,
+ char **mbuf_off,
+ const struct rte_flow_item *item,
+ const struct nfp_flow_item_proc *proc,
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
+{
+ const struct rte_vxlan_hdr *hdr;
+ struct nfp_flower_ipv4_udp_tun *tun4;
+ const struct rte_flow_item_vxlan *spec;
+ const struct rte_flow_item_vxlan *mask;
+
+ spec = item->spec;
+ if (spec == NULL) {
+ PMD_DRV_LOG(DEBUG, "nfp flow merge vxlan: no item->spec!");
+ goto vxlan_end;
+ }
+
+ mask = item->mask ? item->mask : proc->mask_default;
+ hdr = is_mask ? &mask->hdr : &spec->hdr;
+
+ tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
+ tun4->tun_id = hdr->vx_vni;
+
+vxlan_end:
+ *mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
+
+ return 0;
+}
+
/* Graph of supported items and associated process function */
static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
[RTE_FLOW_ITEM_TYPE_END] = {
- .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH),
+ .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH,
+ RTE_FLOW_ITEM_TYPE_IPV4),
},
[RTE_FLOW_ITEM_TYPE_ETH] = {
.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VLAN,
@@ -1113,6 +1205,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
.merge = nfp_flow_merge_tcp,
},
[RTE_FLOW_ITEM_TYPE_UDP] = {
+ .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VXLAN),
.mask_support = &(const struct rte_flow_item_udp){
.hdr = {
.src_port = RTE_BE16(0xffff),
@@ -1134,6 +1227,17 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
.mask_sz = sizeof(struct rte_flow_item_sctp),
.merge = nfp_flow_merge_sctp,
},
+ [RTE_FLOW_ITEM_TYPE_VXLAN] = {
+ .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH),
+ .mask_support = &(const struct rte_flow_item_vxlan){
+ .hdr = {
+ .vx_vni = RTE_BE32(0xffffff00),
+ },
+ },
+ .mask_default = &rte_flow_item_vxlan_mask,
+ .mask_sz = sizeof(struct rte_flow_item_vxlan),
+ .merge = nfp_flow_merge_vxlan,
+ },
};
static int
@@ -1187,21 +1291,53 @@ nfp_flow_item_check(const struct rte_flow_item *item,
return ret;
}
+static bool
+nfp_flow_is_tun_item(const struct rte_flow_item *item)
+{
+ if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
+ return true;
+
+ return false;
+}
+
+static bool
+nfp_flow_inner_item_get(const struct rte_flow_item items[],
+ const struct rte_flow_item **inner_item)
+{
+ const struct rte_flow_item *item;
+
+ *inner_item = items;
+
+ for (item = items; item->type != RTE_FLOW_ITEM_TYPE_END; ++item) {
+ if (nfp_flow_is_tun_item(item)) {
+ *inner_item = ++item;
+ return true;
+ }
+ }
+
+ return false;
+}
+
static int
nfp_flow_compile_item_proc(const struct rte_flow_item items[],
struct rte_flow *nfp_flow,
char **mbuf_off_exact,
- char **mbuf_off_mask)
+ char **mbuf_off_mask,
+ bool is_outer_layer)
{
int i;
int ret = 0;
+ bool continue_flag = true;
const struct rte_flow_item *item;
const struct nfp_flow_item_proc *proc_list;
proc_list = nfp_flow_item_proc_list;
- for (item = items; item->type != RTE_FLOW_ITEM_TYPE_END; ++item) {
+ for (item = items; item->type != RTE_FLOW_ITEM_TYPE_END && continue_flag; ++item) {
const struct nfp_flow_item_proc *proc = NULL;
+ if (nfp_flow_is_tun_item(item))
+ continue_flag = false;
+
for (i = 0; proc_list->next_item && proc_list->next_item[i]; ++i) {
if (proc_list->next_item[i] == item->type) {
proc = &nfp_flow_item_proc_list[item->type];
@@ -1230,14 +1366,14 @@ nfp_flow_compile_item_proc(const struct rte_flow_item items[],
}
ret = proc->merge(nfp_flow, mbuf_off_exact, item,
- proc, false);
+ proc, false, is_outer_layer);
if (ret != 0) {
PMD_DRV_LOG(ERR, "nfp flow item %d exact merge failed", item->type);
break;
}
ret = proc->merge(nfp_flow, mbuf_off_mask, item,
- proc, true);
+ proc, true, is_outer_layer);
if (ret != 0) {
PMD_DRV_LOG(ERR, "nfp flow item %d mask merge failed", item->type);
break;
@@ -1257,6 +1393,9 @@ nfp_flow_compile_items(__rte_unused struct nfp_flower_representor *representor,
int ret;
char *mbuf_off_mask;
char *mbuf_off_exact;
+ bool is_tun_flow = false;
+ bool is_outer_layer = true;
+ const struct rte_flow_item *loop_item;
mbuf_off_exact = nfp_flow->payload.unmasked_data +
sizeof(struct nfp_flower_meta_tci) +
@@ -1265,14 +1404,29 @@ nfp_flow_compile_items(__rte_unused struct nfp_flower_representor *representor,
sizeof(struct nfp_flower_meta_tci) +
sizeof(struct nfp_flower_in_port);
+ /* Check if this is a tunnel flow and get the inner item */
+ is_tun_flow = nfp_flow_inner_item_get(items, &loop_item);
+ if (is_tun_flow)
+ is_outer_layer = false;
+
/* Go over items */
- ret = nfp_flow_compile_item_proc(items, nfp_flow,
- &mbuf_off_exact, &mbuf_off_mask);
+ ret = nfp_flow_compile_item_proc(loop_item, nfp_flow,
+ &mbuf_off_exact, &mbuf_off_mask, is_outer_layer);
if (ret != 0) {
PMD_DRV_LOG(ERR, "nfp flow item compile failed.");
return -EINVAL;
}
+ /* Go over inner items */
+ if (is_tun_flow) {
+ ret = nfp_flow_compile_item_proc(items, nfp_flow,
+ &mbuf_off_exact, &mbuf_off_mask, true);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "nfp flow outer item compile failed.");
+ return -EINVAL;
+ }
+ }
+
return 0;
}
@@ -2119,12 +2273,35 @@ nfp_flow_query(struct rte_eth_dev *dev,
return 0;
}
+static int
+nfp_flow_tunnel_match(__rte_unused struct rte_eth_dev *dev,
+ __rte_unused struct rte_flow_tunnel *tunnel,
+ __rte_unused struct rte_flow_item **pmd_items,
+ uint32_t *num_of_items,
+ __rte_unused struct rte_flow_error *err)
+{
+ *num_of_items = 0;
+
+ return 0;
+}
+
+static int
+nfp_flow_tunnel_item_release(__rte_unused struct rte_eth_dev *dev,
+ __rte_unused struct rte_flow_item *pmd_items,
+ __rte_unused uint32_t num_of_items,
+ __rte_unused struct rte_flow_error *err)
+{
+ return 0;
+}
+
static const struct rte_flow_ops nfp_flow_ops = {
.validate = nfp_flow_validate,
.create = nfp_flow_create,
.destroy = nfp_flow_destroy,
.flush = nfp_flow_flush,
.query = nfp_flow_query,
+ .tunnel_match = nfp_flow_tunnel_match,
+ .tunnel_item_release = nfp_flow_tunnel_item_release,
};
int
--
2.29.3
^ permalink raw reply [flat|nested] 88+ messages in thread
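As a side note on the key-layer accounting in `nfp_flow_key_layers_calculate_items()` above: for a pattern like ETH / IPV4 / UDP / VXLAN, the VXLAN case replaces the plain IPv4 key with `struct nfp_flower_ipv4_udp_tun`. A minimal sketch of that arithmetic (not from the patch; the 20-byte tunnel size comes from the layout comment, while the 12-byte `nfp_flower_ipv4` and 4-byte `nfp_flower_tp_ports` sizes are assumptions):

```python
# Hypothetical sketch of the key-size accounting for ETH/IPV4/UDP/VXLAN.
# Assumed struct sizes (bytes); only SZ_IPV4_UDP_TUN is stated in the patch.
SZ_IPV4 = 12          # assumed sizeof(struct nfp_flower_ipv4)
SZ_TP_PORTS = 4       # assumed sizeof(struct nfp_flower_tp_ports)
SZ_IPV4_UDP_TUN = 20  # 5W/20B, per the IPv4 UDP tunnel layout comment

def key_size_for_vxlan_over_ipv4():
    key_size = 0
    key_size += SZ_IPV4        # RTE_FLOW_ITEM_TYPE_IPV4 case
    key_size += SZ_TP_PORTS    # RTE_FLOW_ITEM_TYPE_UDP case
    # RTE_FLOW_ITEM_TYPE_VXLAN case: the outer L3 info now lives in
    # struct nfp_flower_ipv4_udp_tun, so the plain IPv4 key is subtracted.
    key_size += SZ_IPV4_UDP_TUN
    key_size -= SZ_IPV4
    return key_size

print(key_size_for_vxlan_over_ipv4())  # 24
```

This mirrors why the VXLAN case both adds `sizeof(struct nfp_flower_ipv4_udp_tun)` and subtracts `sizeof(struct nfp_flower_ipv4)` rather than leaving both in the key.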
* [PATCH v3 03/26] net/nfp: support IPv6 VXLAN flow item
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
2022-10-25 7:58 ` [PATCH v3 01/26] net/nfp: fix the app stuck by CPP bridge service Chaoyong He
2022-10-25 7:58 ` [PATCH v3 02/26] net/nfp: support IPv4 VXLAN flow item Chaoyong He
@ 2022-10-25 7:58 ` Chaoyong He
2022-10-25 7:58 ` [PATCH v3 04/26] net/nfp: prepare for IPv4 tunnel encap flow action Chaoyong He
` (23 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:58 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the corresponding data structures and logic to support
offloading of the IPv6 VXLAN item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_cmsg.h | 42 +++++++++
drivers/net/nfp/nfp_flow.c | 113 +++++++++++++++++------
2 files changed, 129 insertions(+), 26 deletions(-)
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index 08e2873808..996ba3b982 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -329,6 +329,11 @@ struct nfp_flower_tun_ipv4 {
rte_be32_t dst;
};
+struct nfp_flower_tun_ipv6 {
+ uint8_t ipv6_src[16];
+ uint8_t ipv6_dst[16];
+};
+
struct nfp_flower_tun_ip_ext {
uint8_t tos;
uint8_t ttl;
@@ -359,6 +364,43 @@ struct nfp_flower_ipv4_udp_tun {
rte_be32_t tun_id;
};
+/*
+ * Flow Frame IPv6 UDP TUNNEL --> Tunnel details (11W/44B)
+ * -----------------------------------------------------------------
+ * 3 2 1
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 31 - 0 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 63 - 32 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 95 - 64 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 127 - 96 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 31 - 0 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 63 - 32 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 95 - 64 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 127 - 96 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved | tos | ttl |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | VNI | Reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ */
+struct nfp_flower_ipv6_udp_tun {
+ struct nfp_flower_tun_ipv6 ipv6;
+ rte_be16_t reserved1;
+ struct nfp_flower_tun_ip_ext ip_ext;
+ rte_be32_t reserved2;
+ rte_be32_t tun_id;
+};
+
struct nfp_fl_act_head {
uint8_t jump_id;
uint8_t len_lw;
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 0e1e5ea6b2..bbd9dbabde 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -493,6 +493,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
{
struct rte_eth_dev *ethdev;
bool outer_ip4_flag = false;
+ bool outer_ip6_flag = false;
const struct rte_flow_item *item;
struct nfp_flower_representor *representor;
const struct rte_flow_item_port_id *port_id;
@@ -535,6 +536,8 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_IPV6 detected");
key_ls->key_layer |= NFP_FLOWER_LAYER_IPV6;
key_ls->key_size += sizeof(struct nfp_flower_ipv6);
+ if (!outer_ip6_flag)
+ outer_ip6_flag = true;
break;
case RTE_FLOW_ITEM_TYPE_TCP:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_TCP detected");
@@ -553,8 +556,9 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
break;
case RTE_FLOW_ITEM_TYPE_VXLAN:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_VXLAN detected");
- /* Clear IPv4 bits */
+ /* Clear IPv4 and IPv6 bits */
key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV4;
+ key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV6;
key_ls->tun_type = NFP_FL_TUN_VXLAN;
key_ls->key_layer |= NFP_FLOWER_LAYER_VXLAN;
if (outer_ip4_flag) {
@@ -564,6 +568,19 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
* in `struct nfp_flower_ipv4_udp_tun`
*/
key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
+ } else if (outer_ip6_flag) {
+ key_ls->key_layer |= NFP_FLOWER_LAYER_EXT_META;
+ key_ls->key_layer_two |= NFP_FLOWER_LAYER2_TUN_IPV6;
+ key_ls->key_size += sizeof(struct nfp_flower_ext_meta);
+ key_ls->key_size += sizeof(struct nfp_flower_ipv6_udp_tun);
+ /*
+ * The outer l3 layer information is
+ * in `struct nfp_flower_ipv6_udp_tun`
+ */
+ key_ls->key_size -= sizeof(struct nfp_flower_ipv6);
+ } else {
+ PMD_DRV_LOG(ERR, "No outer IP layer for VXLAN tunnel.");
+ return -EINVAL;
}
break;
default:
@@ -884,42 +901,61 @@ nfp_flow_merge_ipv6(struct rte_flow *nfp_flow,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
bool is_mask,
- __rte_unused bool is_outer_layer)
+ bool is_outer_layer)
{
struct nfp_flower_ipv6 *ipv6;
const struct rte_ipv6_hdr *hdr;
struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item_ipv6 *spec;
const struct rte_flow_item_ipv6 *mask;
+ struct nfp_flower_ipv6_udp_tun *ipv6_udp_tun;
spec = item->spec;
mask = item->mask ? item->mask : proc->mask_default;
meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
- if (spec == NULL) {
- PMD_DRV_LOG(DEBUG, "nfp flow merge ipv6: no item->spec!");
- goto ipv6_end;
- }
+ if (is_outer_layer && nfp_flow_is_tunnel(nfp_flow)) {
+ if (spec == NULL) {
+ PMD_DRV_LOG(DEBUG, "nfp flow merge ipv6: no item->spec!");
+ return 0;
+ }
- /*
- * reserve space for L4 info.
- * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4
- */
- if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
- *mbuf_off += sizeof(struct nfp_flower_tp_ports);
+ hdr = is_mask ? &mask->hdr : &spec->hdr;
+ ipv6_udp_tun = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
+
+ ipv6_udp_tun->ip_ext.tos = (hdr->vtc_flow &
+ RTE_IPV6_HDR_TC_MASK) >> RTE_IPV6_HDR_TC_SHIFT;
+ ipv6_udp_tun->ip_ext.ttl = hdr->hop_limits;
+ memcpy(ipv6_udp_tun->ipv6.ipv6_src, hdr->src_addr,
+ sizeof(ipv6_udp_tun->ipv6.ipv6_src));
+ memcpy(ipv6_udp_tun->ipv6.ipv6_dst, hdr->dst_addr,
+ sizeof(ipv6_udp_tun->ipv6.ipv6_dst));
+ } else {
+ if (spec == NULL) {
+ PMD_DRV_LOG(DEBUG, "nfp flow merge ipv6: no item->spec!");
+ goto ipv6_end;
+ }
- hdr = is_mask ? &mask->hdr : &spec->hdr;
- ipv6 = (struct nfp_flower_ipv6 *)*mbuf_off;
+ /*
+ * reserve space for L4 info.
+ * rte_flow has ipv6 before L4 but NFP flower fw requires L4 before ipv6
+ */
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+ *mbuf_off += sizeof(struct nfp_flower_tp_ports);
+
+ hdr = is_mask ? &mask->hdr : &spec->hdr;
+ ipv6 = (struct nfp_flower_ipv6 *)*mbuf_off;
- ipv6->ip_ext.tos = (hdr->vtc_flow & RTE_IPV6_HDR_TC_MASK) >>
- RTE_IPV6_HDR_TC_SHIFT;
- ipv6->ip_ext.proto = hdr->proto;
- ipv6->ip_ext.ttl = hdr->hop_limits;
- memcpy(ipv6->ipv6_src, hdr->src_addr, sizeof(ipv6->ipv6_src));
- memcpy(ipv6->ipv6_dst, hdr->dst_addr, sizeof(ipv6->ipv6_dst));
+ ipv6->ip_ext.tos = (hdr->vtc_flow & RTE_IPV6_HDR_TC_MASK) >>
+ RTE_IPV6_HDR_TC_SHIFT;
+ ipv6->ip_ext.proto = hdr->proto;
+ ipv6->ip_ext.ttl = hdr->hop_limits;
+ memcpy(ipv6->ipv6_src, hdr->src_addr, sizeof(ipv6->ipv6_src));
+ memcpy(ipv6->ipv6_dst, hdr->dst_addr, sizeof(ipv6->ipv6_dst));
ipv6_end:
- *mbuf_off += sizeof(struct nfp_flower_ipv6);
+ *mbuf_off += sizeof(struct nfp_flower_ipv6);
+ }
return 0;
}
@@ -1088,7 +1124,7 @@ nfp_flow_merge_sctp(struct rte_flow *nfp_flow,
}
static int
-nfp_flow_merge_vxlan(__rte_unused struct rte_flow *nfp_flow,
+nfp_flow_merge_vxlan(struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -1097,8 +1133,15 @@ nfp_flow_merge_vxlan(__rte_unused struct rte_flow *nfp_flow,
{
const struct rte_vxlan_hdr *hdr;
struct nfp_flower_ipv4_udp_tun *tun4;
+ struct nfp_flower_ipv6_udp_tun *tun6;
+ struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item_vxlan *spec;
const struct rte_flow_item_vxlan *mask;
+ struct nfp_flower_ext_meta *ext_meta = NULL;
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
spec = item->spec;
if (spec == NULL) {
@@ -1109,11 +1152,21 @@ nfp_flow_merge_vxlan(__rte_unused struct rte_flow *nfp_flow,
mask = item->mask ? item->mask : proc->mask_default;
hdr = is_mask ? &mask->hdr : &spec->hdr;
- tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
- tun4->tun_id = hdr->vx_vni;
+ if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_TUN_IPV6)) {
+ tun6 = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
+ tun6->tun_id = hdr->vx_vni;
+ } else {
+ tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
+ tun4->tun_id = hdr->vx_vni;
+ }
vxlan_end:
- *mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
+ if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_TUN_IPV6))
+ *mbuf_off += sizeof(struct nfp_flower_ipv6_udp_tun);
+ else
+ *mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
return 0;
}
@@ -1122,7 +1175,8 @@ nfp_flow_merge_vxlan(__rte_unused struct rte_flow *nfp_flow,
static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
[RTE_FLOW_ITEM_TYPE_END] = {
.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH,
- RTE_FLOW_ITEM_TYPE_IPV4),
+ RTE_FLOW_ITEM_TYPE_IPV4,
+ RTE_FLOW_ITEM_TYPE_IPV6),
},
[RTE_FLOW_ITEM_TYPE_ETH] = {
.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VLAN,
@@ -1395,6 +1449,7 @@ nfp_flow_compile_items(__rte_unused struct nfp_flower_representor *representor,
char *mbuf_off_exact;
bool is_tun_flow = false;
bool is_outer_layer = true;
+ struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item *loop_item;
mbuf_off_exact = nfp_flow->payload.unmasked_data +
@@ -1404,6 +1459,12 @@ nfp_flow_compile_items(__rte_unused struct nfp_flower_representor *representor,
sizeof(struct nfp_flower_meta_tci) +
sizeof(struct nfp_flower_in_port);
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) {
+ mbuf_off_exact += sizeof(struct nfp_flower_ext_meta);
+ mbuf_off_mask += sizeof(struct nfp_flower_ext_meta);
+ }
+
/* Check if this is a tunnel flow and get the inner item */
is_tun_flow = nfp_flow_inner_item_get(items, &loop_item);
if (is_tun_flow)
--
2.29.3
^ permalink raw reply [flat|nested] 88+ messages in thread
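The dispatch in `nfp_flow_merge_vxlan()` above can be summarized as: when extended metadata is present and carries `NFP_FLOWER_LAYER2_TUN_IPV6`, the VNI is written into the larger IPv6 UDP tunnel key and the match buffer advances by its size, otherwise the IPv4 one is used. A rough sketch (not the driver code; the flag bit values below are assumptions, the struct sizes come from the layout comments):

```python
# Hypothetical sketch of the IPv4-vs-IPv6 tunnel-key dispatch.
NFP_FLOWER_LAYER_EXT_META = 0x80    # assumed bit value
NFP_FLOWER_LAYER2_TUN_IPV6 = 0x100  # assumed bit value

SZ_IPV4_UDP_TUN = 20  # 5W/20B, per the IPv4 tunnel layout comment
SZ_IPV6_UDP_TUN = 44  # 11W/44B, per the IPv6 tunnel layout comment

def vxlan_key_advance(key_layer, key_layer2):
    """How far mbuf_off advances after merging the VXLAN item."""
    if (key_layer & NFP_FLOWER_LAYER_EXT_META) and \
       (key_layer2 & NFP_FLOWER_LAYER2_TUN_IPV6):
        return SZ_IPV6_UDP_TUN
    return SZ_IPV4_UDP_TUN

print(vxlan_key_advance(NFP_FLOWER_LAYER_EXT_META,
                        NFP_FLOWER_LAYER2_TUN_IPV6))  # 44
print(vxlan_key_advance(0, 0))  # 20
```

This is also why `nfp_flow_compile_items()` skips past `struct nfp_flower_ext_meta` before merging items whenever `NFP_FLOWER_LAYER_EXT_META` is set.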
* [PATCH v3 04/26] net/nfp: prepare for IPv4 tunnel encap flow action
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
` (2 preceding siblings ...)
2022-10-25 7:58 ` [PATCH v3 03/26] net/nfp: support IPv6 " Chaoyong He
@ 2022-10-25 7:58 ` Chaoyong He
2022-10-25 7:58 ` [PATCH v3 05/26] net/nfp: prepare for IPv6 " Chaoyong He
` (22 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:58 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the related data structures and functions to prepare for
the encap action of IPv4 tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_cmsg.c | 29 ++++++++
drivers/net/nfp/flower/nfp_flower_cmsg.h | 93 ++++++++++++++++++++++++
drivers/net/nfp/nfp_flow.c | 88 ++++++++++++++++++++++
drivers/net/nfp/nfp_flow.h | 27 +++++++
4 files changed, 237 insertions(+)
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c
index 15d838117a..7021d1fd43 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.c
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c
@@ -246,3 +246,32 @@ nfp_flower_cmsg_flow_add(struct nfp_app_fw_flower *app_fw_flower,
return 0;
}
+
+int
+nfp_flower_cmsg_tun_neigh_v4_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_flower_cmsg_tun_neigh_v4 *payload)
+{
+ uint16_t cnt;
+ size_t msg_len;
+ struct rte_mbuf *mbuf;
+ struct nfp_flower_cmsg_tun_neigh_v4 *msg;
+
+ mbuf = rte_pktmbuf_alloc(app_fw_flower->ctrl_pktmbuf_pool);
+ if (mbuf == NULL) {
+ PMD_DRV_LOG(DEBUG, "Failed to alloc mbuf for v4 tun neigh");
+ return -ENOMEM;
+ }
+
+ msg_len = sizeof(struct nfp_flower_cmsg_tun_neigh_v4);
+ msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_TUN_NEIGH, msg_len);
+ memcpy(msg, payload, msg_len);
+
+ cnt = nfp_flower_ctrl_vnic_xmit(app_fw_flower, mbuf);
+ if (cnt == 0) {
+ PMD_DRV_LOG(ERR, "Send cmsg through ctrl vnic failed.");
+ rte_pktmbuf_free(mbuf);
+ return -EIO;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index 996ba3b982..e44e311176 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -129,6 +129,36 @@ struct nfp_flower_cmsg_port_mod {
rte_be16_t mtu;
};
+struct nfp_flower_tun_neigh {
+ uint8_t dst_mac[RTE_ETHER_ADDR_LEN];
+ uint8_t src_mac[RTE_ETHER_ADDR_LEN];
+ rte_be32_t port_id;
+};
+
+/*
+ * NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V4
+ * Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
+ * -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +---------------------------------------------------------------+
+ * 0 | DST_IPV4 |
+ * +---------------------------------------------------------------+
+ * 1 | SRC_IPV4 |
+ * +---------------------------------------------------------------+
+ * 2 | DST_MAC_B5_B4_B3_B2 |
+ * +-------------------------------+-------------------------------+
+ * 3 | DST_MAC_B1_B0 | SRC_MAC_B5_B4 |
+ * +-------------------------------+-------------------------------+
+ * 4 | SRC_MAC_B3_B2_B1_B0 |
+ * +---------------------------------------------------------------+
+ * 5 | Egress Port (NFP internal) |
+ * +---------------------------------------------------------------+
+ */
+struct nfp_flower_cmsg_tun_neigh_v4 {
+ rte_be32_t dst_ipv4;
+ rte_be32_t src_ipv4;
+ struct nfp_flower_tun_neigh common;
+};
+
/*
* NFP_FLOWER_CMSG_TYPE_FLOW_STATS
* Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
@@ -574,6 +604,67 @@ struct nfp_fl_act_set_tport {
rte_be16_t dst_port;
};
+/*
+ * Pre-tunnel
+ * 3 2 1
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | - | opcode | |jump_id| - |M| - |V|
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_daddr_127_96 / ipv4_daddr |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_daddr_95_64 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_daddr_63_32 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_daddr_31_0 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ */
+struct nfp_fl_act_pre_tun {
+ struct nfp_fl_act_head head;
+ rte_be16_t flags;
+ union {
+ rte_be32_t ipv4_dst;
+ uint8_t ipv6_dst[16];
+ };
+};
+
+/*
+ * Set tunnel
+ * 3 2 1
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | res | opcode | res | len_lw| reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | tun_id0 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | tun_id1 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | reserved | type |r| idx |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv4_flags | ttl | tos |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | reserved_cvs1 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | reserved_cvs2 | reserved_cvs3 |
+ * | var_flags | var_np |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ */
+struct nfp_fl_act_set_tun {
+ struct nfp_fl_act_head head;
+ rte_be16_t reserved;
+ rte_be64_t tun_id;
+ rte_be32_t tun_type_index;
+ rte_be16_t tun_flags;
+ uint8_t ttl;
+ uint8_t tos;
+ rte_be16_t outer_vlan_tpid;
+ rte_be16_t outer_vlan_tci;
+ uint8_t tun_len; /* Only valid for NFP_FL_TUNNEL_GENEVE */
+ uint8_t reserved2;
+ rte_be16_t tun_proto; /* Only valid for NFP_FL_TUNNEL_GENEVE */
+} __rte_packed;
+
int nfp_flower_cmsg_mac_repr(struct nfp_app_fw_flower *app_fw_flower);
int nfp_flower_cmsg_repr_reify(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_flower_representor *repr);
@@ -583,5 +674,7 @@ int nfp_flower_cmsg_flow_delete(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *flow);
int nfp_flower_cmsg_flow_add(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *flow);
+int nfp_flower_cmsg_tun_neigh_v4_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_flower_cmsg_tun_neigh_v4 *payload);
#endif /* _NFP_CMSG_H_ */
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index bbd9dbabde..f71f8b1d5b 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1772,6 +1772,91 @@ nfp_flow_action_set_tc(char *act_data,
tc_hl->reserved = 0;
}
+__rte_unused static void
+nfp_flow_pre_tun_v4_process(struct nfp_fl_act_pre_tun *pre_tun,
+ rte_be32_t ipv4_dst)
+{
+ pre_tun->head.jump_id = NFP_FL_ACTION_OPCODE_PRE_TUNNEL;
+ pre_tun->head.len_lw = sizeof(struct nfp_fl_act_pre_tun) >> NFP_FL_LW_SIZ;
+ pre_tun->ipv4_dst = ipv4_dst;
+}
+
+__rte_unused static void
+nfp_flow_set_tun_process(struct nfp_fl_act_set_tun *set_tun,
+ enum nfp_flower_tun_type tun_type,
+ uint64_t tun_id,
+ uint8_t ttl,
+ uint8_t tos)
+{
+ /* Currently only one pre-tunnel is supported, so the index is always 0. */
+ uint8_t pretun_idx = 0;
+ uint32_t tun_type_index;
+
+ tun_type_index = ((tun_type << 4) & 0xf0) | (pretun_idx & 0x07);
+
+ set_tun->head.jump_id = NFP_FL_ACTION_OPCODE_SET_TUNNEL;
+ set_tun->head.len_lw = sizeof(struct nfp_fl_act_set_tun) >> NFP_FL_LW_SIZ;
+ set_tun->tun_type_index = rte_cpu_to_be_32(tun_type_index);
+ set_tun->tun_id = rte_cpu_to_be_64(tun_id);
+ set_tun->ttl = ttl;
+ set_tun->tos = tos;
+}
+
+__rte_unused static int
+nfp_flower_add_tun_neigh_v4_encap(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun,
+ const struct rte_ether_hdr *eth,
+ const struct rte_flow_item_ipv4 *ipv4)
+{
+ struct nfp_fl_tun *tmp;
+ struct nfp_flow_priv *priv;
+ struct nfp_flower_in_port *port;
+ struct nfp_flower_cmsg_tun_neigh_v4 payload;
+
+ tun->payload.v6_flag = 0;
+ tun->payload.dst.dst_ipv4 = ipv4->hdr.dst_addr;
+ tun->payload.src.src_ipv4 = ipv4->hdr.src_addr;
+ memcpy(tun->payload.dst_addr, eth->dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+ memcpy(tun->payload.src_addr, eth->src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ tun->ref_cnt = 1;
+ priv = app_fw_flower->flow_priv;
+ LIST_FOREACH(tmp, &priv->nn_list, next) {
+ if (memcmp(&tmp->payload, &tun->payload, sizeof(struct nfp_fl_tun_entry)) == 0) {
+ tmp->ref_cnt++;
+ return 0;
+ }
+ }
+
+ LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+
+ port = (struct nfp_flower_in_port *)((char *)nfp_flow_meta +
+ sizeof(struct nfp_fl_rule_metadata) +
+ sizeof(struct nfp_flower_meta_tci));
+
+ memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v4));
+ payload.dst_ipv4 = ipv4->hdr.dst_addr;
+ payload.src_ipv4 = ipv4->hdr.src_addr;
+ memcpy(payload.common.dst_mac, eth->dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+ memcpy(payload.common.src_mac, eth->src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+ payload.common.port_id = port->in_port;
+
+ return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
+}
+
+__rte_unused static int
+nfp_flower_del_tun_neigh_v4(struct nfp_app_fw_flower *app_fw_flower,
+ rte_be32_t ipv4)
+{
+ struct nfp_flower_cmsg_tun_neigh_v4 payload;
+
+ memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v4));
+ payload.dst_ipv4 = ipv4;
+
+ return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
+}
+
static int
nfp_flow_compile_action(__rte_unused struct nfp_flower_representor *representor,
const struct rte_flow_action actions[],
@@ -2487,6 +2572,9 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev)
goto free_mask_table;
}
+ /* neighbor next list */
+ LIST_INIT(&priv->nn_list);
+
return 0;
free_mask_table:
diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h
index 0ad89e51f4..892dbc08f1 100644
--- a/drivers/net/nfp/nfp_flow.h
+++ b/drivers/net/nfp/nfp_flow.h
@@ -90,6 +90,11 @@ enum nfp_flower_tun_type {
NFP_FL_TUN_GENEVE = 4,
};
+enum nfp_flow_type {
+ NFP_FLOW_COMMON,
+ NFP_FLOW_ENCAP,
+};
+
struct nfp_fl_key_ls {
uint32_t key_layer_two;
uint8_t key_layer;
@@ -118,6 +123,24 @@ struct nfp_fl_payload {
char *action_data;
};
+struct nfp_fl_tun {
+ LIST_ENTRY(nfp_fl_tun) next;
+ uint8_t ref_cnt;
+ struct nfp_fl_tun_entry {
+ uint8_t v6_flag;
+ uint8_t dst_addr[RTE_ETHER_ADDR_LEN];
+ uint8_t src_addr[RTE_ETHER_ADDR_LEN];
+ union {
+ rte_be32_t dst_ipv4;
+ uint8_t dst_ipv6[16];
+ } dst;
+ union {
+ rte_be32_t src_ipv4;
+ uint8_t src_ipv6[16];
+ } src;
+ } payload;
+};
+
#define CIRC_CNT(head, tail, size) (((head) - (tail)) & ((size) - 1))
#define CIRC_SPACE(head, tail, size) CIRC_CNT((tail), ((head) + 1), (size))
struct circ_buf {
@@ -161,13 +184,17 @@ struct nfp_flow_priv {
struct nfp_fl_stats_id stats_ids; /**< The stats id ring. */
struct nfp_fl_stats *stats; /**< Store stats of flow. */
rte_spinlock_t stats_lock; /** < Lock the update of 'stats' field. */
+ /* neighbor next */
+ LIST_HEAD(, nfp_fl_tun) nn_list; /**< Store nn entry */
};
struct rte_flow {
struct nfp_fl_payload payload;
+ struct nfp_fl_tun tun;
size_t length;
uint32_t hash_key;
bool install_flag;
+ enum nfp_flow_type type;
};
int nfp_flow_priv_init(struct nfp_pf_dev *pf_dev);
--
2.29.3
^ permalink raw reply [flat|nested] 88+ messages in thread
* [PATCH v3 05/26] net/nfp: prepare for IPv6 tunnel encap flow action
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
` (3 preceding siblings ...)
2022-10-25 7:58 ` [PATCH v3 04/26] net/nfp: prepare for IPv4 tunnel encap flow action Chaoyong He
@ 2022-10-25 7:58 ` Chaoyong He
2022-10-25 7:58 ` [PATCH v3 06/26] net/nfp: support IPv4 VXLAN " Chaoyong He
` (21 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:58 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the related data structures and functions to prepare for
the encap action of IPv6 tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_cmsg.c | 29 +++++++
drivers/net/nfp/flower/nfp_flower_cmsg.h | 40 +++++++++
drivers/net/nfp/nfp_flow.c | 105 ++++++++++++++++++++++-
3 files changed, 173 insertions(+), 1 deletion(-)
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c
index 7021d1fd43..8983178378 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.c
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c
@@ -275,3 +275,32 @@ nfp_flower_cmsg_tun_neigh_v4_rule(struct nfp_app_fw_flower *app_fw_flower,
return 0;
}
+
+int
+nfp_flower_cmsg_tun_neigh_v6_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_flower_cmsg_tun_neigh_v6 *payload)
+{
+ uint16_t cnt;
+ size_t msg_len;
+ struct rte_mbuf *mbuf;
+ struct nfp_flower_cmsg_tun_neigh_v6 *msg;
+
+ mbuf = rte_pktmbuf_alloc(app_fw_flower->ctrl_pktmbuf_pool);
+ if (mbuf == NULL) {
+ PMD_DRV_LOG(DEBUG, "Failed to alloc mbuf for v6 tun neigh");
+ return -ENOMEM;
+ }
+
+ msg_len = sizeof(struct nfp_flower_cmsg_tun_neigh_v6);
+ msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V6, msg_len);
+ memcpy(msg, payload, msg_len);
+
+ cnt = nfp_flower_ctrl_vnic_xmit(app_fw_flower, mbuf);
+ if (cnt == 0) {
+ PMD_DRV_LOG(ERR, "Send cmsg through ctrl vnic failed.");
+ rte_pktmbuf_free(mbuf);
+ return -EIO;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index e44e311176..d1e0562cf9 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -159,6 +159,42 @@ struct nfp_flower_cmsg_tun_neigh_v4 {
struct nfp_flower_tun_neigh common;
};
+/*
+ * NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V6
+ * Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
+ * -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +---------------------------------------------------------------+
+ * 0 | DST_IPV6 [0] |
+ * +---------------------------------------------------------------+
+ * 1 | DST_IPV6 [1] |
+ * +---------------------------------------------------------------+
+ * 2 | DST_IPV6 [2] |
+ * +---------------------------------------------------------------+
+ * 3 | DST_IPV6 [3] |
+ * +---------------------------------------------------------------+
+ * 4 | SRC_IPV6 [0] |
+ * +---------------------------------------------------------------+
+ * 5 | SRC_IPV6 [1] |
+ * +---------------------------------------------------------------+
+ * 6 | SRC_IPV6 [2] |
+ * +---------------------------------------------------------------+
+ * 7 | SRC_IPV6 [3] |
+ * +---------------------------------------------------------------+
+ * 8 | DST_MAC_B5_B4_B3_B2 |
+ * +-------------------------------+-------------------------------+
+ * 9 | DST_MAC_B1_B0 | SRC_MAC_B5_B4 |
+ * +-------------------------------+-------------------------------+
+ * 10 | SRC_MAC_B3_B2_B1_B0 |
+ * +---------------+---------------+---------------+---------------+
+ * 11 | Egress Port (NFP internal) |
+ * +---------------------------------------------------------------+
+ */
+struct nfp_flower_cmsg_tun_neigh_v6 {
+ uint8_t dst_ipv6[16];
+ uint8_t src_ipv6[16];
+ struct nfp_flower_tun_neigh common;
+};
+
/*
* NFP_FLOWER_CMSG_TYPE_FLOW_STATS
* Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
@@ -629,6 +665,8 @@ struct nfp_fl_act_pre_tun {
};
};
+#define NFP_FL_PRE_TUN_IPV6 (1 << 0)
+
/*
* Set tunnel
* 3 2 1
@@ -676,5 +714,7 @@ int nfp_flower_cmsg_flow_add(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *flow);
int nfp_flower_cmsg_tun_neigh_v4_rule(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_flower_cmsg_tun_neigh_v4 *payload);
+int nfp_flower_cmsg_tun_neigh_v6_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_flower_cmsg_tun_neigh_v6 *payload);
#endif /* _NFP_CMSG_H_ */
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index f71f8b1d5b..e1b892f303 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1781,6 +1781,16 @@ nfp_flow_pre_tun_v4_process(struct nfp_fl_act_pre_tun *pre_tun,
pre_tun->ipv4_dst = ipv4_dst;
}
+__rte_unused static void
+nfp_flow_pre_tun_v6_process(struct nfp_fl_act_pre_tun *pre_tun,
+ const uint8_t ipv6_dst[])
+{
+ pre_tun->head.jump_id = NFP_FL_ACTION_OPCODE_PRE_TUNNEL;
+ pre_tun->head.len_lw = sizeof(struct nfp_fl_act_pre_tun) >> NFP_FL_LW_SIZ;
+ pre_tun->flags = rte_cpu_to_be_16(NFP_FL_PRE_TUN_IPV6);
+ memcpy(pre_tun->ipv6_dst, ipv6_dst, sizeof(pre_tun->ipv6_dst));
+}
+
__rte_unused static void
nfp_flow_set_tun_process(struct nfp_fl_act_set_tun *set_tun,
enum nfp_flower_tun_type tun_type,
@@ -1845,7 +1855,7 @@ nfp_flower_add_tun_neigh_v4_encap(struct nfp_app_fw_flower *app_fw_flower,
return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
}
-__rte_unused static int
+static int
nfp_flower_del_tun_neigh_v4(struct nfp_app_fw_flower *app_fw_flower,
rte_be32_t ipv4)
{
@@ -1857,6 +1867,99 @@ nfp_flower_del_tun_neigh_v4(struct nfp_app_fw_flower *app_fw_flower,
return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
}
+__rte_unused static int
+nfp_flower_add_tun_neigh_v6_encap(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun,
+ const struct rte_ether_hdr *eth,
+ const struct rte_flow_item_ipv6 *ipv6)
+{
+ struct nfp_fl_tun *tmp;
+ struct nfp_flow_priv *priv;
+ struct nfp_flower_in_port *port;
+ struct nfp_flower_cmsg_tun_neigh_v6 payload;
+
+ tun->payload.v6_flag = 1;
+ memcpy(tun->payload.dst.dst_ipv6, ipv6->hdr.dst_addr, sizeof(tun->payload.dst.dst_ipv6));
+ memcpy(tun->payload.src.src_ipv6, ipv6->hdr.src_addr, sizeof(tun->payload.src.src_ipv6));
+ memcpy(tun->payload.dst_addr, eth->dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+ memcpy(tun->payload.src_addr, eth->src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ tun->ref_cnt = 1;
+ priv = app_fw_flower->flow_priv;
+ LIST_FOREACH(tmp, &priv->nn_list, next) {
+ if (memcmp(&tmp->payload, &tun->payload, sizeof(struct nfp_fl_tun_entry)) == 0) {
+ tmp->ref_cnt++;
+ return 0;
+ }
+ }
+
+ LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+
+ port = (struct nfp_flower_in_port *)((char *)nfp_flow_meta +
+ sizeof(struct nfp_fl_rule_metadata) +
+ sizeof(struct nfp_flower_meta_tci));
+
+ memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v6));
+ memcpy(payload.dst_ipv6, ipv6->hdr.dst_addr, sizeof(payload.dst_ipv6));
+ memcpy(payload.src_ipv6, ipv6->hdr.src_addr, sizeof(payload.src_ipv6));
+ memcpy(payload.common.dst_mac, eth->dst_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+ memcpy(payload.common.src_mac, eth->src_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+ payload.common.port_id = port->in_port;
+
+ return nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &payload);
+}
+
+static int
+nfp_flower_del_tun_neigh_v6(struct nfp_app_fw_flower *app_fw_flower,
+ uint8_t *ipv6)
+{
+ struct nfp_flower_cmsg_tun_neigh_v6 payload;
+
+ memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v6));
+ memcpy(payload.dst_ipv6, ipv6, sizeof(payload.dst_ipv6));
+
+ return nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &payload);
+}
+
+__rte_unused static int
+nfp_flower_del_tun_neigh(struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow)
+{
+ int ret;
+ bool flag = false;
+ struct nfp_fl_tun *tmp;
+ struct nfp_fl_tun *tun;
+
+ tun = &nfp_flow->tun;
+ LIST_FOREACH(tmp, &app_fw_flower->flow_priv->nn_list, next) {
+ ret = memcmp(&tmp->payload, &tun->payload, sizeof(struct nfp_fl_tun_entry));
+ if (ret == 0) {
+ tmp->ref_cnt--;
+ flag = true;
+ break;
+ }
+ }
+
+ if (!flag) {
+ PMD_DRV_LOG(DEBUG, "Can't find nn entry in the nn list");
+ return -EINVAL;
+ }
+
+ if (tmp->ref_cnt == 0) {
+ LIST_REMOVE(tmp, next);
+ if (tmp->payload.v6_flag != 0) {
+ return nfp_flower_del_tun_neigh_v6(app_fw_flower,
+ tmp->payload.dst.dst_ipv6);
+ } else {
+ return nfp_flower_del_tun_neigh_v4(app_fw_flower,
+ tmp->payload.dst.dst_ipv4);
+ }
+ }
+
+ return 0;
+}
+
static int
nfp_flow_compile_action(__rte_unused struct nfp_flower_representor *representor,
const struct rte_flow_action actions[],
--
2.29.3
^ permalink raw reply [flat|nested] 88+ messages in thread
* [PATCH v3 06/26] net/nfp: support IPv4 VXLAN encap flow action
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
` (4 preceding siblings ...)
2022-10-25 7:58 ` [PATCH v3 05/26] net/nfp: prepare for IPv6 " Chaoyong He
@ 2022-10-25 7:58 ` Chaoyong He
2022-10-25 7:58 ` [PATCH v3 07/26] net/nfp: support IPv6 " Chaoyong He
` (20 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:58 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the offload support of encap action for IPv4 VXLAN tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 1 +
drivers/net/nfp/nfp_flow.c | 132 +++++++++++++++++++++++++++++--
2 files changed, 128 insertions(+), 5 deletions(-)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index faaa7da83c..ff97787bd9 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -56,3 +56,4 @@ set_mac_src = Y
set_tp_dst = Y
set_tp_src = Y
set_ttl = Y
+vxlan_encap = Y
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index e1b892f303..d2e779ca96 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -10,8 +10,10 @@
#include <rte_malloc.h>
#include "nfp_common.h"
+#include "nfp_ctrl.h"
#include "nfp_flow.h"
#include "nfp_logs.h"
+#include "nfp_rxtx.h"
#include "flower/nfp_flower.h"
#include "flower/nfp_flower_cmsg.h"
#include "flower/nfp_flower_ctrl.h"
@@ -19,6 +21,17 @@
#include "nfpcore/nfp_mip.h"
#include "nfpcore/nfp_rtsym.h"
+/*
+ * Maximum number of items in struct rte_flow_action_vxlan_encap.
+ * ETH / IPv4(6) / UDP / VXLAN / END
+ */
+#define ACTION_VXLAN_ENCAP_ITEMS_NUM 5
+
+struct vxlan_data {
+ struct rte_flow_action_vxlan_encap conf;
+ struct rte_flow_item items[ACTION_VXLAN_ENCAP_ITEMS_NUM];
+};
+
/* Static initializer for a list of subsequent item types */
#define NEXT_ITEM(...) \
((const enum rte_flow_item_type []){ \
@@ -724,6 +737,11 @@ nfp_flow_key_layers_calculate_actions(const struct rte_flow_action actions[],
tc_hl_flag = true;
}
break;
+ case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP detected");
+ key_ls->act_size += sizeof(struct nfp_fl_act_pre_tun);
+ key_ls->act_size += sizeof(struct nfp_fl_act_set_tun);
+ break;
default:
PMD_DRV_LOG(ERR, "Action type %d not supported.", action->type);
return -ENOTSUP;
@@ -1772,7 +1790,7 @@ nfp_flow_action_set_tc(char *act_data,
tc_hl->reserved = 0;
}
-__rte_unused static void
+static void
nfp_flow_pre_tun_v4_process(struct nfp_fl_act_pre_tun *pre_tun,
rte_be32_t ipv4_dst)
{
@@ -1791,7 +1809,7 @@ nfp_flow_pre_tun_v6_process(struct nfp_fl_act_pre_tun *pre_tun,
memcpy(pre_tun->ipv6_dst, ipv6_dst, sizeof(pre_tun->ipv6_dst));
}
-__rte_unused static void
+static void
nfp_flow_set_tun_process(struct nfp_fl_act_set_tun *set_tun,
enum nfp_flower_tun_type tun_type,
uint64_t tun_id,
@@ -1812,7 +1830,7 @@ nfp_flow_set_tun_process(struct nfp_fl_act_set_tun *set_tun,
set_tun->tos = tos;
}
-__rte_unused static int
+static int
nfp_flower_add_tun_neigh_v4_encap(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_fl_rule_metadata *nfp_flow_meta,
struct nfp_fl_tun *tun,
@@ -1922,7 +1940,7 @@ nfp_flower_del_tun_neigh_v6(struct nfp_app_fw_flower *app_fw_flower,
return nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &payload);
}
-__rte_unused static int
+static int
nfp_flower_del_tun_neigh(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *nfp_flow)
{
@@ -1961,7 +1979,81 @@ nfp_flower_del_tun_neigh(struct nfp_app_fw_flower *app_fw_flower,
}
static int
-nfp_flow_compile_action(__rte_unused struct nfp_flower_representor *representor,
+nfp_flow_action_vxlan_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct vxlan_data *vxlan_data,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ struct nfp_fl_act_pre_tun *pre_tun;
+ struct nfp_fl_act_set_tun *set_tun;
+ const struct rte_flow_item_eth *eth;
+ const struct rte_flow_item_ipv4 *ipv4;
+ const struct rte_flow_item_vxlan *vxlan;
+ size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
+
+ eth = (const struct rte_flow_item_eth *)vxlan_data->items[0].spec;
+ ipv4 = (const struct rte_flow_item_ipv4 *)vxlan_data->items[1].spec;
+ vxlan = (const struct rte_flow_item_vxlan *)vxlan_data->items[3].spec;
+
+ pre_tun = (struct nfp_fl_act_pre_tun *)actions;
+ memset(pre_tun, 0, act_pre_size);
+ nfp_flow_pre_tun_v4_process(pre_tun, ipv4->hdr.dst_addr);
+
+ set_tun = (struct nfp_fl_act_set_tun *)(act_data + act_pre_size);
+ memset(set_tun, 0, act_set_size);
+ nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_VXLAN, vxlan->hdr.vx_vni,
+ ipv4->hdr.time_to_live, ipv4->hdr.type_of_service);
+ set_tun->tun_flags = vxlan->hdr.vx_flags;
+
+ /* Send the tunnel neighbor cmsg to fw */
+ return nfp_flower_add_tun_neigh_v4_encap(app_fw_flower, nfp_flow_meta,
+ tun, &eth->hdr, ipv4);
+}
+
+static int
+nfp_flow_action_vxlan_encap(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct rte_flow_action *action,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ size_t act_len;
+ size_t act_pre_size;
+ const struct vxlan_data *vxlan_data;
+
+ vxlan_data = action->conf;
+ if (vxlan_data->items[0].type != RTE_FLOW_ITEM_TYPE_ETH ||
+ vxlan_data->items[1].type != RTE_FLOW_ITEM_TYPE_IPV4 ||
+ vxlan_data->items[2].type != RTE_FLOW_ITEM_TYPE_UDP ||
+ vxlan_data->items[3].type != RTE_FLOW_ITEM_TYPE_VXLAN ||
+ vxlan_data->items[4].type != RTE_FLOW_ITEM_TYPE_END) {
+ PMD_DRV_LOG(ERR, "Not a valid vxlan action conf.");
+ return -EINVAL;
+ }
+
+ /*
+ * Pre_tunnel action must be the first on the action list.
+ * If other actions already exist, they need to be pushed forward.
+ */
+ act_len = act_data - actions;
+ if (act_len != 0) {
+ act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ memmove(actions + act_pre_size, actions, act_len);
+ }
+
+ if (vxlan_data->items[1].type == RTE_FLOW_ITEM_TYPE_IPV4)
+ return nfp_flow_action_vxlan_encap_v4(app_fw_flower, act_data,
+ actions, vxlan_data, nfp_flow_meta, tun);
+
+ return 0;
+}
+
+static int
+nfp_flow_compile_action(struct nfp_flower_representor *representor,
const struct rte_flow_action actions[],
struct rte_flow *nfp_flow)
{
@@ -2118,6 +2210,20 @@ nfp_flow_compile_action(__rte_unused struct nfp_flower_representor *representor,
tc_hl_flag = true;
}
break;
+ case RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP:
+ PMD_DRV_LOG(DEBUG, "Process RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP");
+ ret = nfp_flow_action_vxlan_encap(representor->app_fw_flower,
+ position, action_data, action, nfp_flow_meta,
+ &nfp_flow->tun);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed to process"
+ " RTE_FLOW_ACTION_TYPE_VXLAN_ENCAP");
+ return ret;
+ }
+ position += sizeof(struct nfp_fl_act_pre_tun);
+ position += sizeof(struct nfp_fl_act_set_tun);
+ nfp_flow->type = NFP_FLOW_ENCAP;
+ break;
default:
PMD_DRV_LOG(ERR, "Unsupported action type: %d", action->type);
return -ENOTSUP;
@@ -2410,6 +2516,22 @@ nfp_flow_destroy(struct rte_eth_dev *dev,
goto exit;
}
+ switch (nfp_flow->type) {
+ case NFP_FLOW_COMMON:
+ break;
+ case NFP_FLOW_ENCAP:
+ /* Delete the entry from nn table */
+ ret = nfp_flower_del_tun_neigh(app_fw_flower, nfp_flow);
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "Invalid nfp flow type %d.", nfp_flow->type);
+ ret = -EINVAL;
+ break;
+ }
+
+ if (ret != 0)
+ goto exit;
+
/* Delete the flow from hardware */
if (nfp_flow->install_flag) {
ret = nfp_flower_cmsg_flow_delete(app_fw_flower, nfp_flow);
--
2.29.3
^ permalink raw reply [flat|nested] 88+ messages in thread
* [PATCH v3 07/26] net/nfp: support IPv6 VXLAN encap flow action
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
` (5 preceding siblings ...)
2022-10-25 7:58 ` [PATCH v3 06/26] net/nfp: support IPv4 VXLAN " Chaoyong He
@ 2022-10-25 7:58 ` Chaoyong He
2022-10-25 7:59 ` [PATCH v3 08/26] net/nfp: prepare for IPv4 UDP tunnel decap " Chaoyong He
` (19 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:58 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the offload support of encap action for IPv6 VXLAN tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfp_flow.c | 48 ++++++++++++++++++++++++++++++++++----
1 file changed, 43 insertions(+), 5 deletions(-)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index d2e779ca96..9ee02b0fb9 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1799,7 +1799,7 @@ nfp_flow_pre_tun_v4_process(struct nfp_fl_act_pre_tun *pre_tun,
pre_tun->ipv4_dst = ipv4_dst;
}
-__rte_unused static void
+static void
nfp_flow_pre_tun_v6_process(struct nfp_fl_act_pre_tun *pre_tun,
const uint8_t ipv6_dst[])
{
@@ -1885,7 +1885,7 @@ nfp_flower_del_tun_neigh_v4(struct nfp_app_fw_flower *app_fw_flower,
return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
}
-__rte_unused static int
+static int
nfp_flower_add_tun_neigh_v6_encap(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_fl_rule_metadata *nfp_flow_meta,
struct nfp_fl_tun *tun,
@@ -2013,6 +2013,42 @@ nfp_flow_action_vxlan_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
+ tun, &eth->hdr, ipv4);
}
+static int
+nfp_flow_action_vxlan_encap_v6(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct vxlan_data *vxlan_data,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ struct nfp_fl_act_pre_tun *pre_tun;
+ struct nfp_fl_act_set_tun *set_tun;
+ const struct rte_flow_item_eth *eth;
+ const struct rte_flow_item_ipv6 *ipv6;
+ const struct rte_flow_item_vxlan *vxlan;
+ size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
+
+ eth = (const struct rte_flow_item_eth *)vxlan_data->items[0].spec;
+ ipv6 = (const struct rte_flow_item_ipv6 *)vxlan_data->items[1].spec;
+ vxlan = (const struct rte_flow_item_vxlan *)vxlan_data->items[3].spec;
+
+ pre_tun = (struct nfp_fl_act_pre_tun *)actions;
+ memset(pre_tun, 0, act_pre_size);
+ nfp_flow_pre_tun_v6_process(pre_tun, ipv6->hdr.dst_addr);
+
+ set_tun = (struct nfp_fl_act_set_tun *)(act_data + act_pre_size);
+ memset(set_tun, 0, act_set_size);
+ nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_VXLAN, vxlan->hdr.vx_vni,
+ ipv6->hdr.hop_limits,
+ (ipv6->hdr.vtc_flow >> RTE_IPV6_HDR_TC_SHIFT) & 0xff);
+ set_tun->tun_flags = vxlan->hdr.vx_flags;
+
+ /* Send the tunnel neighbor cmsg to fw */
+ return nfp_flower_add_tun_neigh_v6_encap(app_fw_flower, nfp_flow_meta,
+ tun, &eth->hdr, ipv6);
+}
+
static int
nfp_flow_action_vxlan_encap(struct nfp_app_fw_flower *app_fw_flower,
char *act_data,
@@ -2027,7 +2063,8 @@ nfp_flow_action_vxlan_encap(struct nfp_app_fw_flower *app_fw_flower,
vxlan_data = action->conf;
if (vxlan_data->items[0].type != RTE_FLOW_ITEM_TYPE_ETH ||
- vxlan_data->items[1].type != RTE_FLOW_ITEM_TYPE_IPV4 ||
+ (vxlan_data->items[1].type != RTE_FLOW_ITEM_TYPE_IPV4 &&
+ vxlan_data->items[1].type != RTE_FLOW_ITEM_TYPE_IPV6) ||
vxlan_data->items[2].type != RTE_FLOW_ITEM_TYPE_UDP ||
vxlan_data->items[3].type != RTE_FLOW_ITEM_TYPE_VXLAN ||
vxlan_data->items[4].type != RTE_FLOW_ITEM_TYPE_END) {
@@ -2048,8 +2085,9 @@ nfp_flow_action_vxlan_encap(struct nfp_app_fw_flower *app_fw_flower,
if (vxlan_data->items[1].type == RTE_FLOW_ITEM_TYPE_IPV4)
return nfp_flow_action_vxlan_encap_v4(app_fw_flower, act_data,
actions, vxlan_data, nfp_flow_meta, tun);
-
- return 0;
+ else
+ return nfp_flow_action_vxlan_encap_v6(app_fw_flower, act_data,
+ actions, vxlan_data, nfp_flow_meta, tun);
}
static int
--
2.29.3
^ permalink raw reply [flat|nested] 88+ messages in thread
* [PATCH v3 08/26] net/nfp: prepare for IPv4 UDP tunnel decap flow action
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
` (6 preceding siblings ...)
2022-10-25 7:58 ` [PATCH v3 07/26] net/nfp: support IPv6 " Chaoyong He
@ 2022-10-25 7:59 ` Chaoyong He
2022-10-25 7:59 ` [PATCH v3 09/26] net/nfp: prepare for IPv6 " Chaoyong He
` (18 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:59 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the related data structures and functions to prepare for
the decap action of IPv4 UDP tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 1 +
drivers/net/nfp/flower/nfp_flower_cmsg.c | 118 ++++++
drivers/net/nfp/flower/nfp_flower_cmsg.h | 94 +++++
drivers/net/nfp/nfp_flow.c | 461 ++++++++++++++++++++++-
drivers/net/nfp/nfp_flow.h | 17 +
5 files changed, 676 insertions(+), 15 deletions(-)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index ff97787bd9..5ccfd61336 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -40,6 +40,7 @@ vxlan = Y
[rte_flow actions]
count = Y
drop = Y
+jump = Y
of_pop_vlan = Y
of_push_vlan = Y
of_set_vlan_pcp = Y
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c
index 8983178378..f18f3de042 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.c
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c
@@ -304,3 +304,121 @@ nfp_flower_cmsg_tun_neigh_v6_rule(struct nfp_app_fw_flower *app_fw_flower,
return 0;
}
+
+int
+nfp_flower_cmsg_tun_off_v4(struct nfp_app_fw_flower *app_fw_flower)
+{
+ uint16_t cnt;
+ uint32_t count = 0;
+ struct rte_mbuf *mbuf;
+ struct nfp_flow_priv *priv;
+ struct nfp_ipv4_addr_entry *entry;
+ struct nfp_flower_cmsg_tun_ipv4_addr *msg;
+
+ mbuf = rte_pktmbuf_alloc(app_fw_flower->ctrl_pktmbuf_pool);
+ if (mbuf == NULL) {
+ PMD_DRV_LOG(DEBUG, "Failed to alloc mbuf for v4 tun addr");
+ return -ENOMEM;
+ }
+
+ msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_TUN_IPS, sizeof(*msg));
+
+ priv = app_fw_flower->flow_priv;
+ rte_spinlock_lock(&priv->ipv4_off_lock);
+ LIST_FOREACH(entry, &priv->ipv4_off_list, next) {
+ if (count >= NFP_FL_IPV4_ADDRS_MAX) {
+ rte_spinlock_unlock(&priv->ipv4_off_lock);
+ PMD_DRV_LOG(ERR, "IPv4 offload exceeds limit.");
+ return -ERANGE;
+ }
+ msg->ipv4_addr[count] = entry->ipv4_addr;
+ count++;
+ }
+ msg->count = rte_cpu_to_be_32(count);
+ rte_spinlock_unlock(&priv->ipv4_off_lock);
+
+ cnt = nfp_flower_ctrl_vnic_xmit(app_fw_flower, mbuf);
+ if (cnt == 0) {
+ PMD_DRV_LOG(ERR, "Send cmsg through ctrl vnic failed.");
+ rte_pktmbuf_free(mbuf);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int
+nfp_flower_cmsg_pre_tunnel_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ uint16_t mac_idx,
+ bool is_del)
+{
+ uint16_t cnt;
+ struct rte_mbuf *mbuf;
+ struct nfp_flower_meta_tci *meta_tci;
+ struct nfp_flower_cmsg_pre_tun_rule *msg;
+
+ mbuf = rte_pktmbuf_alloc(app_fw_flower->ctrl_pktmbuf_pool);
+ if (mbuf == NULL) {
+ PMD_DRV_LOG(DEBUG, "Failed to alloc mbuf for pre tunnel rule");
+ return -ENOMEM;
+ }
+
+ msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_PRE_TUN_RULE, sizeof(*msg));
+
+ meta_tci = (struct nfp_flower_meta_tci *)((char *)nfp_flow_meta +
+ sizeof(struct nfp_fl_rule_metadata));
+ if (meta_tci->tci)
+ msg->vlan_tci = meta_tci->tci;
+ else
+ msg->vlan_tci = 0xffff;
+
+ if (is_del)
+ msg->flags = rte_cpu_to_be_32(NFP_TUN_PRE_TUN_RULE_DEL);
+
+ msg->port_idx = rte_cpu_to_be_16(mac_idx);
+ msg->host_ctx_id = nfp_flow_meta->host_ctx_id;
+
+ cnt = nfp_flower_ctrl_vnic_xmit(app_fw_flower, mbuf);
+ if (cnt == 0) {
+ PMD_DRV_LOG(ERR, "Send cmsg through ctrl vnic failed.");
+ rte_pktmbuf_free(mbuf);
+ return -EIO;
+ }
+
+ return 0;
+}
+
+int
+nfp_flower_cmsg_tun_mac_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_ether_addr *mac,
+ uint16_t mac_idx,
+ bool is_del)
+{
+ uint16_t cnt;
+ struct rte_mbuf *mbuf;
+ struct nfp_flower_cmsg_tun_mac *msg;
+
+ mbuf = rte_pktmbuf_alloc(app_fw_flower->ctrl_pktmbuf_pool);
+ if (mbuf == NULL) {
+ PMD_DRV_LOG(DEBUG, "Failed to alloc mbuf for tunnel mac");
+ return -ENOMEM;
+ }
+
+ msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_TUN_MAC, sizeof(*msg));
+
+ msg->count = rte_cpu_to_be_16(1);
+ msg->index = rte_cpu_to_be_16(mac_idx);
+ rte_ether_addr_copy(mac, &msg->addr);
+ if (is_del)
+ msg->flags = rte_cpu_to_be_16(NFP_TUN_MAC_OFFLOAD_DEL_FLAG);
+
+ cnt = nfp_flower_ctrl_vnic_xmit(app_fw_flower, mbuf);
+ if (cnt == 0) {
+ PMD_DRV_LOG(ERR, "Send cmsg through ctrl vnic failed.");
+ rte_pktmbuf_free(mbuf);
+ return -EIO;
+ }
+
+ return 0;
+}
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index d1e0562cf9..0933dacfb1 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -195,6 +195,91 @@ struct nfp_flower_cmsg_tun_neigh_v6 {
struct nfp_flower_tun_neigh common;
};
+#define NFP_TUN_PRE_TUN_RULE_DEL (1 << 0)
+#define NFP_TUN_PRE_TUN_IDX_BIT (1 << 3)
+#define NFP_TUN_PRE_TUN_IPV6_BIT (1 << 7)
+
+/*
+ * NFP_FLOWER_CMSG_TYPE_PRE_TUN_RULE
+ * Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
+ * -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +---------------------------------------------------------------+
+ * 0 | FLAGS |
+ * +---------------------------------------------------------------+
+ * 1 | MAC_IDX | VLAN_ID |
+ * +---------------------------------------------------------------+
+ * 2 | HOST_CTX |
+ * +---------------------------------------------------------------+
+ */
+struct nfp_flower_cmsg_pre_tun_rule {
+ rte_be32_t flags;
+ rte_be16_t port_idx;
+ rte_be16_t vlan_tci;
+ rte_be32_t host_ctx_id;
+};
+
+#define NFP_TUN_MAC_OFFLOAD_DEL_FLAG 0x2
+
+/*
+ * NFP_FLOWER_CMSG_TYPE_TUN_MAC
+ * Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
+ * -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * Word +-----------------------+---+-+-+---------------+---------------+
+ * 0 | spare |NBI|D|F| Amount of MAC’s in this msg |
+ * +---------------+-------+---+-+-+---------------+---------------+
+ * 1 | Index 0 | MAC[0] | MAC[1] |
+ * +---------------+---------------+---------------+---------------+
+ * 2 | MAC[2] | MAC[3] | MAC[4] | MAC[5] |
+ * +---------------+---------------+---------------+---------------+
+ * 3 | Index 1 | MAC[0] | MAC[1] |
+ * +---------------+---------------+---------------+---------------+
+ * 4 | MAC[2] | MAC[3] | MAC[4] | MAC[5] |
+ * +---------------+---------------+---------------+---------------+
+ * ...
+ * +---------------+---------------+---------------+---------------+
+ * 2N-1 | Index N | MAC[0] | MAC[1] |
+ * +---------------+---------------+---------------+---------------+
+ * 2N | MAC[2] | MAC[3] | MAC[4] | MAC[5] |
+ * +---------------+---------------+---------------+---------------+
+ *
+ * F: Flush bit. Set if entire table must be flushed. Rest of info in cmsg
+ * will be ignored. Not implemented.
+ * D: Delete bit. Set if entry must be deleted instead of added
+ * NBI: Network Block Interface. Set to 0
+ * The amount of MACs per control message is limited only by the packet
+ * buffer size. A 2048B buffer can fit 253 MAC addresses and a 10240B
+ * buffer 1277 MAC addresses.
+ */
+struct nfp_flower_cmsg_tun_mac {
+ rte_be16_t flags;
+ rte_be16_t count; /**< Should always be 1 */
+ rte_be16_t index;
+ struct rte_ether_addr addr;
+};
+
+#define NFP_FL_IPV4_ADDRS_MAX 32
+
+/*
+ * NFP_FLOWER_CMSG_TYPE_TUN_IPS
+ * Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
+ * -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +---------------------------------------------------------------+
+ * 0 | Number of IP Addresses |
+ * +---------------------------------------------------------------+
+ * 1 | IP Address #1 |
+ * +---------------------------------------------------------------+
+ * 2 | IP Address #2 |
+ * +---------------------------------------------------------------+
+ * | ... |
+ * +---------------------------------------------------------------+
+ * 32 | IP Address #32 |
+ * +---------------------------------------------------------------+
+ */
+struct nfp_flower_cmsg_tun_ipv4_addr {
+ rte_be32_t count;
+ rte_be32_t ipv4_addr[NFP_FL_IPV4_ADDRS_MAX];
+};
+
/*
* NFP_FLOWER_CMSG_TYPE_FLOW_STATS
* Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
@@ -716,5 +801,14 @@ int nfp_flower_cmsg_tun_neigh_v4_rule(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_flower_cmsg_tun_neigh_v4 *payload);
int nfp_flower_cmsg_tun_neigh_v6_rule(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_flower_cmsg_tun_neigh_v6 *payload);
+int nfp_flower_cmsg_tun_off_v4(struct nfp_app_fw_flower *app_fw_flower);
+int nfp_flower_cmsg_pre_tunnel_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ uint16_t mac_idx,
+ bool is_del);
+int nfp_flower_cmsg_tun_mac_rule(struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_ether_addr *mac,
+ uint16_t mac_idx,
+ bool is_del);
#endif /* _NFP_CMSG_H_ */
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 9ee02b0fb9..c088d24413 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -47,7 +47,8 @@ struct nfp_flow_item_proc {
/* Size in bytes for @p mask_support and @p mask_default. */
const unsigned int mask_sz;
/* Merge a pattern item into a flow rule handle. */
- int (*merge)(struct rte_flow *nfp_flow,
+ int (*merge)(struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -63,6 +64,12 @@ struct nfp_mask_id_entry {
uint8_t mask_id;
};
+struct nfp_pre_tun_entry {
+ uint16_t mac_index;
+ uint16_t ref_cnt;
+ uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
+} __rte_aligned(32);
+
static inline struct nfp_flow_priv *
nfp_flow_dev_to_priv(struct rte_eth_dev *dev)
{
@@ -406,6 +413,83 @@ nfp_stats_id_free(struct nfp_flow_priv *priv, uint32_t ctx)
return 0;
}
+__rte_unused static int
+nfp_tun_add_ipv4_off(struct nfp_app_fw_flower *app_fw_flower,
+ rte_be32_t ipv4)
+{
+ struct nfp_flow_priv *priv;
+ struct nfp_ipv4_addr_entry *entry;
+ struct nfp_ipv4_addr_entry *tmp_entry;
+
+ priv = app_fw_flower->flow_priv;
+
+ rte_spinlock_lock(&priv->ipv4_off_lock);
+ LIST_FOREACH(entry, &priv->ipv4_off_list, next) {
+ if (entry->ipv4_addr == ipv4) {
+ entry->ref_count++;
+ rte_spinlock_unlock(&priv->ipv4_off_lock);
+ return 0;
+ }
+ }
+ rte_spinlock_unlock(&priv->ipv4_off_lock);
+
+ tmp_entry = rte_zmalloc("nfp_ipv4_off", sizeof(struct nfp_ipv4_addr_entry), 0);
+ if (tmp_entry == NULL) {
+ PMD_DRV_LOG(ERR, "Mem error when offloading IP address.");
+ return -ENOMEM;
+ }
+
+ tmp_entry->ipv4_addr = ipv4;
+ tmp_entry->ref_count = 1;
+
+ rte_spinlock_lock(&priv->ipv4_off_lock);
+ LIST_INSERT_HEAD(&priv->ipv4_off_list, tmp_entry, next);
+ rte_spinlock_unlock(&priv->ipv4_off_lock);
+
+ return nfp_flower_cmsg_tun_off_v4(app_fw_flower);
+}
+
+static int
+nfp_tun_del_ipv4_off(struct nfp_app_fw_flower *app_fw_flower,
+ rte_be32_t ipv4)
+{
+ struct nfp_flow_priv *priv;
+ struct nfp_ipv4_addr_entry *entry;
+
+ priv = app_fw_flower->flow_priv;
+
+ rte_spinlock_lock(&priv->ipv4_off_lock);
+ LIST_FOREACH(entry, &priv->ipv4_off_list, next) {
+ if (entry->ipv4_addr == ipv4) {
+ entry->ref_count--;
+ if (entry->ref_count == 0) {
+ LIST_REMOVE(entry, next);
+ rte_free(entry);
+ rte_spinlock_unlock(&priv->ipv4_off_lock);
+ return nfp_flower_cmsg_tun_off_v4(app_fw_flower);
+ }
+ break;
+ }
+ }
+ rte_spinlock_unlock(&priv->ipv4_off_lock);
+
+ return 0;
+}
+
+static int
+nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr,
+ struct rte_flow *nfp_flow)
+{
+ int ret;
+ struct nfp_flower_ipv4_udp_tun *udp4;
+
+ udp4 = (struct nfp_flower_ipv4_udp_tun *)(nfp_flow->payload.mask_data -
+ sizeof(struct nfp_flower_ipv4_udp_tun));
+ ret = nfp_tun_del_ipv4_off(repr->app_fw_flower, udp4->ipv4.dst);
+
+ return ret;
+}
+
static void
nfp_flower_compile_meta_tci(char *mbuf_off, struct nfp_fl_key_ls *key_layer)
{
@@ -635,6 +719,9 @@ nfp_flow_key_layers_calculate_actions(const struct rte_flow_action actions[],
case RTE_FLOW_ACTION_TYPE_COUNT:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_COUNT detected");
break;
+ case RTE_FLOW_ACTION_TYPE_JUMP:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_JUMP detected");
+ break;
case RTE_FLOW_ACTION_TYPE_PORT_ID:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_PORT_ID detected");
key_ls->act_size += sizeof(struct nfp_fl_act_output);
@@ -786,7 +873,8 @@ nfp_flow_is_tunnel(struct rte_flow *nfp_flow)
}
static int
-nfp_flow_merge_eth(__rte_unused struct rte_flow *nfp_flow,
+nfp_flow_merge_eth(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ __rte_unused struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -823,7 +911,8 @@ nfp_flow_merge_eth(__rte_unused struct rte_flow *nfp_flow,
}
static int
-nfp_flow_merge_vlan(struct rte_flow *nfp_flow,
+nfp_flow_merge_vlan(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
__rte_unused char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -853,7 +942,8 @@ nfp_flow_merge_vlan(struct rte_flow *nfp_flow,
}
static int
-nfp_flow_merge_ipv4(struct rte_flow *nfp_flow,
+nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -914,7 +1004,8 @@ nfp_flow_merge_ipv4(struct rte_flow *nfp_flow,
}
static int
-nfp_flow_merge_ipv6(struct rte_flow *nfp_flow,
+nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -979,7 +1070,8 @@ nfp_flow_merge_ipv6(struct rte_flow *nfp_flow,
}
static int
-nfp_flow_merge_tcp(struct rte_flow *nfp_flow,
+nfp_flow_merge_tcp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -1052,7 +1144,8 @@ nfp_flow_merge_tcp(struct rte_flow *nfp_flow,
}
static int
-nfp_flow_merge_udp(struct rte_flow *nfp_flow,
+nfp_flow_merge_udp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -1100,7 +1193,8 @@ nfp_flow_merge_udp(struct rte_flow *nfp_flow,
}
static int
-nfp_flow_merge_sctp(struct rte_flow *nfp_flow,
+nfp_flow_merge_sctp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -1142,7 +1236,8 @@ nfp_flow_merge_sctp(struct rte_flow *nfp_flow,
}
static int
-nfp_flow_merge_vxlan(struct rte_flow *nfp_flow,
+nfp_flow_merge_vxlan(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -1391,7 +1486,8 @@ nfp_flow_inner_item_get(const struct rte_flow_item items[],
}
static int
-nfp_flow_compile_item_proc(const struct rte_flow_item items[],
+nfp_flow_compile_item_proc(struct nfp_flower_representor *repr,
+ const struct rte_flow_item items[],
struct rte_flow *nfp_flow,
char **mbuf_off_exact,
char **mbuf_off_mask,
@@ -1402,6 +1498,7 @@ nfp_flow_compile_item_proc(const struct rte_flow_item items[],
bool continue_flag = true;
const struct rte_flow_item *item;
const struct nfp_flow_item_proc *proc_list;
+ struct nfp_app_fw_flower *app_fw_flower = repr->app_fw_flower;
proc_list = nfp_flow_item_proc_list;
for (item = items; item->type != RTE_FLOW_ITEM_TYPE_END && continue_flag; ++item) {
@@ -1437,14 +1534,14 @@ nfp_flow_compile_item_proc(const struct rte_flow_item items[],
break;
}
- ret = proc->merge(nfp_flow, mbuf_off_exact, item,
+ ret = proc->merge(app_fw_flower, nfp_flow, mbuf_off_exact, item,
proc, false, is_outer_layer);
if (ret != 0) {
PMD_DRV_LOG(ERR, "nfp flow item %d exact merge failed", item->type);
break;
}
- ret = proc->merge(nfp_flow, mbuf_off_mask, item,
+ ret = proc->merge(app_fw_flower, nfp_flow, mbuf_off_mask, item,
proc, true, is_outer_layer);
if (ret != 0) {
PMD_DRV_LOG(ERR, "nfp flow item %d mask merge failed", item->type);
@@ -1458,7 +1555,7 @@ nfp_flow_compile_item_proc(const struct rte_flow_item items[],
}
static int
-nfp_flow_compile_items(__rte_unused struct nfp_flower_representor *representor,
+nfp_flow_compile_items(struct nfp_flower_representor *representor,
const struct rte_flow_item items[],
struct rte_flow *nfp_flow)
{
@@ -1489,7 +1586,7 @@ nfp_flow_compile_items(__rte_unused struct nfp_flower_representor *representor,
is_outer_layer = false;
/* Go over items */
- ret = nfp_flow_compile_item_proc(loop_item, nfp_flow,
+ ret = nfp_flow_compile_item_proc(representor, loop_item, nfp_flow,
&mbuf_off_exact, &mbuf_off_mask, is_outer_layer);
if (ret != 0) {
PMD_DRV_LOG(ERR, "nfp flow item compile failed.");
@@ -1498,7 +1595,7 @@ nfp_flow_compile_items(__rte_unused struct nfp_flower_representor *representor,
/* Go over inner items */
if (is_tun_flow) {
- ret = nfp_flow_compile_item_proc(items, nfp_flow,
+ ret = nfp_flow_compile_item_proc(representor, items, nfp_flow,
&mbuf_off_exact, &mbuf_off_mask, true);
if (ret != 0) {
PMD_DRV_LOG(ERR, "nfp flow outer item compile failed.");
@@ -1873,6 +1970,59 @@ nfp_flower_add_tun_neigh_v4_encap(struct nfp_app_fw_flower *app_fw_flower,
return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
}
+__rte_unused static int
+nfp_flower_add_tun_neigh_v4_decap(struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow)
+{
+ struct nfp_fl_tun *tmp;
+ struct nfp_fl_tun *tun;
+ struct nfp_flow_priv *priv;
+ struct nfp_flower_ipv4 *ipv4;
+ struct nfp_flower_mac_mpls *eth;
+ struct nfp_flower_in_port *port;
+ struct nfp_flower_meta_tci *meta_tci;
+ struct nfp_flower_cmsg_tun_neigh_v4 payload;
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ port = (struct nfp_flower_in_port *)(meta_tci + 1);
+ eth = (struct nfp_flower_mac_mpls *)(port + 1);
+
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+ ipv4 = (struct nfp_flower_ipv4 *)((char *)eth +
+ sizeof(struct nfp_flower_mac_mpls) +
+ sizeof(struct nfp_flower_tp_ports));
+ else
+ ipv4 = (struct nfp_flower_ipv4 *)((char *)eth +
+ sizeof(struct nfp_flower_mac_mpls));
+
+ tun = &nfp_flow->tun;
+ tun->payload.v6_flag = 0;
+ tun->payload.dst.dst_ipv4 = ipv4->ipv4_src;
+ tun->payload.src.src_ipv4 = ipv4->ipv4_dst;
+ memcpy(tun->payload.dst_addr, eth->mac_src, RTE_ETHER_ADDR_LEN);
+ memcpy(tun->payload.src_addr, eth->mac_dst, RTE_ETHER_ADDR_LEN);
+
+ tun->ref_cnt = 1;
+ priv = app_fw_flower->flow_priv;
+ LIST_FOREACH(tmp, &priv->nn_list, next) {
+ if (memcmp(&tmp->payload, &tun->payload, sizeof(struct nfp_fl_tun_entry)) == 0) {
+ tmp->ref_cnt++;
+ return 0;
+ }
+ }
+
+ LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+
+ memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v4));
+ payload.dst_ipv4 = ipv4->ipv4_src;
+ payload.src_ipv4 = ipv4->ipv4_dst;
+ memcpy(payload.common.dst_mac, eth->mac_src, RTE_ETHER_ADDR_LEN);
+ memcpy(payload.common.src_mac, eth->mac_dst, RTE_ETHER_ADDR_LEN);
+ payload.common.port_id = port->in_port;
+
+ return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
+}
+
static int
nfp_flower_del_tun_neigh_v4(struct nfp_app_fw_flower *app_fw_flower,
rte_be32_t ipv4)
@@ -2090,6 +2240,200 @@ nfp_flow_action_vxlan_encap(struct nfp_app_fw_flower *app_fw_flower,
actions, vxlan_data, nfp_flow_meta, tun);
}
+static struct nfp_pre_tun_entry *
+nfp_pre_tun_table_search(struct nfp_flow_priv *priv,
+ char *hash_data,
+ uint32_t hash_len)
+{
+ int index;
+ uint32_t hash_key;
+ struct nfp_pre_tun_entry *mac_index;
+
+ hash_key = rte_jhash(hash_data, hash_len, priv->hash_seed);
+ index = rte_hash_lookup_data(priv->pre_tun_table, &hash_key, (void **)&mac_index);
+ if (index < 0) {
+ PMD_DRV_LOG(DEBUG, "Data NOT found in the hash table");
+ return NULL;
+ }
+
+ return mac_index;
+}
+
+static bool
+nfp_pre_tun_table_add(struct nfp_flow_priv *priv,
+ char *hash_data,
+ uint32_t hash_len)
+{
+ int ret;
+ uint32_t hash_key;
+
+ hash_key = rte_jhash(hash_data, hash_len, priv->hash_seed);
+ ret = rte_hash_add_key_data(priv->pre_tun_table, &hash_key, hash_data);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Add to pre tunnel table failed");
+ return false;
+ }
+
+ return true;
+}
+
+static bool
+nfp_pre_tun_table_delete(struct nfp_flow_priv *priv,
+ char *hash_data,
+ uint32_t hash_len)
+{
+ int ret;
+ uint32_t hash_key;
+
+ hash_key = rte_jhash(hash_data, hash_len, priv->hash_seed);
+ ret = rte_hash_del_key(priv->pre_tun_table, &hash_key);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "Delete from pre tunnel table failed");
+ return false;
+ }
+
+ return true;
+}
+
+__rte_unused static int
+nfp_pre_tun_table_check_add(struct nfp_flower_representor *repr,
+ uint16_t *index)
+{
+ uint16_t i;
+ uint32_t entry_size;
+ uint16_t mac_index = 1;
+ struct nfp_flow_priv *priv;
+ struct nfp_pre_tun_entry *entry;
+ struct nfp_pre_tun_entry *find_entry;
+
+ priv = repr->app_fw_flower->flow_priv;
+ if (priv->pre_tun_cnt >= NFP_TUN_PRE_TUN_RULE_LIMIT) {
+ PMD_DRV_LOG(ERR, "Pre tunnel table is full");
+ return -EINVAL;
+ }
+
+ entry_size = sizeof(struct nfp_pre_tun_entry);
+ entry = rte_zmalloc("nfp_pre_tun", entry_size, 0);
+ if (entry == NULL) {
+ PMD_DRV_LOG(ERR, "Memory alloc failed for pre tunnel table");
+ return -ENOMEM;
+ }
+
+ entry->ref_cnt = 1U;
+ memcpy(entry->mac_addr, repr->mac_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ /* 0 is considered a failed match */
+ for (i = 1; i < NFP_TUN_PRE_TUN_RULE_LIMIT; i++) {
+ if (priv->pre_tun_bitmap[i] == 0)
+ continue;
+ entry->mac_index = i;
+ find_entry = nfp_pre_tun_table_search(priv, (char *)entry, entry_size);
+ if (find_entry != NULL) {
+ find_entry->ref_cnt++;
+ *index = find_entry->mac_index;
+ rte_free(entry);
+ return 0;
+ }
+ }
+
+ for (i = 1; i < NFP_TUN_PRE_TUN_RULE_LIMIT; i++) {
+ if (priv->pre_tun_bitmap[i] == 0) {
+ priv->pre_tun_bitmap[i] = 1U;
+ mac_index = i;
+ break;
+ }
+ }
+
+ entry->mac_index = mac_index;
+ if (!nfp_pre_tun_table_add(priv, (char *)entry, entry_size)) {
+ rte_free(entry);
+ return -EINVAL;
+ }
+
+ *index = entry->mac_index;
+ priv->pre_tun_cnt++;
+ return 0;
+}
+
+static int
+nfp_pre_tun_table_check_del(struct nfp_flower_representor *repr,
+ struct rte_flow *nfp_flow)
+{
+ uint16_t i;
+ int ret = 0;
+ uint32_t entry_size;
+ uint16_t nfp_mac_idx;
+ struct nfp_flow_priv *priv;
+ struct nfp_pre_tun_entry *entry;
+ struct nfp_pre_tun_entry *find_entry = NULL;
+ struct nfp_fl_rule_metadata *nfp_flow_meta;
+
+ priv = repr->app_fw_flower->flow_priv;
+ if (priv->pre_tun_cnt == 1)
+ return 0;
+
+ entry_size = sizeof(struct nfp_pre_tun_entry);
+ entry = rte_zmalloc("nfp_pre_tun", entry_size, 0);
+ if (entry == NULL) {
+ PMD_DRV_LOG(ERR, "Memory alloc failed for pre tunnel table");
+ return -ENOMEM;
+ }
+
+ entry->ref_cnt = 1U;
+ memcpy(entry->mac_addr, repr->mac_addr.addr_bytes, RTE_ETHER_ADDR_LEN);
+
+ /* 0 is considered a failed match */
+ for (i = 1; i < NFP_TUN_PRE_TUN_RULE_LIMIT; i++) {
+ if (priv->pre_tun_bitmap[i] == 0)
+ continue;
+ entry->mac_index = i;
+ find_entry = nfp_pre_tun_table_search(priv, (char *)entry, entry_size);
+ if (find_entry != NULL) {
+ find_entry->ref_cnt--;
+ if (find_entry->ref_cnt != 0)
+ goto free_entry;
+ priv->pre_tun_bitmap[i] = 0;
+ break;
+ }
+ }
+
+ if (find_entry == NULL) {
+ ret = -ENOENT;
+ goto free_entry;
+ }
+
+ nfp_flow_meta = nfp_flow->payload.meta;
+ nfp_mac_idx = (find_entry->mac_index << 8) |
+ NFP_FLOWER_CMSG_PORT_TYPE_OTHER_PORT |
+ NFP_TUN_PRE_TUN_IDX_BIT;
+ ret = nfp_flower_cmsg_tun_mac_rule(repr->app_fw_flower, &repr->mac_addr,
+ nfp_mac_idx, true);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Send tunnel mac rule failed");
+ ret = -EINVAL;
+ goto free_entry;
+ }
+
+ ret = nfp_flower_cmsg_pre_tunnel_rule(repr->app_fw_flower, nfp_flow_meta,
+ nfp_mac_idx, true);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Send pre tunnel rule failed");
+ ret = -EINVAL;
+ goto free_entry;
+ }
+
+ find_entry->ref_cnt = 1U;
+ if (!nfp_pre_tun_table_delete(priv, (char *)find_entry, entry_size)) {
+ PMD_DRV_LOG(ERR, "Delete entry from pre tunnel table failed");
+ ret = -EINVAL;
+ goto free_entry;
+ }
+
+ rte_free(entry);
+ rte_free(find_entry);
+ priv->pre_tun_cnt--;
+ return 0;
+
+free_entry:
+ rte_free(entry);
+
+ return ret;
+}
+
static int
nfp_flow_compile_action(struct nfp_flower_representor *representor,
const struct rte_flow_action actions[],
@@ -2125,6 +2469,9 @@ nfp_flow_compile_action(struct nfp_flower_representor *representor,
case RTE_FLOW_ACTION_TYPE_COUNT:
PMD_DRV_LOG(DEBUG, "Process RTE_FLOW_ACTION_TYPE_COUNT");
break;
+ case RTE_FLOW_ACTION_TYPE_JUMP:
+ PMD_DRV_LOG(DEBUG, "Process RTE_FLOW_ACTION_TYPE_JUMP");
+ break;
case RTE_FLOW_ACTION_TYPE_PORT_ID:
PMD_DRV_LOG(DEBUG, "Process RTE_FLOW_ACTION_TYPE_PORT_ID");
ret = nfp_flow_action_output(position, action, nfp_flow_meta);
@@ -2561,6 +2908,15 @@ nfp_flow_destroy(struct rte_eth_dev *dev,
/* Delete the entry from nn table */
ret = nfp_flower_del_tun_neigh(app_fw_flower, nfp_flow);
break;
+ case NFP_FLOW_DECAP:
+ /* Delete the entry from nn table */
+ ret = nfp_flower_del_tun_neigh(app_fw_flower, nfp_flow);
+ if (ret != 0)
+ goto exit;
+
+ /* Delete the entry in pre tunnel table */
+ ret = nfp_pre_tun_table_check_del(representor, nfp_flow);
+ break;
default:
PMD_DRV_LOG(ERR, "Invalid nfp flow type %d.", nfp_flow->type);
ret = -EINVAL;
@@ -2570,6 +2926,10 @@ nfp_flow_destroy(struct rte_eth_dev *dev,
if (ret != 0)
goto exit;
+ /* Delete the ip off */
+ if (nfp_flow_is_tunnel(nfp_flow))
+ nfp_tun_check_ip_off_del(representor, nfp_flow);
+
/* Delete the flow from hardware */
if (nfp_flow->install_flag) {
ret = nfp_flower_cmsg_flow_delete(app_fw_flower, nfp_flow);
@@ -2703,6 +3063,49 @@ nfp_flow_tunnel_item_release(__rte_unused struct rte_eth_dev *dev,
return 0;
}
+static int
+nfp_flow_tunnel_decap_set(__rte_unused struct rte_eth_dev *dev,
+ struct rte_flow_tunnel *tunnel,
+ struct rte_flow_action **pmd_actions,
+ uint32_t *num_of_actions,
+ __rte_unused struct rte_flow_error *err)
+{
+ struct rte_flow_action *nfp_action;
+
+ nfp_action = rte_zmalloc("nfp_tun_action", sizeof(struct rte_flow_action), 0);
+ if (nfp_action == NULL) {
+ PMD_DRV_LOG(ERR, "Alloc memory for nfp tunnel action failed.");
+ return -ENOMEM;
+ }
+
+ switch (tunnel->type) {
+ default:
+ *pmd_actions = NULL;
+ *num_of_actions = 0;
+ rte_free(nfp_action);
+ break;
+ }
+
+ return 0;
+}
+
+static int
+nfp_flow_tunnel_action_decap_release(__rte_unused struct rte_eth_dev *dev,
+ struct rte_flow_action *pmd_actions,
+ uint32_t num_of_actions,
+ __rte_unused struct rte_flow_error *err)
+{
+ uint32_t i;
+ struct rte_flow_action *nfp_action;
+
+ for (i = 0; i < num_of_actions; i++) {
+ nfp_action = &pmd_actions[i];
+ rte_free(nfp_action);
+ }
+
+ return 0;
+}
+
static const struct rte_flow_ops nfp_flow_ops = {
.validate = nfp_flow_validate,
.create = nfp_flow_create,
@@ -2711,6 +3114,8 @@ static const struct rte_flow_ops nfp_flow_ops = {
.query = nfp_flow_query,
.tunnel_match = nfp_flow_tunnel_match,
.tunnel_item_release = nfp_flow_tunnel_item_release,
+ .tunnel_decap_set = nfp_flow_tunnel_decap_set,
+ .tunnel_action_decap_release = nfp_flow_tunnel_action_decap_release,
};
int
@@ -2755,6 +3160,15 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev)
.extra_flag = RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY,
};
+ struct rte_hash_parameters pre_tun_hash_params = {
+ .name = "pre_tunnel_table",
+ .entries = 32,
+ .hash_func = rte_jhash,
+ .socket_id = rte_socket_id(),
+ .key_len = sizeof(uint32_t),
+ .extra_flag = RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY,
+ };
+
ctx_count = nfp_rtsym_read_le(pf_dev->sym_tbl,
"CONFIG_FC_HOST_CTX_COUNT", &ret);
if (ret < 0) {
@@ -2835,11 +3249,27 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev)
goto free_mask_table;
}
+ /* pre tunnel table */
+ priv->pre_tun_cnt = 1;
+ pre_tun_hash_params.hash_func_init_val = priv->hash_seed;
+ priv->pre_tun_table = rte_hash_create(&pre_tun_hash_params);
+ if (priv->pre_tun_table == NULL) {
+ PMD_INIT_LOG(ERR, "Pre tunnel table creation failed");
+ ret = -ENOMEM;
+ goto free_flow_table;
+ }
+
+ /* ipv4 off list */
+ rte_spinlock_init(&priv->ipv4_off_lock);
+ LIST_INIT(&priv->ipv4_off_list);
+
/* neighbor next list */
LIST_INIT(&priv->nn_list);
return 0;
+free_flow_table:
+ rte_hash_free(priv->flow_table);
free_mask_table:
rte_free(priv->mask_table);
free_stats:
@@ -2863,6 +3293,7 @@ nfp_flow_priv_uninit(struct nfp_pf_dev *pf_dev)
app_fw_flower = NFP_PRIV_TO_APP_FW_FLOWER(pf_dev->app_fw_priv);
priv = app_fw_flower->flow_priv;
+ rte_hash_free(priv->pre_tun_table);
rte_hash_free(priv->flow_table);
rte_hash_free(priv->mask_table);
rte_free(priv->stats);
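The pre-tunnel table added above hands out small MAC indices: index 0 is reserved as a failed-match value, an existing entry for the same MAC just gains a reference, and otherwise the first free slot in `pre_tun_bitmap` is claimed. A simplified, self-contained sketch of that allocation scheme (arrays stand in for the driver's `rte_hash` table; all names here are illustrative, not the driver's API):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PRE_TUN_RULE_LIMIT 32   /* mirrors NFP_TUN_PRE_TUN_RULE_LIMIT */

struct pre_tun_entry {
    uint16_t mac_index;          /* 0 is never a valid index */
    uint16_t ref_cnt;
    uint8_t mac_addr[6];
};

static uint8_t pre_tun_bitmap[PRE_TUN_RULE_LIMIT];
static struct pre_tun_entry pre_tun_tbl[PRE_TUN_RULE_LIMIT];

/* Return the index for @mac, reusing an existing entry when possible.
 * Returns 0 when the table is full (0 means "failed match"). */
static uint16_t pre_tun_index_get(const uint8_t mac[6])
{
    uint16_t i;

    /* First pass: a used slot with the same MAC just gains a reference. */
    for (i = 1; i < PRE_TUN_RULE_LIMIT; i++) {
        if (pre_tun_bitmap[i] == 0)
            continue;
        if (memcmp(pre_tun_tbl[i].mac_addr, mac, 6) == 0) {
            pre_tun_tbl[i].ref_cnt++;
            return i;
        }
    }

    /* Second pass: claim the first free index. */
    for (i = 1; i < PRE_TUN_RULE_LIMIT; i++) {
        if (pre_tun_bitmap[i] == 0) {
            pre_tun_bitmap[i] = 1;
            pre_tun_tbl[i].mac_index = i;
            pre_tun_tbl[i].ref_cnt = 1;
            memcpy(pre_tun_tbl[i].mac_addr, mac, 6);
            return i;
        }
    }

    return 0; /* table full */
}
```

The two-pass structure matches the patch: lookup by content first, then bitmap allocation, so repeated decap flows on the same representor share one firmware MAC rule.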
diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h
index 892dbc08f1..f536da2650 100644
--- a/drivers/net/nfp/nfp_flow.h
+++ b/drivers/net/nfp/nfp_flow.h
@@ -6,6 +6,7 @@
#ifndef _NFP_FLOW_H_
#define _NFP_FLOW_H_
+#include <sys/queue.h>
#include <rte_bitops.h>
#include <ethdev_driver.h>
@@ -93,6 +94,7 @@ enum nfp_flower_tun_type {
enum nfp_flow_type {
NFP_FLOW_COMMON,
NFP_FLOW_ENCAP,
+ NFP_FLOW_DECAP,
};
struct nfp_fl_key_ls {
@@ -169,6 +171,14 @@ struct nfp_fl_stats {
uint64_t bytes;
};
+struct nfp_ipv4_addr_entry {
+ LIST_ENTRY(nfp_ipv4_addr_entry) next;
+ rte_be32_t ipv4_addr;
+ int ref_count;
+};
+
+#define NFP_TUN_PRE_TUN_RULE_LIMIT 32
+
struct nfp_flow_priv {
uint32_t hash_seed; /**< Hash seed for hash tables in this structure. */
uint64_t flower_version; /**< Flow version, always increase. */
@@ -184,6 +194,13 @@ struct nfp_flow_priv {
struct nfp_fl_stats_id stats_ids; /**< The stats id ring. */
struct nfp_fl_stats *stats; /**< Store stats of flow. */
rte_spinlock_t stats_lock; /** < Lock the update of 'stats' field. */
+ /* pre tunnel rule */
+ uint16_t pre_tun_cnt; /**< The size of pre tunnel rule */
+ uint8_t pre_tun_bitmap[NFP_TUN_PRE_TUN_RULE_LIMIT]; /**< Bitmap of pre tunnel rule */
+ struct rte_hash *pre_tun_table; /**< Hash table to store pre tunnel rule */
+ /* IPv4 off */
+ LIST_HEAD(, nfp_ipv4_addr_entry) ipv4_off_list; /**< Store ipv4 off */
+ rte_spinlock_t ipv4_off_lock; /**< Lock the ipv4 off list */
/* neighbor next */
LIST_HEAD(, nfp_fl_tun)nn_list; /**< Store nn entry */
};
--
2.29.3
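The `nfp_tun_add_ipv4_off()`/`nfp_tun_del_ipv4_off()` pair in the patch above keeps a reference-counted list of offloaded tunnel addresses and only re-sends the full address table to the firmware when the set actually changes (first add or last delete). A minimal host-side model of that pattern, with a counter standing in for the `nfp_flower_cmsg_tun_off_v4()` firmware message (names are illustrative):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

struct ip_entry {
    uint32_t addr;            /* network-order IPv4 address */
    int ref_count;
    struct ip_entry *next;
};

static struct ip_entry *ip_list;
static int table_pushes;      /* counts simulated firmware table updates */

static int ip_off_add(uint32_t addr)
{
    struct ip_entry *e;

    for (e = ip_list; e != NULL; e = e->next) {
        if (e->addr == addr) {
            e->ref_count++;   /* already offloaded: no firmware update */
            return 0;
        }
    }

    e = calloc(1, sizeof(*e));
    if (e == NULL)
        return -1;
    e->addr = addr;
    e->ref_count = 1;
    e->next = ip_list;
    ip_list = e;

    table_pushes++;           /* new address: push table to firmware */
    return 0;
}

static int ip_off_del(uint32_t addr)
{
    struct ip_entry **pp;

    for (pp = &ip_list; *pp != NULL; pp = &(*pp)->next) {
        if ((*pp)->addr == addr) {
            if (--(*pp)->ref_count == 0) {
                struct ip_entry *dead = *pp;
                *pp = dead->next;
                free(dead);
                table_pushes++;   /* address removed: push table */
            }
            return 0;
        }
    }
    return 0;
}
```

Two flows decapsulating to the same tunnel address therefore cost a single firmware update, which is the point of the ref count (the real driver additionally guards the list with `ipv4_off_lock`).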
* [PATCH v3 09/26] net/nfp: prepare for IPv6 UDP tunnel decap flow action
@ 2022-10-25 7:59 ` Chaoyong He
From: Chaoyong He @ 2022-10-25 7:59 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the related data structures and functions to prepare for
the decap action of IPv6 UDP tunnels.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_cmsg.c | 42 +++++++
drivers/net/nfp/flower/nfp_flower_cmsg.h | 24 ++++
drivers/net/nfp/nfp_flow.c | 145 ++++++++++++++++++++++-
drivers/net/nfp/nfp_flow.h | 9 ++
4 files changed, 217 insertions(+), 3 deletions(-)
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c
index f18f3de042..76815cfe14 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.c
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c
@@ -347,6 +347,48 @@ nfp_flower_cmsg_tun_off_v4(struct nfp_app_fw_flower *app_fw_flower)
return 0;
}
+int
+nfp_flower_cmsg_tun_off_v6(struct nfp_app_fw_flower *app_fw_flower)
+{
+ uint16_t cnt;
+ uint32_t count = 0;
+ struct rte_mbuf *mbuf;
+ struct nfp_flow_priv *priv;
+ struct nfp_ipv6_addr_entry *entry;
+ struct nfp_flower_cmsg_tun_ipv6_addr *msg;
+
+ mbuf = rte_pktmbuf_alloc(app_fw_flower->ctrl_pktmbuf_pool);
+ if (mbuf == NULL) {
+ PMD_DRV_LOG(DEBUG, "Failed to alloc mbuf for v6 tun addr");
+ return -ENOMEM;
+ }
+
+ msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_TUN_IPS_V6, sizeof(*msg));
+
+ priv = app_fw_flower->flow_priv;
+ rte_spinlock_lock(&priv->ipv6_off_lock);
+ LIST_FOREACH(entry, &priv->ipv6_off_list, next) {
+ if (count >= NFP_FL_IPV6_ADDRS_MAX) {
+ rte_spinlock_unlock(&priv->ipv6_off_lock);
+ rte_pktmbuf_free(mbuf);
+ PMD_DRV_LOG(ERR, "IPv6 offload exceeds limit.");
+ return -ERANGE;
+ }
+ memcpy(&msg->ipv6_addr[count * 16], entry->ipv6_addr, 16UL);
+ count++;
+ }
+ msg->count = rte_cpu_to_be_32(count);
+ rte_spinlock_unlock(&priv->ipv6_off_lock);
+
+ cnt = nfp_flower_ctrl_vnic_xmit(app_fw_flower, mbuf);
+ if (cnt == 0) {
+ PMD_DRV_LOG(ERR, "Send cmsg through ctrl vnic failed.");
+ rte_pktmbuf_free(mbuf);
+ return -EIO;
+ }
+
+ return 0;
+}
+
int
nfp_flower_cmsg_pre_tunnel_rule(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_fl_rule_metadata *nfp_flow_meta,
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index 0933dacfb1..61f2f83fc9 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -280,6 +280,29 @@ struct nfp_flower_cmsg_tun_ipv4_addr {
rte_be32_t ipv4_addr[NFP_FL_IPV4_ADDRS_MAX];
};
+#define NFP_FL_IPV6_ADDRS_MAX 4
+
+/*
+ * NFP_FLOWER_CMSG_TYPE_TUN_IPS_V6
+ * Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
+ * -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +---------------------------------------------------------------+
+ * 0 | Number of IP Addresses |
+ * +---------------------------------------------------------------+
+ * 1 | IP Address1 #1 |
+ * +---------------------------------------------------------------+
+ * 2 | IP Address1 #2 |
+ * +---------------------------------------------------------------+
+ * | ... |
+ * +---------------------------------------------------------------+
+ * 16 | IP Address4 #4 |
+ * +---------------------------------------------------------------+
+ */
+struct nfp_flower_cmsg_tun_ipv6_addr {
+ rte_be32_t count;
+ uint8_t ipv6_addr[NFP_FL_IPV6_ADDRS_MAX * 16];
+};
+
/*
* NFP_FLOWER_CMSG_TYPE_FLOW_STATS
* Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
@@ -802,6 +825,7 @@ int nfp_flower_cmsg_tun_neigh_v4_rule(struct nfp_app_fw_flower *app_fw_flower,
int nfp_flower_cmsg_tun_neigh_v6_rule(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_flower_cmsg_tun_neigh_v6 *payload);
int nfp_flower_cmsg_tun_off_v4(struct nfp_app_fw_flower *app_fw_flower);
+int nfp_flower_cmsg_tun_off_v6(struct nfp_app_fw_flower *app_fw_flower);
int nfp_flower_cmsg_pre_tunnel_rule(struct nfp_app_fw_flower *app_fw_flower,
struct nfp_fl_rule_metadata *nfp_flow_meta,
uint16_t mac_idx,
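The `nfp_flower_cmsg_tun_ipv6_addr` message above packs up to `NFP_FL_IPV6_ADDRS_MAX` (4) addresses into one flat byte array, 16 bytes per address, behind a big-endian count word; `nfp_flower_cmsg_tun_off_v6()` fills it with `memcpy(&msg->ipv6_addr[count * 16], ...)`. A small host-side sketch of that packing and its bounds check (a simplified model, not the driver's API):

```c
#include <stdint.h>
#include <string.h>

/* Mirrors the layout of struct nfp_flower_cmsg_tun_ipv6_addr. */
#define NFP_FL_IPV6_ADDRS_MAX 4

struct cmsg_tun_ipv6_addr {
    uint32_t count;                                /* big-endian on the wire */
    uint8_t ipv6_addr[NFP_FL_IPV6_ADDRS_MAX * 16];
};

/* Append one IPv6 address at slot @count.
 * Returns the new count, or -1 when the message is already full
 * (the patch returns -ERANGE in that case). */
static int tun_ipv6_msg_add(struct cmsg_tun_ipv6_addr *msg,
                            uint32_t count, const uint8_t addr[16])
{
    if (count >= NFP_FL_IPV6_ADDRS_MAX)
        return -1;
    memcpy(&msg->ipv6_addr[count * 16], addr, 16);
    return (int)(count + 1);
}
```

The flat array (rather than an array of 16-byte structs) keeps the wire format free of any compiler-inserted padding.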
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index c088d24413..ad484b95b7 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -476,16 +476,95 @@ nfp_tun_del_ipv4_off(struct nfp_app_fw_flower *app_fw_flower,
return 0;
}
+__rte_unused static int
+nfp_tun_add_ipv6_off(struct nfp_app_fw_flower *app_fw_flower,
+ uint8_t ipv6[])
+{
+ struct nfp_flow_priv *priv;
+ struct nfp_ipv6_addr_entry *entry;
+ struct nfp_ipv6_addr_entry *tmp_entry;
+
+ priv = app_fw_flower->flow_priv;
+
+ rte_spinlock_lock(&priv->ipv6_off_lock);
+ LIST_FOREACH(entry, &priv->ipv6_off_list, next) {
+ if (!memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr))) {
+ entry->ref_count++;
+ rte_spinlock_unlock(&priv->ipv6_off_lock);
+ return 0;
+ }
+ }
+ rte_spinlock_unlock(&priv->ipv6_off_lock);
+
+ tmp_entry = rte_zmalloc("nfp_ipv6_off", sizeof(struct nfp_ipv6_addr_entry), 0);
+ if (tmp_entry == NULL) {
+ PMD_DRV_LOG(ERR, "Mem error when offloading IP6 address.");
+ return -ENOMEM;
+ }
+ memcpy(tmp_entry->ipv6_addr, ipv6, sizeof(tmp_entry->ipv6_addr));
+ tmp_entry->ref_count = 1;
+
+ rte_spinlock_lock(&priv->ipv6_off_lock);
+ LIST_INSERT_HEAD(&priv->ipv6_off_list, tmp_entry, next);
+ rte_spinlock_unlock(&priv->ipv6_off_lock);
+
+ return nfp_flower_cmsg_tun_off_v6(app_fw_flower);
+}
+
+static int
+nfp_tun_del_ipv6_off(struct nfp_app_fw_flower *app_fw_flower,
+ uint8_t ipv6[])
+{
+ struct nfp_flow_priv *priv;
+ struct nfp_ipv6_addr_entry *entry;
+
+ priv = app_fw_flower->flow_priv;
+
+ rte_spinlock_lock(&priv->ipv6_off_lock);
+ LIST_FOREACH(entry, &priv->ipv6_off_list, next) {
+ if (!memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr))) {
+ entry->ref_count--;
+ if (entry->ref_count == 0) {
+ LIST_REMOVE(entry, next);
+ rte_free(entry);
+ rte_spinlock_unlock(&priv->ipv6_off_lock);
+ return nfp_flower_cmsg_tun_off_v6(app_fw_flower);
+ }
+ break;
+ }
+ }
+ rte_spinlock_unlock(&priv->ipv6_off_lock);
+
+ return 0;
+}
+
static int
nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr,
struct rte_flow *nfp_flow)
{
int ret;
+ uint32_t key_layer2 = 0;
struct nfp_flower_ipv4_udp_tun *udp4;
+ struct nfp_flower_ipv6_udp_tun *udp6;
+ struct nfp_flower_meta_tci *meta_tci;
+ struct nfp_flower_ext_meta *ext_meta = NULL;
- udp4 = (struct nfp_flower_ipv4_udp_tun *)(nfp_flow->payload.mask_data -
- sizeof(struct nfp_flower_ipv4_udp_tun));
- ret = nfp_tun_del_ipv4_off(repr->app_fw_flower, udp4->ipv4.dst);
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
+
+ if (ext_meta != NULL)
+ key_layer2 = rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2);
+
+ if (key_layer2 & NFP_FLOWER_LAYER2_TUN_IPV6) {
+ udp6 = (struct nfp_flower_ipv6_udp_tun *)(nfp_flow->payload.mask_data -
+ sizeof(struct nfp_flower_ipv6_udp_tun));
+ ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, udp6->ipv6.ipv6_dst);
+ } else {
+ udp4 = (struct nfp_flower_ipv4_udp_tun *)(nfp_flow->payload.mask_data -
+ sizeof(struct nfp_flower_ipv4_udp_tun));
+ ret = nfp_tun_del_ipv4_off(repr->app_fw_flower, udp4->ipv4.dst);
+ }
return ret;
}
@@ -2078,6 +2157,59 @@ nfp_flower_add_tun_neigh_v6_encap(struct nfp_app_fw_flower *app_fw_flower,
return nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &payload);
}
+__rte_unused static int
+nfp_flower_add_tun_neigh_v6_decap(struct nfp_app_fw_flower *app_fw_flower,
+ struct rte_flow *nfp_flow)
+{
+ struct nfp_fl_tun *tmp;
+ struct nfp_fl_tun *tun;
+ struct nfp_flow_priv *priv;
+ struct nfp_flower_ipv6 *ipv6;
+ struct nfp_flower_mac_mpls *eth;
+ struct nfp_flower_in_port *port;
+ struct nfp_flower_meta_tci *meta_tci;
+ struct nfp_flower_cmsg_tun_neigh_v6 payload;
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ port = (struct nfp_flower_in_port *)(meta_tci + 1);
+ eth = (struct nfp_flower_mac_mpls *)(port + 1);
+
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP)
+ ipv6 = (struct nfp_flower_ipv6 *)((char *)eth +
+ sizeof(struct nfp_flower_mac_mpls) +
+ sizeof(struct nfp_flower_tp_ports));
+ else
+ ipv6 = (struct nfp_flower_ipv6 *)((char *)eth +
+ sizeof(struct nfp_flower_mac_mpls));
+
+ tun = &nfp_flow->tun;
+ tun->payload.v6_flag = 1;
+ memcpy(tun->payload.dst.dst_ipv6, ipv6->ipv6_src, sizeof(tun->payload.dst.dst_ipv6));
+ memcpy(tun->payload.src.src_ipv6, ipv6->ipv6_dst, sizeof(tun->payload.src.src_ipv6));
+ memcpy(tun->payload.dst_addr, eth->mac_src, RTE_ETHER_ADDR_LEN);
+ memcpy(tun->payload.src_addr, eth->mac_dst, RTE_ETHER_ADDR_LEN);
+
+ tun->ref_cnt = 1;
+ priv = app_fw_flower->flow_priv;
+ LIST_FOREACH(tmp, &priv->nn_list, next) {
+ if (memcmp(&tmp->payload, &tun->payload, sizeof(struct nfp_fl_tun_entry)) == 0) {
+ tmp->ref_cnt++;
+ return 0;
+ }
+ }
+
+ LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+
+ memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v6));
+ memcpy(payload.dst_ipv6, ipv6->ipv6_src, sizeof(payload.dst_ipv6));
+ memcpy(payload.src_ipv6, ipv6->ipv6_dst, sizeof(payload.src_ipv6));
+ memcpy(payload.common.dst_mac, eth->mac_src, RTE_ETHER_ADDR_LEN);
+ memcpy(payload.common.src_mac, eth->mac_dst, RTE_ETHER_ADDR_LEN);
+ payload.common.port_id = port->in_port;
+
+ return nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &payload);
+}
+
static int
nfp_flower_del_tun_neigh_v6(struct nfp_app_fw_flower *app_fw_flower,
uint8_t *ipv6)
@@ -2401,6 +2533,9 @@ nfp_pre_tun_table_check_del(struct nfp_flower_representor *repr,
nfp_mac_idx = (find_entry->mac_index << 8) |
NFP_FLOWER_CMSG_PORT_TYPE_OTHER_PORT |
NFP_TUN_PRE_TUN_IDX_BIT;
+ if (nfp_flow->tun.payload.v6_flag != 0)
+ nfp_mac_idx |= NFP_TUN_PRE_TUN_IPV6_BIT;
+
ret = nfp_flower_cmsg_tun_mac_rule(repr->app_fw_flower, &repr->mac_addr,
nfp_mac_idx, true);
if (ret != 0) {
@@ -3263,6 +3398,10 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev)
rte_spinlock_init(&priv->ipv4_off_lock);
LIST_INIT(&priv->ipv4_off_list);
+ /* ipv6 off list */
+ rte_spinlock_init(&priv->ipv6_off_lock);
+ LIST_INIT(&priv->ipv6_off_list);
+
/* neighbor next list */
LIST_INIT(&priv->nn_list);
diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h
index f536da2650..a6994e08ee 100644
--- a/drivers/net/nfp/nfp_flow.h
+++ b/drivers/net/nfp/nfp_flow.h
@@ -177,6 +177,12 @@ struct nfp_ipv4_addr_entry {
int ref_count;
};
+struct nfp_ipv6_addr_entry {
+ LIST_ENTRY(nfp_ipv6_addr_entry) next;
+ uint8_t ipv6_addr[16];
+ int ref_count;
+};
+
#define NFP_TUN_PRE_TUN_RULE_LIMIT 32
struct nfp_flow_priv {
@@ -201,6 +207,9 @@ struct nfp_flow_priv {
/* IPv4 off */
LIST_HEAD(, nfp_ipv4_addr_entry) ipv4_off_list; /**< Store ipv4 off */
rte_spinlock_t ipv4_off_lock; /**< Lock the ipv4 off list */
+ /* IPv6 off */
+ LIST_HEAD(, nfp_ipv6_addr_entry) ipv6_off_list; /**< Store ipv6 off */
+ rte_spinlock_t ipv6_off_lock; /**< Lock the ipv6 off list */
/* neighbor next */
LIST_HEAD(, nfp_fl_tun)nn_list; /**< Store nn entry */
};
--
2.29.3
^ permalink raw reply [flat|nested] 88+ messages in thread
* [PATCH v3 10/26] net/nfp: support IPv4 VXLAN decap flow action
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
` (8 preceding siblings ...)
2022-10-25 7:59 ` [PATCH v3 09/26] net/nfp: prepare for IPv6 " Chaoyong He
@ 2022-10-25 7:59 ` Chaoyong He
2022-10-25 7:59 ` [PATCH v3 11/26] net/nfp: support IPv6 " Chaoyong He
` (16 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:59 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add offload support for the decap action of IPv4 VXLAN tunnels.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
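Once this action is wired up, an application can request VXLAN decap through the standard rte_flow API. A testpmd sketch (port numbers and the exact pattern are illustrative, not taken from the patch):

```
flow create 0 ingress pattern eth / ipv4 / udp / vxlan / end actions vxlan_decap / port_id id 1 / end
```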
---
doc/guides/nics/features/nfp.ini | 1 +
drivers/net/nfp/nfp_flow.c | 75 +++++++++++++++++++++++++++++---
2 files changed, 71 insertions(+), 5 deletions(-)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index 5ccfd61336..9ab840c88b 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -57,4 +57,5 @@ set_mac_src = Y
set_tp_dst = Y
set_tp_src = Y
set_ttl = Y
+vxlan_decap = Y
vxlan_encap = Y
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index ad484b95b7..e71292ff12 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -413,7 +413,7 @@ nfp_stats_id_free(struct nfp_flow_priv *priv, uint32_t ctx)
return 0;
}
-__rte_unused static int
+static int
nfp_tun_add_ipv4_off(struct nfp_app_fw_flower *app_fw_flower,
rte_be32_t ipv4)
{
@@ -908,6 +908,9 @@ nfp_flow_key_layers_calculate_actions(const struct rte_flow_action actions[],
key_ls->act_size += sizeof(struct nfp_fl_act_pre_tun);
key_ls->act_size += sizeof(struct nfp_fl_act_set_tun);
break;
+ case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_VXLAN_DECAP detected");
+ break;
default:
PMD_DRV_LOG(ERR, "Action type %d not supported.", action->type);
return -ENOTSUP;
@@ -1315,7 +1318,7 @@ nfp_flow_merge_sctp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
}
static int
-nfp_flow_merge_vxlan(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+nfp_flow_merge_vxlan(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
@@ -1323,6 +1326,7 @@ nfp_flow_merge_vxlan(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
bool is_mask,
__rte_unused bool is_outer_layer)
{
+ int ret = 0;
const struct rte_vxlan_hdr *hdr;
struct nfp_flower_ipv4_udp_tun *tun4;
struct nfp_flower_ipv6_udp_tun *tun6;
@@ -1351,6 +1355,8 @@ nfp_flow_merge_vxlan(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
} else {
tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
tun4->tun_id = hdr->vx_vni;
+ if (!is_mask)
+ ret = nfp_tun_add_ipv4_off(app_fw_flower, tun4->ipv4.dst);
}
vxlan_end:
@@ -1360,7 +1366,7 @@ nfp_flow_merge_vxlan(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
else
*mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
- return 0;
+ return ret;
}
/* Graph of supported items and associated process function */
@@ -2049,7 +2055,7 @@ nfp_flower_add_tun_neigh_v4_encap(struct nfp_app_fw_flower *app_fw_flower,
return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
}
-__rte_unused static int
+static int
nfp_flower_add_tun_neigh_v4_decap(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *nfp_flow)
{
@@ -2427,7 +2433,7 @@ nfp_pre_tun_table_delete(struct nfp_flow_priv *priv,
return true;
}
-__rte_unused static int
+static int
nfp_pre_tun_table_check_add(struct nfp_flower_representor *repr,
uint16_t *index)
{
@@ -2569,6 +2575,49 @@ nfp_pre_tun_table_check_del(struct nfp_flower_representor *repr,
return ret;
}
+static int
+nfp_flow_action_tunnel_decap(struct nfp_flower_representor *repr,
+ __rte_unused const struct rte_flow_action *action,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct rte_flow *nfp_flow)
+{
+ int ret;
+ uint16_t nfp_mac_idx = 0;
+ struct nfp_flower_meta_tci *meta_tci;
+ struct nfp_app_fw_flower *app_fw_flower;
+
+ ret = nfp_pre_tun_table_check_add(repr, &nfp_mac_idx);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Pre tunnel table add failed");
+ return -EINVAL;
+ }
+
+ nfp_mac_idx = (nfp_mac_idx << 8) |
+ NFP_FLOWER_CMSG_PORT_TYPE_OTHER_PORT |
+ NFP_TUN_PRE_TUN_IDX_BIT;
+
+ app_fw_flower = repr->app_fw_flower;
+ ret = nfp_flower_cmsg_tun_mac_rule(app_fw_flower, &repr->mac_addr,
+ nfp_mac_idx, false);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Send tunnel mac rule failed");
+ return -EINVAL;
+ }
+
+ ret = nfp_flower_cmsg_pre_tunnel_rule(app_fw_flower, nfp_flow_meta,
+ nfp_mac_idx, false);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Send pre tunnel rule failed");
+ return -EINVAL;
+ }
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4)
+ return nfp_flower_add_tun_neigh_v4_decap(app_fw_flower, nfp_flow);
+ else
+ return -ENOTSUP;
+}
+
static int
nfp_flow_compile_action(struct nfp_flower_representor *representor,
const struct rte_flow_action actions[],
@@ -2744,6 +2793,17 @@ nfp_flow_compile_action(struct nfp_flower_representor *representor,
position += sizeof(struct nfp_fl_act_set_tun);
nfp_flow->type = NFP_FLOW_ENCAP;
break;
+ case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
+ PMD_DRV_LOG(DEBUG, "process action tunnel decap");
+ ret = nfp_flow_action_tunnel_decap(representor, action,
+ nfp_flow_meta, nfp_flow);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed when process tunnel decap");
+ return ret;
+ }
+ nfp_flow->type = NFP_FLOW_DECAP;
+ nfp_flow->install_flag = false;
+ break;
default:
PMD_DRV_LOG(ERR, "Unsupported action type: %d", action->type);
return -ENOTSUP;
@@ -3214,6 +3274,11 @@ nfp_flow_tunnel_decap_set(__rte_unused struct rte_eth_dev *dev,
}
switch (tunnel->type) {
+ case RTE_FLOW_ITEM_TYPE_VXLAN:
+ nfp_action->type = RTE_FLOW_ACTION_TYPE_VXLAN_DECAP;
+ *pmd_actions = nfp_action;
+ *num_of_actions = 1;
+ break;
default:
*pmd_actions = NULL;
*num_of_actions = 0;
--
2.29.3
* [PATCH v3 11/26] net/nfp: support IPv6 VXLAN decap flow action
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
` (9 preceding siblings ...)
2022-10-25 7:59 ` [PATCH v3 10/26] net/nfp: support IPv4 VXLAN " Chaoyong He
@ 2022-10-25 7:59 ` Chaoyong He
2022-10-25 7:59 ` [PATCH v3 12/26] net/nfp: support IPv4 GENEVE encap " Chaoyong He
` (15 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:59 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add offload support for the decap action of IPv6 VXLAN tunnels.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfp_flow.c | 18 ++++++++++++++----
1 file changed, 14 insertions(+), 4 deletions(-)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index e71292ff12..9e8073b0f8 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -476,7 +476,7 @@ nfp_tun_del_ipv4_off(struct nfp_app_fw_flower *app_fw_flower,
return 0;
}
-__rte_unused static int
+static int
nfp_tun_add_ipv6_off(struct nfp_app_fw_flower *app_fw_flower,
uint8_t ipv6[])
{
@@ -1352,6 +1352,8 @@ nfp_flow_merge_vxlan(struct nfp_app_fw_flower *app_fw_flower,
NFP_FLOWER_LAYER2_TUN_IPV6)) {
tun6 = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
tun6->tun_id = hdr->vx_vni;
+ if (!is_mask)
+ ret = nfp_tun_add_ipv6_off(app_fw_flower, tun6->ipv6.ipv6_dst);
} else {
tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
tun4->tun_id = hdr->vx_vni;
@@ -2163,7 +2165,7 @@ nfp_flower_add_tun_neigh_v6_encap(struct nfp_app_fw_flower *app_fw_flower,
return nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &payload);
}
-__rte_unused static int
+static int
nfp_flower_add_tun_neigh_v6_decap(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *nfp_flow)
{
@@ -2577,7 +2579,7 @@ nfp_pre_tun_table_check_del(struct nfp_flower_representor *repr,
static int
nfp_flow_action_tunnel_decap(struct nfp_flower_representor *repr,
- __rte_unused const struct rte_flow_action *action,
+ const struct rte_flow_action *action,
struct nfp_fl_rule_metadata *nfp_flow_meta,
struct rte_flow *nfp_flow)
{
@@ -2595,6 +2597,8 @@ nfp_flow_action_tunnel_decap(struct nfp_flower_representor *repr,
nfp_mac_idx = (nfp_mac_idx << 8) |
NFP_FLOWER_CMSG_PORT_TYPE_OTHER_PORT |
NFP_TUN_PRE_TUN_IDX_BIT;
+ if (action->conf != NULL)
+ nfp_mac_idx |= NFP_TUN_PRE_TUN_IPV6_BIT;
app_fw_flower = repr->app_fw_flower;
ret = nfp_flower_cmsg_tun_mac_rule(app_fw_flower, &repr->mac_addr,
@@ -2615,7 +2619,7 @@ nfp_flow_action_tunnel_decap(struct nfp_flower_representor *repr,
if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4)
return nfp_flower_add_tun_neigh_v4_decap(app_fw_flower, nfp_flow);
else
- return -ENOTSUP;
+ return nfp_flower_add_tun_neigh_v6_decap(app_fw_flower, nfp_flow);
}
static int
@@ -2803,6 +2807,8 @@ nfp_flow_compile_action(struct nfp_flower_representor *representor,
}
nfp_flow->type = NFP_FLOW_DECAP;
nfp_flow->install_flag = false;
+ if (action->conf != NULL)
+ nfp_flow->tun.payload.v6_flag = 1;
break;
default:
PMD_DRV_LOG(ERR, "Unsupported action type: %d", action->type);
@@ -3273,6 +3279,9 @@ nfp_flow_tunnel_decap_set(__rte_unused struct rte_eth_dev *dev,
return -ENOMEM;
}
+ if (tunnel->is_ipv6)
+ nfp_action->conf = (void *)~0;
+
switch (tunnel->type) {
case RTE_FLOW_ITEM_TYPE_VXLAN:
nfp_action->type = RTE_FLOW_ACTION_TYPE_VXLAN_DECAP;
@@ -3300,6 +3309,7 @@ nfp_flow_tunnel_action_decap_release(__rte_unused struct rte_eth_dev *dev,
for (i = 0; i < num_of_actions; i++) {
nfp_action = &pmd_actions[i];
+ nfp_action->conf = NULL;
rte_free(nfp_action);
}
--
2.29.3
* [PATCH v3 12/26] net/nfp: support IPv4 GENEVE encap flow action
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
` (10 preceding siblings ...)
2022-10-25 7:59 ` [PATCH v3 11/26] net/nfp: support IPv6 " Chaoyong He
@ 2022-10-25 7:59 ` Chaoyong He
2022-10-25 7:59 ` [PATCH v3 13/26] net/nfp: support IPv6 " Chaoyong He
` (14 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:59 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add offload support for the encap action of IPv4 GENEVE tunnels.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 1 +
drivers/net/nfp/nfp_flow.c | 107 +++++++++++++++++++++++++++++++
2 files changed, 108 insertions(+)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index 9ab840c88b..deb27ee2d8 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -45,6 +45,7 @@ of_pop_vlan = Y
of_push_vlan = Y
of_set_vlan_pcp = Y
of_set_vlan_vid = Y
+raw_encap = Y
port_id = Y
set_ipv4_dscp = Y
set_ipv4_dst = Y
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 9e8073b0f8..7d19781bd9 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -38,6 +38,12 @@ struct vxlan_data {
__VA_ARGS__, RTE_FLOW_ITEM_TYPE_END, \
})
+/* Data length of various conf of raw encap action */
+#define GENEVE_V4_LEN (sizeof(struct rte_ether_hdr) + \
+ sizeof(struct rte_ipv4_hdr) + \
+ sizeof(struct rte_udp_hdr) + \
+ sizeof(struct rte_flow_item_geneve))
+
/* Process structure associated with a flow item */
struct nfp_flow_item_proc {
/* Bit-mask for fields supported by this PMD. */
@@ -908,6 +914,11 @@ nfp_flow_key_layers_calculate_actions(const struct rte_flow_action actions[],
key_ls->act_size += sizeof(struct nfp_fl_act_pre_tun);
key_ls->act_size += sizeof(struct nfp_fl_act_set_tun);
break;
+ case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_RAW_ENCAP detected");
+ key_ls->act_size += sizeof(struct nfp_fl_act_pre_tun);
+ key_ls->act_size += sizeof(struct nfp_fl_act_set_tun);
+ break;
case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_VXLAN_DECAP detected");
break;
@@ -2622,6 +2633,88 @@ nfp_flow_action_tunnel_decap(struct nfp_flower_representor *repr,
return nfp_flower_add_tun_neigh_v6_decap(app_fw_flower, nfp_flow);
}
+static int
+nfp_flow_action_geneve_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct rte_flow_action_raw_encap *raw_encap,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ uint64_t tun_id;
+ const struct rte_ether_hdr *eth;
+ const struct rte_flow_item_udp *udp;
+ const struct rte_flow_item_ipv4 *ipv4;
+ const struct rte_flow_item_geneve *geneve;
+ struct nfp_fl_act_pre_tun *pre_tun;
+ struct nfp_fl_act_set_tun *set_tun;
+ size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
+
+ eth = (const struct rte_ether_hdr *)raw_encap->data;
+ ipv4 = (const struct rte_flow_item_ipv4 *)(eth + 1);
+ udp = (const struct rte_flow_item_udp *)(ipv4 + 1);
+ geneve = (const struct rte_flow_item_geneve *)(udp + 1);
+
+ pre_tun = (struct nfp_fl_act_pre_tun *)actions;
+ memset(pre_tun, 0, act_pre_size);
+ nfp_flow_pre_tun_v4_process(pre_tun, ipv4->hdr.dst_addr);
+
+ set_tun = (struct nfp_fl_act_set_tun *)(act_data + act_pre_size);
+ memset(set_tun, 0, act_set_size);
+ tun_id = (geneve->vni[0] << 16) | (geneve->vni[1] << 8) | geneve->vni[2];
+ nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GENEVE, tun_id,
+ ipv4->hdr.time_to_live, ipv4->hdr.type_of_service);
+ set_tun->tun_proto = geneve->protocol;
+
+ /* Send the tunnel neighbor cmsg to fw */
+ return nfp_flower_add_tun_neigh_v4_encap(app_fw_flower, nfp_flow_meta,
+ tun, eth, ipv4);
+}
+
+static int
+nfp_flow_action_raw_encap(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct rte_flow_action *action,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ int ret;
+ size_t act_len;
+ size_t act_pre_size;
+ const struct rte_flow_action_raw_encap *raw_encap;
+
+ raw_encap = action->conf;
+ if (raw_encap->data == NULL) {
+ PMD_DRV_LOG(ERR, "The raw encap action conf is NULL.");
+ return -EINVAL;
+ }
+
+ /* Pre_tunnel action must be the first on action list.
+ * If other actions already exist, they need to be
+ * pushed forward.
+ */
+ act_len = act_data - actions;
+ if (act_len != 0) {
+ act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ memmove(actions + act_pre_size, actions, act_len);
+ }
+
+ switch (raw_encap->size) {
+ case GENEVE_V4_LEN:
+ ret = nfp_flow_action_geneve_encap_v4(app_fw_flower, act_data,
+ actions, raw_encap, nfp_flow_meta, tun);
+ break;
+ default:
+ PMD_DRV_LOG(ERR, "Not an valid raw encap action conf.");
+ ret = -EINVAL;
+ break;
+ }
+
+ return ret;
+}
+
static int
nfp_flow_compile_action(struct nfp_flower_representor *representor,
const struct rte_flow_action actions[],
@@ -2797,6 +2890,20 @@ nfp_flow_compile_action(struct nfp_flower_representor *representor,
position += sizeof(struct nfp_fl_act_set_tun);
nfp_flow->type = NFP_FLOW_ENCAP;
break;
+ case RTE_FLOW_ACTION_TYPE_RAW_ENCAP:
+ PMD_DRV_LOG(DEBUG, "Process RTE_FLOW_ACTION_TYPE_RAW_ENCAP");
+ ret = nfp_flow_action_raw_encap(representor->app_fw_flower,
+ position, action_data, action, nfp_flow_meta,
+ &nfp_flow->tun);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Failed when process"
+ " RTE_FLOW_ACTION_TYPE_RAW_ENCAP");
+ return ret;
+ }
+ position += sizeof(struct nfp_fl_act_pre_tun);
+ position += sizeof(struct nfp_fl_act_set_tun);
+ nfp_flow->type = NFP_FLOW_ENCAP;
+ break;
case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
PMD_DRV_LOG(DEBUG, "process action tunnel decap");
ret = nfp_flow_action_tunnel_decap(representor, action,
--
2.29.3
* [PATCH v3 13/26] net/nfp: support IPv6 GENEVE encap flow action
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
` (11 preceding siblings ...)
2022-10-25 7:59 ` [PATCH v3 12/26] net/nfp: support IPv4 GENEVE encap " Chaoyong He
@ 2022-10-25 7:59 ` Chaoyong He
2022-10-25 7:59 ` [PATCH v3 14/26] net/nfp: support IPv4 GENEVE flow item Chaoyong He
` (13 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:59 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add offload support for the encap action of IPv6 GENEVE tunnels.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfp_flow.c | 49 ++++++++++++++++++++++++++++++++++++++
1 file changed, 49 insertions(+)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 7d19781bd9..8416229f20 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -43,6 +43,10 @@ struct vxlan_data {
sizeof(struct rte_ipv4_hdr) + \
sizeof(struct rte_udp_hdr) + \
sizeof(struct rte_flow_item_geneve))
+#define GENEVE_V6_LEN (sizeof(struct rte_ether_hdr) + \
+ sizeof(struct rte_ipv6_hdr) + \
+ sizeof(struct rte_udp_hdr) + \
+ sizeof(struct rte_flow_item_geneve))
/* Process structure associated with a flow item */
struct nfp_flow_item_proc {
@@ -2672,6 +2676,47 @@ nfp_flow_action_geneve_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
tun, eth, ipv4);
}
+static int
+nfp_flow_action_geneve_encap_v6(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct rte_flow_action_raw_encap *raw_encap,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ uint8_t tos;
+ uint64_t tun_id;
+ const struct rte_ether_hdr *eth;
+ const struct rte_flow_item_udp *udp;
+ const struct rte_flow_item_ipv6 *ipv6;
+ const struct rte_flow_item_geneve *geneve;
+ struct nfp_fl_act_pre_tun *pre_tun;
+ struct nfp_fl_act_set_tun *set_tun;
+ size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
+
+ eth = (const struct rte_ether_hdr *)raw_encap->data;
+ ipv6 = (const struct rte_flow_item_ipv6 *)(eth + 1);
+ udp = (const struct rte_flow_item_udp *)(ipv6 + 1);
+ geneve = (const struct rte_flow_item_geneve *)(udp + 1);
+
+ pre_tun = (struct nfp_fl_act_pre_tun *)actions;
+ memset(pre_tun, 0, act_pre_size);
+ nfp_flow_pre_tun_v6_process(pre_tun, ipv6->hdr.dst_addr);
+
+ set_tun = (struct nfp_fl_act_set_tun *)(act_data + act_pre_size);
+ memset(set_tun, 0, act_set_size);
+ tos = (ipv6->hdr.vtc_flow >> RTE_IPV6_HDR_TC_SHIFT) & 0xff;
+ tun_id = (geneve->vni[0] << 16) | (geneve->vni[1] << 8) | geneve->vni[2];
+ nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GENEVE, tun_id,
+ ipv6->hdr.hop_limits, tos);
+ set_tun->tun_proto = geneve->protocol;
+
+ /* Send the tunnel neighbor cmsg to fw */
+ return nfp_flower_add_tun_neigh_v6_encap(app_fw_flower, nfp_flow_meta,
+ tun, eth, ipv6);
+}
+
static int
nfp_flow_action_raw_encap(struct nfp_app_fw_flower *app_fw_flower,
char *act_data,
@@ -2706,6 +2751,10 @@ nfp_flow_action_raw_encap(struct nfp_app_fw_flower *app_fw_flower,
ret = nfp_flow_action_geneve_encap_v4(app_fw_flower, act_data,
actions, raw_encap, nfp_flow_meta, tun);
break;
+ case GENEVE_V6_LEN:
+ ret = nfp_flow_action_geneve_encap_v6(app_fw_flower, act_data,
+ actions, raw_encap, nfp_flow_meta, tun);
+ break;
default:
PMD_DRV_LOG(ERR, "Not an valid raw encap action conf.");
ret = -EINVAL;
--
2.29.3
* [PATCH v3 14/26] net/nfp: support IPv4 GENEVE flow item
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
` (12 preceding siblings ...)
2022-10-25 7:59 ` [PATCH v3 13/26] net/nfp: support IPv6 " Chaoyong He
@ 2022-10-25 7:59 ` Chaoyong He
2022-10-25 7:59 ` [PATCH v3 15/26] net/nfp: support IPv6 " Chaoyong He
` (12 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:59 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the logic needed to support offloading of the
IPv4 GENEVE item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 1 +
drivers/net/nfp/nfp_flow.c | 75 +++++++++++++++++++++++++++++++-
2 files changed, 74 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index deb27ee2d8..2e215bb324 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -28,6 +28,7 @@ Usage doc = Y
[rte_flow items]
eth = Y
+geneve = Y
ipv4 = Y
ipv6 = Y
port_id = Y
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 8416229f20..39ed279778 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -769,6 +769,23 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
return -EINVAL;
}
break;
+ case RTE_FLOW_ITEM_TYPE_GENEVE:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_GENEVE detected");
+ /* Clear IPv4 bits */
+ key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV4;
+ key_ls->tun_type = NFP_FL_TUN_GENEVE;
+ key_ls->key_layer |= NFP_FLOWER_LAYER_EXT_META;
+ key_ls->key_layer_two |= NFP_FLOWER_LAYER2_GENEVE;
+ key_ls->key_size += sizeof(struct nfp_flower_ext_meta);
+ if (outer_ip4_flag) {
+ key_ls->key_size += sizeof(struct nfp_flower_ipv4_udp_tun);
+ /*
+ * The outer l3 layer information is
+ * in `struct nfp_flower_ipv4_udp_tun`
+ */
+ key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
+ }
+ break;
default:
PMD_DRV_LOG(ERR, "Item type %d not supported.", item->type);
return -ENOTSUP;
@@ -960,12 +977,22 @@ nfp_flow_key_layers_calculate(const struct rte_flow_item items[],
static bool
nfp_flow_is_tunnel(struct rte_flow *nfp_flow)
{
+ uint32_t key_layer2;
+ struct nfp_flower_ext_meta *ext_meta;
struct nfp_flower_meta_tci *meta_tci;
meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_VXLAN)
return true;
+ if (!(meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META))
+ return false;
+
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
+ key_layer2 = rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2);
+ if (key_layer2 & NFP_FLOWER_LAYER2_GENEVE)
+ return true;
+
return false;
}
@@ -1386,6 +1413,39 @@ nfp_flow_merge_vxlan(struct nfp_app_fw_flower *app_fw_flower,
return ret;
}
+static int
+nfp_flow_merge_geneve(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ __rte_unused struct rte_flow *nfp_flow,
+ char **mbuf_off,
+ const struct rte_flow_item *item,
+ const struct nfp_flow_item_proc *proc,
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
+{
+ struct nfp_flower_ipv4_udp_tun *tun4;
+ const struct rte_flow_item_geneve *spec;
+ const struct rte_flow_item_geneve *mask;
+ const struct rte_flow_item_geneve *geneve;
+
+ spec = item->spec;
+ if (spec == NULL) {
+ PMD_DRV_LOG(DEBUG, "nfp flow merge geneve: no item->spec!");
+ goto geneve_end;
+ }
+
+ mask = item->mask ? item->mask : proc->mask_default;
+ geneve = is_mask ? mask : spec;
+
+ tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
+ tun4->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
+ (geneve->vni[1] << 8) | (geneve->vni[2]));
+
+geneve_end:
+ *mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
+
+ return 0;
+}
+
/* Graph of supported items and associated process function */
static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
[RTE_FLOW_ITEM_TYPE_END] = {
@@ -1474,7 +1534,8 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
.merge = nfp_flow_merge_tcp,
},
[RTE_FLOW_ITEM_TYPE_UDP] = {
- .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VXLAN),
+ .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VXLAN,
+ RTE_FLOW_ITEM_TYPE_GENEVE),
.mask_support = &(const struct rte_flow_item_udp){
.hdr = {
.src_port = RTE_BE16(0xffff),
@@ -1507,6 +1568,15 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
.mask_sz = sizeof(struct rte_flow_item_vxlan),
.merge = nfp_flow_merge_vxlan,
},
+ [RTE_FLOW_ITEM_TYPE_GENEVE] = {
+ .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH),
+ .mask_support = &(const struct rte_flow_item_geneve){
+ .vni = "\xff\xff\xff",
+ },
+ .mask_default = &rte_flow_item_geneve_mask,
+ .mask_sz = sizeof(struct rte_flow_item_geneve),
+ .merge = nfp_flow_merge_geneve,
+ },
};
static int
@@ -1563,7 +1633,8 @@ nfp_flow_item_check(const struct rte_flow_item *item,
static bool
nfp_flow_is_tun_item(const struct rte_flow_item *item)
{
- if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN)
+ if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN ||
+ item->type == RTE_FLOW_ITEM_TYPE_GENEVE)
return true;
return false;
--
2.29.3
* [PATCH v3 15/26] net/nfp: support IPv6 GENEVE flow item
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
` (13 preceding siblings ...)
2022-10-25 7:59 ` [PATCH v3 14/26] net/nfp: support IPv4 GENEVE flow item Chaoyong He
@ 2022-10-25 7:59 ` Chaoyong He
2022-10-25 7:59 ` [PATCH v3 16/26] net/nfp: support IPv4 GENEVE decap flow action Chaoyong He
` (11 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:59 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the logic needed to support offloading of the
IPv6 GENEVE item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfp_flow.c | 43 ++++++++++++++++++++++++++++++++------
1 file changed, 37 insertions(+), 6 deletions(-)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 39ed279778..50e5131f54 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -771,8 +771,9 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
break;
case RTE_FLOW_ITEM_TYPE_GENEVE:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_GENEVE detected");
- /* Clear IPv4 bits */
+ /* Clear IPv4 and IPv6 bits */
key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV4;
+ key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV6;
key_ls->tun_type = NFP_FL_TUN_GENEVE;
key_ls->key_layer |= NFP_FLOWER_LAYER_EXT_META;
key_ls->key_layer_two |= NFP_FLOWER_LAYER2_GENEVE;
@@ -784,6 +785,17 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
* in `struct nfp_flower_ipv4_udp_tun`
*/
key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
+ } else if (outer_ip6_flag) {
+ key_ls->key_layer_two |= NFP_FLOWER_LAYER2_TUN_IPV6;
+ key_ls->key_size += sizeof(struct nfp_flower_ipv6_udp_tun);
+ /*
+ * The outer l3 layer information is
+ * in `struct nfp_flower_ipv6_udp_tun`
+ */
+ key_ls->key_size -= sizeof(struct nfp_flower_ipv6);
+ } else {
+ PMD_DRV_LOG(ERR, "No outer IP layer for GENEVE tunnel.");
+ return -EINVAL;
}
break;
default:
@@ -1415,7 +1427,7 @@ nfp_flow_merge_vxlan(struct nfp_app_fw_flower *app_fw_flower,
static int
nfp_flow_merge_geneve(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
- __rte_unused struct rte_flow *nfp_flow,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
const struct nfp_flow_item_proc *proc,
@@ -1423,9 +1435,16 @@ nfp_flow_merge_geneve(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
__rte_unused bool is_outer_layer)
{
struct nfp_flower_ipv4_udp_tun *tun4;
+ struct nfp_flower_ipv6_udp_tun *tun6;
+ struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item_geneve *spec;
const struct rte_flow_item_geneve *mask;
const struct rte_flow_item_geneve *geneve;
+ struct nfp_flower_ext_meta *ext_meta = NULL;
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
spec = item->spec;
if (spec == NULL) {
@@ -1436,12 +1455,24 @@ nfp_flow_merge_geneve(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
mask = item->mask ? item->mask : proc->mask_default;
geneve = is_mask ? mask : spec;
- tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
- tun4->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
- (geneve->vni[1] << 8) | (geneve->vni[2]));
+ if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_TUN_IPV6)) {
+ tun6 = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
+ tun6->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
+ (geneve->vni[1] << 8) | (geneve->vni[2]));
+ } else {
+ tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
+ tun4->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
+ (geneve->vni[1] << 8) | (geneve->vni[2]));
+ }
geneve_end:
- *mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
+ if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_TUN_IPV6)) {
+ *mbuf_off += sizeof(struct nfp_flower_ipv6_udp_tun);
+ } else {
+ *mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
+ }
return 0;
}
--
2.29.3
* [PATCH v3 16/26] net/nfp: support IPv4 GENEVE decap flow action
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
` (14 preceding siblings ...)
2022-10-25 7:59 ` [PATCH v3 15/26] net/nfp: support IPv6 " Chaoyong He
@ 2022-10-25 7:59 ` Chaoyong He
2022-10-25 7:59 ` [PATCH v3 17/26] net/nfp: support IPv6 " Chaoyong He
` (10 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:59 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add offload support for the decap action of IPv4 GENEVE tunnels.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 1 +
drivers/net/nfp/nfp_flow.c | 16 ++++++++++++++--
2 files changed, 15 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index 2e215bb324..fe1cb971f1 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -46,6 +46,7 @@ of_pop_vlan = Y
of_push_vlan = Y
of_set_vlan_pcp = Y
of_set_vlan_vid = Y
+raw_decap = Y
raw_encap = Y
port_id = Y
set_ipv4_dscp = Y
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 50e5131f54..46a047cd7b 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -955,6 +955,9 @@ nfp_flow_key_layers_calculate_actions(const struct rte_flow_action actions[],
case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_VXLAN_DECAP detected");
break;
+ case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_RAW_DECAP detected");
+ break;
default:
PMD_DRV_LOG(ERR, "Action type %d not supported.", action->type);
return -ENOTSUP;
@@ -1426,7 +1429,7 @@ nfp_flow_merge_vxlan(struct nfp_app_fw_flower *app_fw_flower,
}
static int
-nfp_flow_merge_geneve(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+nfp_flow_merge_geneve(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
@@ -1434,6 +1437,7 @@ nfp_flow_merge_geneve(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
bool is_mask,
__rte_unused bool is_outer_layer)
{
+ int ret = 0;
struct nfp_flower_ipv4_udp_tun *tun4;
struct nfp_flower_ipv6_udp_tun *tun6;
struct nfp_flower_meta_tci *meta_tci;
@@ -1464,6 +1468,8 @@ nfp_flow_merge_geneve(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
tun4->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
(geneve->vni[1] << 8) | (geneve->vni[2]));
+ if (!is_mask)
+ ret = nfp_tun_add_ipv4_off(app_fw_flower, tun4->ipv4.dst);
}
geneve_end:
@@ -1474,7 +1480,7 @@ nfp_flow_merge_geneve(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
*mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun);
}
- return 0;
+ return ret;
}
/* Graph of supported items and associated process function */
@@ -3056,6 +3062,7 @@ nfp_flow_compile_action(struct nfp_flower_representor *representor,
nfp_flow->type = NFP_FLOW_ENCAP;
break;
case RTE_FLOW_ACTION_TYPE_VXLAN_DECAP:
+ case RTE_FLOW_ACTION_TYPE_RAW_DECAP:
PMD_DRV_LOG(DEBUG, "process action tunnel decap");
ret = nfp_flow_action_tunnel_decap(representor, action,
nfp_flow_meta, nfp_flow);
@@ -3546,6 +3553,11 @@ nfp_flow_tunnel_decap_set(__rte_unused struct rte_eth_dev *dev,
*pmd_actions = nfp_action;
*num_of_actions = 1;
break;
+ case RTE_FLOW_ITEM_TYPE_GENEVE:
+ nfp_action->type = RTE_FLOW_ACTION_TYPE_RAW_DECAP;
+ *pmd_actions = nfp_action;
+ *num_of_actions = 1;
+ break;
default:
*pmd_actions = NULL;
*num_of_actions = 0;
--
2.29.3
* [PATCH v3 17/26] net/nfp: support IPv6 GENEVE decap flow action
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
` (15 preceding siblings ...)
2022-10-25 7:59 ` [PATCH v3 16/26] net/nfp: support IPv4 GENEVE decap flow action Chaoyong He
@ 2022-10-25 7:59 ` Chaoyong He
2022-10-25 7:59 ` [PATCH v3 18/26] net/nfp: support IPv4 NVGRE encap " Chaoyong He
` (9 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:59 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add offload support for the decap action of IPv6 GENEVE tunnels.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfp_flow.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 46a047cd7b..ee9c2a36e0 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1464,6 +1464,8 @@ nfp_flow_merge_geneve(struct nfp_app_fw_flower *app_fw_flower,
tun6 = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
tun6->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
(geneve->vni[1] << 8) | (geneve->vni[2]));
+ if (!is_mask)
+ ret = nfp_tun_add_ipv6_off(app_fw_flower, tun6->ipv6.ipv6_dst);
} else {
tun4 = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
tun4->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) |
--
2.29.3
* [PATCH v3 18/26] net/nfp: support IPv4 NVGRE encap flow action
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
` (16 preceding siblings ...)
2022-10-25 7:59 ` [PATCH v3 17/26] net/nfp: support IPv6 " Chaoyong He
@ 2022-10-25 7:59 ` Chaoyong He
2022-10-25 7:59 ` [PATCH v3 19/26] net/nfp: support IPv6 " Chaoyong He
` (8 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:59 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add offload support for the encap action of IPv4 NVGRE tunnels.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfp_flow.c | 43 ++++++++++++++++++++++++++++++++++++++
1 file changed, 43 insertions(+)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index ee9c2a36e0..cc09ba45e2 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -47,6 +47,10 @@ struct vxlan_data {
sizeof(struct rte_ipv6_hdr) + \
sizeof(struct rte_udp_hdr) + \
sizeof(struct rte_flow_item_geneve))
+#define NVGRE_V4_LEN (sizeof(struct rte_ether_hdr) + \
+ sizeof(struct rte_ipv4_hdr) + \
+ sizeof(struct rte_flow_item_gre) + \
+ sizeof(rte_be32_t)) /* gre key */
/* Process structure associated with a flow item */
struct nfp_flow_item_proc {
@@ -2827,6 +2831,41 @@ nfp_flow_action_geneve_encap_v6(struct nfp_app_fw_flower *app_fw_flower,
tun, eth, ipv6);
}
+static int
+nfp_flow_action_nvgre_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct rte_flow_action_raw_encap *raw_encap,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ const struct rte_ether_hdr *eth;
+ const struct rte_flow_item_ipv4 *ipv4;
+ const struct rte_flow_item_gre *gre;
+ struct nfp_fl_act_pre_tun *pre_tun;
+ struct nfp_fl_act_set_tun *set_tun;
+ size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
+
+ eth = (const struct rte_ether_hdr *)raw_encap->data;
+ ipv4 = (const struct rte_flow_item_ipv4 *)(eth + 1);
+ gre = (const struct rte_flow_item_gre *)(ipv4 + 1);
+
+ pre_tun = (struct nfp_fl_act_pre_tun *)actions;
+ memset(pre_tun, 0, act_pre_size);
+ nfp_flow_pre_tun_v4_process(pre_tun, ipv4->hdr.dst_addr);
+
+ set_tun = (struct nfp_fl_act_set_tun *)(act_data + act_pre_size);
+ memset(set_tun, 0, act_set_size);
+ nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GRE, 0,
+ ipv4->hdr.time_to_live, ipv4->hdr.type_of_service);
+ set_tun->tun_proto = gre->protocol;
+
+ /* Send the tunnel neighbor cmsg to fw */
+ return nfp_flower_add_tun_neigh_v4_encap(app_fw_flower, nfp_flow_meta,
+ tun, eth, ipv4);
+}
+
static int
nfp_flow_action_raw_encap(struct nfp_app_fw_flower *app_fw_flower,
char *act_data,
@@ -2865,6 +2904,10 @@ nfp_flow_action_raw_encap(struct nfp_app_fw_flower *app_fw_flower,
ret = nfp_flow_action_geneve_encap_v6(app_fw_flower, act_data,
actions, raw_encap, nfp_flow_meta, tun);
break;
+ case NVGRE_V4_LEN:
+ ret = nfp_flow_action_nvgre_encap_v4(app_fw_flower, act_data,
+ actions, raw_encap, nfp_flow_meta, tun);
+ break;
default:
PMD_DRV_LOG(ERR, "Not a valid raw encap action conf.");
ret = -EINVAL;
--
2.29.3
* [PATCH v3 19/26] net/nfp: support IPv6 NVGRE encap flow action
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
` (17 preceding siblings ...)
2022-10-25 7:59 ` [PATCH v3 18/26] net/nfp: support IPv4 NVGRE encap " Chaoyong He
@ 2022-10-25 7:59 ` Chaoyong He
2022-10-25 7:59 ` [PATCH v3 20/26] net/nfp: prepare for IPv4 GRE tunnel decap " Chaoyong He
` (7 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:59 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add offload support for the encap action of IPv6 NVGRE tunnels.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfp_flow.c | 45 ++++++++++++++++++++++++++++++++++++++
1 file changed, 45 insertions(+)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index cc09ba45e2..06115cc954 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -51,6 +51,10 @@ struct vxlan_data {
sizeof(struct rte_ipv4_hdr) + \
sizeof(struct rte_flow_item_gre) + \
sizeof(rte_be32_t)) /* gre key */
+#define NVGRE_V6_LEN (sizeof(struct rte_ether_hdr) + \
+ sizeof(struct rte_ipv6_hdr) + \
+ sizeof(struct rte_flow_item_gre) + \
+ sizeof(rte_be32_t)) /* gre key */
/* Process structure associated with a flow item */
struct nfp_flow_item_proc {
@@ -2866,6 +2870,43 @@ nfp_flow_action_nvgre_encap_v4(struct nfp_app_fw_flower *app_fw_flower,
tun, eth, ipv4);
}
+static int
+nfp_flow_action_nvgre_encap_v6(struct nfp_app_fw_flower *app_fw_flower,
+ char *act_data,
+ char *actions,
+ const struct rte_flow_action_raw_encap *raw_encap,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
+ struct nfp_fl_tun *tun)
+{
+ uint8_t tos;
+ const struct rte_ether_hdr *eth;
+ const struct rte_flow_item_ipv6 *ipv6;
+ const struct rte_flow_item_gre *gre;
+ struct nfp_fl_act_pre_tun *pre_tun;
+ struct nfp_fl_act_set_tun *set_tun;
+ size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun);
+ size_t act_set_size = sizeof(struct nfp_fl_act_set_tun);
+
+ eth = (const struct rte_ether_hdr *)raw_encap->data;
+ ipv6 = (const struct rte_flow_item_ipv6 *)(eth + 1);
+ gre = (const struct rte_flow_item_gre *)(ipv6 + 1);
+
+ pre_tun = (struct nfp_fl_act_pre_tun *)actions;
+ memset(pre_tun, 0, act_pre_size);
+ nfp_flow_pre_tun_v6_process(pre_tun, ipv6->hdr.dst_addr);
+
+ set_tun = (struct nfp_fl_act_set_tun *)(act_data + act_pre_size);
+ memset(set_tun, 0, act_set_size);
+ tos = (ipv6->hdr.vtc_flow >> RTE_IPV6_HDR_TC_SHIFT) & 0xff;
+ nfp_flow_set_tun_process(set_tun, NFP_FL_TUN_GRE, 0,
+ ipv6->hdr.hop_limits, tos);
+ set_tun->tun_proto = gre->protocol;
+
+ /* Send the tunnel neighbor cmsg to fw */
+ return nfp_flower_add_tun_neigh_v6_encap(app_fw_flower, nfp_flow_meta,
+ tun, eth, ipv6);
+}
+
static int
nfp_flow_action_raw_encap(struct nfp_app_fw_flower *app_fw_flower,
char *act_data,
@@ -2908,6 +2949,10 @@ nfp_flow_action_raw_encap(struct nfp_app_fw_flower *app_fw_flower,
ret = nfp_flow_action_nvgre_encap_v4(app_fw_flower, act_data,
actions, raw_encap, nfp_flow_meta, tun);
break;
+ case NVGRE_V6_LEN:
+ ret = nfp_flow_action_nvgre_encap_v6(app_fw_flower, act_data,
+ actions, raw_encap, nfp_flow_meta, tun);
+ break;
default:
PMD_DRV_LOG(ERR, "Not a valid raw encap action conf.");
ret = -EINVAL;
--
2.29.3
* [PATCH v3 20/26] net/nfp: prepare for IPv4 GRE tunnel decap flow action
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
` (18 preceding siblings ...)
2022-10-25 7:59 ` [PATCH v3 19/26] net/nfp: support IPv6 " Chaoyong He
@ 2022-10-25 7:59 ` Chaoyong He
2022-10-25 7:59 ` [PATCH v3 21/26] net/nfp: prepare for IPv6 " Chaoyong He
` (6 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:59 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the related data structures and logic to prepare for
the decap action of IPv4 GRE tunnels.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_cmsg.h | 29 +++++++++++++++++
drivers/net/nfp/nfp_flow.c | 40 ++++++++++++++++++------
drivers/net/nfp/nfp_flow.h | 3 ++
3 files changed, 63 insertions(+), 9 deletions(-)
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index 61f2f83fc9..8bca7c2fa2 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -575,6 +575,35 @@ struct nfp_flower_ipv6_udp_tun {
rte_be32_t tun_id;
};
+/*
+ * Flow Frame GRE TUNNEL --> Tunnel details (6W/24B)
+ * -----------------------------------------------------------------
+ * 3 2 1
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv4_addr_src |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv4_addr_dst |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | tun_flags | tos | ttl |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved | Ethertype |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Key |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ */
+struct nfp_flower_ipv4_gre_tun {
+ struct nfp_flower_tun_ipv4 ipv4;
+ rte_be16_t tun_flags;
+ struct nfp_flower_tun_ip_ext ip_ext;
+ rte_be16_t reserved1;
+ rte_be16_t ethertype;
+ rte_be32_t tun_key;
+ rte_be32_t reserved2;
+};
+
struct nfp_fl_act_head {
uint8_t jump_id;
uint8_t len_lw;
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 06115cc954..115b9cbb92 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -564,6 +564,7 @@ nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr,
uint32_t key_layer2 = 0;
struct nfp_flower_ipv4_udp_tun *udp4;
struct nfp_flower_ipv6_udp_tun *udp6;
+ struct nfp_flower_ipv4_gre_tun *gre4;
struct nfp_flower_meta_tci *meta_tci;
struct nfp_flower_ext_meta *ext_meta = NULL;
@@ -579,9 +580,15 @@ nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr,
sizeof(struct nfp_flower_ipv6_udp_tun));
ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, udp6->ipv6.ipv6_dst);
} else {
- udp4 = (struct nfp_flower_ipv4_udp_tun *)(nfp_flow->payload.mask_data -
- sizeof(struct nfp_flower_ipv4_udp_tun));
- ret = nfp_tun_del_ipv4_off(repr->app_fw_flower, udp4->ipv4.dst);
+ if (key_layer2 & NFP_FLOWER_LAYER2_GRE) {
+ gre4 = (struct nfp_flower_ipv4_gre_tun *)(nfp_flow->payload.mask_data -
+ sizeof(struct nfp_flower_ipv4_gre_tun));
+ ret = nfp_tun_del_ipv4_off(repr->app_fw_flower, gre4->ipv4.dst);
+ } else {
+ udp4 = (struct nfp_flower_ipv4_udp_tun *)(nfp_flow->payload.mask_data -
+ sizeof(struct nfp_flower_ipv4_udp_tun));
+ ret = nfp_tun_del_ipv4_off(repr->app_fw_flower, udp4->ipv4.dst);
+ }
}
return ret;
@@ -1013,7 +1020,7 @@ nfp_flow_is_tunnel(struct rte_flow *nfp_flow)
ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
key_layer2 = rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2);
- if (key_layer2 & NFP_FLOWER_LAYER2_GENEVE)
+ if (key_layer2 & (NFP_FLOWER_LAYER2_GENEVE | NFP_FLOWER_LAYER2_GRE))
return true;
return false;
@@ -1102,11 +1109,15 @@ nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item_ipv4 *spec;
const struct rte_flow_item_ipv4 *mask;
+ struct nfp_flower_ext_meta *ext_meta = NULL;
struct nfp_flower_ipv4_udp_tun *ipv4_udp_tun;
+ struct nfp_flower_ipv4_gre_tun *ipv4_gre_tun;
spec = item->spec;
mask = item->mask ? item->mask : proc->mask_default;
meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
if (is_outer_layer && nfp_flow_is_tunnel(nfp_flow)) {
if (spec == NULL) {
@@ -1115,12 +1126,23 @@ nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
}
hdr = is_mask ? &mask->hdr : &spec->hdr;
- ipv4_udp_tun = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
- ipv4_udp_tun->ip_ext.tos = hdr->type_of_service;
- ipv4_udp_tun->ip_ext.ttl = hdr->time_to_live;
- ipv4_udp_tun->ipv4.src = hdr->src_addr;
- ipv4_udp_tun->ipv4.dst = hdr->dst_addr;
+ if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_GRE)) {
+ ipv4_gre_tun = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
+
+ ipv4_gre_tun->ip_ext.tos = hdr->type_of_service;
+ ipv4_gre_tun->ip_ext.ttl = hdr->time_to_live;
+ ipv4_gre_tun->ipv4.src = hdr->src_addr;
+ ipv4_gre_tun->ipv4.dst = hdr->dst_addr;
+ } else {
+ ipv4_udp_tun = (struct nfp_flower_ipv4_udp_tun *)*mbuf_off;
+
+ ipv4_udp_tun->ip_ext.tos = hdr->type_of_service;
+ ipv4_udp_tun->ip_ext.ttl = hdr->time_to_live;
+ ipv4_udp_tun->ipv4.src = hdr->src_addr;
+ ipv4_udp_tun->ipv4.dst = hdr->dst_addr;
+ }
} else {
if (spec == NULL) {
PMD_DRV_LOG(DEBUG, "nfp flow merge ipv4: no item->spec!");
diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h
index a6994e08ee..b0c2aaf6d8 100644
--- a/drivers/net/nfp/nfp_flow.h
+++ b/drivers/net/nfp/nfp_flow.h
@@ -49,6 +49,9 @@
#define NFP_FL_SC_ACT_POPV 0x6A000000
#define NFP_FL_SC_ACT_NULL 0x00000000
+/* GRE Tunnel flags */
+#define NFP_FL_GRE_FLAG_KEY (1 << 2)
+
/* Action opcodes */
#define NFP_FL_ACTION_OPCODE_OUTPUT 0
#define NFP_FL_ACTION_OPCODE_PUSH_VLAN 1
--
2.29.3
* [PATCH v3 21/26] net/nfp: prepare for IPv6 GRE tunnel decap flow action
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
` (19 preceding siblings ...)
2022-10-25 7:59 ` [PATCH v3 20/26] net/nfp: prepare for IPv4 GRE tunnel decap " Chaoyong He
@ 2022-10-25 7:59 ` Chaoyong He
2022-10-25 7:59 ` [PATCH v3 22/26] net/nfp: support IPv4 NVGRE flow item Chaoyong He
` (5 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:59 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the related data structures and logic to prepare for
the decap action of IPv6 GRE tunnels.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_cmsg.h | 41 ++++++++++++++++++++
drivers/net/nfp/nfp_flow.c | 49 ++++++++++++++++++------
2 files changed, 78 insertions(+), 12 deletions(-)
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index 8bca7c2fa2..a48da67222 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -604,6 +604,47 @@ struct nfp_flower_ipv4_gre_tun {
rte_be32_t reserved2;
};
+/*
+ * Flow Frame GRE TUNNEL V6 --> Tunnel details (12W/48B)
+ * -----------------------------------------------------------------
+ * 3 2 1
+ * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 31 - 0 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 63 - 32 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 95 - 64 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_src, 127 - 96 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 31 - 0 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 63 - 32 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 95 - 64 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | ipv6_addr_dst, 127 - 96 |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | tun_flags | tos | ttl |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved | Ethertype |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Key |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ * | Reserved |
+ * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
+ */
+struct nfp_flower_ipv6_gre_tun {
+ struct nfp_flower_tun_ipv6 ipv6;
+ rte_be16_t tun_flags;
+ struct nfp_flower_tun_ip_ext ip_ext;
+ rte_be16_t reserved1;
+ rte_be16_t ethertype;
+ rte_be32_t tun_key;
+ rte_be32_t reserved2;
+};
+
struct nfp_fl_act_head {
uint8_t jump_id;
uint8_t len_lw;
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 115b9cbb92..0353eed499 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -565,6 +565,7 @@ nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr,
struct nfp_flower_ipv4_udp_tun *udp4;
struct nfp_flower_ipv6_udp_tun *udp6;
struct nfp_flower_ipv4_gre_tun *gre4;
+ struct nfp_flower_ipv6_gre_tun *gre6;
struct nfp_flower_meta_tci *meta_tci;
struct nfp_flower_ext_meta *ext_meta = NULL;
@@ -576,9 +577,15 @@ nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr,
key_layer2 = rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2);
if (key_layer2 & NFP_FLOWER_LAYER2_TUN_IPV6) {
- udp6 = (struct nfp_flower_ipv6_udp_tun *)(nfp_flow->payload.mask_data -
- sizeof(struct nfp_flower_ipv6_udp_tun));
- ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, udp6->ipv6.ipv6_dst);
+ if (key_layer2 & NFP_FLOWER_LAYER2_GRE) {
+ gre6 = (struct nfp_flower_ipv6_gre_tun *)(nfp_flow->payload.mask_data -
+ sizeof(struct nfp_flower_ipv6_gre_tun));
+ ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, gre6->ipv6.ipv6_dst);
+ } else {
+ udp6 = (struct nfp_flower_ipv6_udp_tun *)(nfp_flow->payload.mask_data -
+ sizeof(struct nfp_flower_ipv6_udp_tun));
+ ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, udp6->ipv6.ipv6_dst);
+ }
} else {
if (key_layer2 & NFP_FLOWER_LAYER2_GRE) {
gre4 = (struct nfp_flower_ipv4_gre_tun *)(nfp_flow->payload.mask_data -
@@ -1186,11 +1193,15 @@ nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
struct nfp_flower_meta_tci *meta_tci;
const struct rte_flow_item_ipv6 *spec;
const struct rte_flow_item_ipv6 *mask;
+ struct nfp_flower_ext_meta *ext_meta = NULL;
struct nfp_flower_ipv6_udp_tun *ipv6_udp_tun;
+ struct nfp_flower_ipv6_gre_tun *ipv6_gre_tun;
spec = item->spec;
mask = item->mask ? item->mask : proc->mask_default;
meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
if (is_outer_layer && nfp_flow_is_tunnel(nfp_flow)) {
if (spec == NULL) {
@@ -1199,15 +1210,29 @@ nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
}
hdr = is_mask ? &mask->hdr : &spec->hdr;
- ipv6_udp_tun = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
-
- ipv6_udp_tun->ip_ext.tos = (hdr->vtc_flow &
- RTE_IPV6_HDR_TC_MASK) >> RTE_IPV6_HDR_TC_SHIFT;
- ipv6_udp_tun->ip_ext.ttl = hdr->hop_limits;
- memcpy(ipv6_udp_tun->ipv6.ipv6_src, hdr->src_addr,
- sizeof(ipv6_udp_tun->ipv6.ipv6_src));
- memcpy(ipv6_udp_tun->ipv6.ipv6_dst, hdr->dst_addr,
- sizeof(ipv6_udp_tun->ipv6.ipv6_dst));
+
+ if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_GRE)) {
+ ipv6_gre_tun = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off;
+
+ ipv6_gre_tun->ip_ext.tos = (hdr->vtc_flow &
+ RTE_IPV6_HDR_TC_MASK) >> RTE_IPV6_HDR_TC_SHIFT;
+ ipv6_gre_tun->ip_ext.ttl = hdr->hop_limits;
+ memcpy(ipv6_gre_tun->ipv6.ipv6_src, hdr->src_addr,
+ sizeof(ipv6_gre_tun->ipv6.ipv6_src));
+ memcpy(ipv6_gre_tun->ipv6.ipv6_dst, hdr->dst_addr,
+ sizeof(ipv6_gre_tun->ipv6.ipv6_dst));
+ } else {
+ ipv6_udp_tun = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off;
+
+ ipv6_udp_tun->ip_ext.tos = (hdr->vtc_flow &
+ RTE_IPV6_HDR_TC_MASK) >> RTE_IPV6_HDR_TC_SHIFT;
+ ipv6_udp_tun->ip_ext.ttl = hdr->hop_limits;
+ memcpy(ipv6_udp_tun->ipv6.ipv6_src, hdr->src_addr,
+ sizeof(ipv6_udp_tun->ipv6.ipv6_src));
+ memcpy(ipv6_udp_tun->ipv6.ipv6_dst, hdr->dst_addr,
+ sizeof(ipv6_udp_tun->ipv6.ipv6_dst));
+ }
} else {
if (spec == NULL) {
PMD_DRV_LOG(DEBUG, "nfp flow merge ipv6: no item->spec!");
--
2.29.3
* [PATCH v3 22/26] net/nfp: support IPv4 NVGRE flow item
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
` (20 preceding siblings ...)
2022-10-25 7:59 ` [PATCH v3 21/26] net/nfp: prepare for IPv6 " Chaoyong He
@ 2022-10-25 7:59 ` Chaoyong He
2022-10-25 7:59 ` [PATCH v3 23/26] net/nfp: support IPv6 " Chaoyong He
` (4 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:59 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the corresponding logic to support the offload of
the IPv4 NVGRE item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
doc/guides/nics/features/nfp.ini | 2 +
drivers/net/nfp/nfp_flow.c | 99 +++++++++++++++++++++++++++++++-
2 files changed, 99 insertions(+), 2 deletions(-)
diff --git a/doc/guides/nics/features/nfp.ini b/doc/guides/nics/features/nfp.ini
index fe1cb971f1..9e075a680b 100644
--- a/doc/guides/nics/features/nfp.ini
+++ b/doc/guides/nics/features/nfp.ini
@@ -29,6 +29,8 @@ Usage doc = Y
[rte_flow items]
eth = Y
geneve = Y
+gre = Y
+gre_key = Y
ipv4 = Y
ipv6 = Y
port_id = Y
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 0353eed499..226fc7d590 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -820,6 +820,26 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
return -EINVAL;
}
break;
+ case RTE_FLOW_ITEM_TYPE_GRE:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_GRE detected");
+ /* Clear IPv4 bits */
+ key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV4;
+ key_ls->tun_type = NFP_FL_TUN_GRE;
+ key_ls->key_layer |= NFP_FLOWER_LAYER_EXT_META;
+ key_ls->key_layer_two |= NFP_FLOWER_LAYER2_GRE;
+ key_ls->key_size += sizeof(struct nfp_flower_ext_meta);
+ if (outer_ip4_flag) {
+ key_ls->key_size += sizeof(struct nfp_flower_ipv4_gre_tun);
+ /*
+ * The outer l3 layer information is
+ * in `struct nfp_flower_ipv4_gre_tun`
+ */
+ key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
+ }
+ break;
+ case RTE_FLOW_ITEM_TYPE_GRE_KEY:
+ PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_GRE_KEY detected");
+ break;
default:
PMD_DRV_LOG(ERR, "Item type %d not supported.", item->type);
return -ENOTSUP;
@@ -1540,6 +1560,62 @@ nfp_flow_merge_geneve(struct nfp_app_fw_flower *app_fw_flower,
return ret;
}
+static int
+nfp_flow_merge_gre(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ __rte_unused struct rte_flow *nfp_flow,
+ char **mbuf_off,
+ __rte_unused const struct rte_flow_item *item,
+ __rte_unused const struct nfp_flow_item_proc *proc,
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
+{
+ struct nfp_flower_ipv4_gre_tun *tun4;
+
+ /* NVGRE is the only supported GRE tunnel type */
+ tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
+ if (is_mask)
+ tun4->ethertype = rte_cpu_to_be_16(~0);
+ else
+ tun4->ethertype = rte_cpu_to_be_16(0x6558);
+
+ return 0;
+}
+
+static int
+nfp_flow_merge_gre_key(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+ __rte_unused struct rte_flow *nfp_flow,
+ char **mbuf_off,
+ const struct rte_flow_item *item,
+ __rte_unused const struct nfp_flow_item_proc *proc,
+ bool is_mask,
+ __rte_unused bool is_outer_layer)
+{
+ rte_be32_t tun_key;
+ const rte_be32_t *spec;
+ const rte_be32_t *mask;
+ struct nfp_flower_ipv4_gre_tun *tun4;
+
+ spec = item->spec;
+ if (spec == NULL) {
+ PMD_DRV_LOG(DEBUG, "nfp flow merge gre key: no item->spec!");
+ goto gre_key_end;
+ }
+
+ mask = item->mask ? item->mask : proc->mask_default;
+ tun_key = is_mask ? *mask : *spec;
+
+ tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
+ tun4->tun_key = tun_key;
+ tun4->tun_flags = rte_cpu_to_be_16(NFP_FL_GRE_FLAG_KEY);
+
+gre_key_end:
+ *mbuf_off += sizeof(struct nfp_flower_ipv4_gre_tun);
+
+ return 0;
+}
+
+const rte_be32_t nfp_flow_item_gre_key = 0xffffffff;
+
/* Graph of supported items and associated process function */
static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
[RTE_FLOW_ITEM_TYPE_END] = {
@@ -1580,7 +1656,8 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
[RTE_FLOW_ITEM_TYPE_IPV4] = {
.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_TCP,
RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_SCTP),
+ RTE_FLOW_ITEM_TYPE_SCTP,
+ RTE_FLOW_ITEM_TYPE_GRE),
.mask_support = &(const struct rte_flow_item_ipv4){
.hdr = {
.type_of_service = 0xff,
@@ -1671,6 +1748,23 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
.mask_sz = sizeof(struct rte_flow_item_geneve),
.merge = nfp_flow_merge_geneve,
},
+ [RTE_FLOW_ITEM_TYPE_GRE] = {
+ .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_GRE_KEY),
+ .mask_support = &(const struct rte_flow_item_gre){
+ .c_rsvd0_ver = RTE_BE16(0xa000),
+ .protocol = RTE_BE16(0xffff),
+ },
+ .mask_default = &rte_flow_item_gre_mask,
+ .mask_sz = sizeof(struct rte_flow_item_gre),
+ .merge = nfp_flow_merge_gre,
+ },
+ [RTE_FLOW_ITEM_TYPE_GRE_KEY] = {
+ .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH),
+ .mask_support = &nfp_flow_item_gre_key,
+ .mask_default = &nfp_flow_item_gre_key,
+ .mask_sz = sizeof(rte_be32_t),
+ .merge = nfp_flow_merge_gre_key,
+ },
};
static int
@@ -1728,7 +1822,8 @@ static bool
nfp_flow_is_tun_item(const struct rte_flow_item *item)
{
if (item->type == RTE_FLOW_ITEM_TYPE_VXLAN ||
- item->type == RTE_FLOW_ITEM_TYPE_GENEVE)
+ item->type == RTE_FLOW_ITEM_TYPE_GENEVE ||
+ item->type == RTE_FLOW_ITEM_TYPE_GRE_KEY)
return true;
return false;
--
2.29.3
* [PATCH v3 23/26] net/nfp: support IPv6 NVGRE flow item
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
` (21 preceding siblings ...)
2022-10-25 7:59 ` [PATCH v3 22/26] net/nfp: support IPv4 NVGRE flow item Chaoyong He
@ 2022-10-25 7:59 ` Chaoyong He
2022-10-25 7:59 ` [PATCH v3 24/26] net/nfp: support IPv4 NVGRE decap flow action Chaoyong He
` (3 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:59 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the corresponding logic to support the offload of
the IPv6 NVGRE item.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfp_flow.c | 73 ++++++++++++++++++++++++++++++--------
1 file changed, 59 insertions(+), 14 deletions(-)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 226fc7d590..78af7bcf0c 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -822,8 +822,9 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
break;
case RTE_FLOW_ITEM_TYPE_GRE:
PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_GRE detected");
- /* Clear IPv4 bits */
+ /* Clear IPv4 and IPv6 bits */
key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV4;
+ key_ls->key_layer &= ~NFP_FLOWER_LAYER_IPV6;
key_ls->tun_type = NFP_FL_TUN_GRE;
key_ls->key_layer |= NFP_FLOWER_LAYER_EXT_META;
key_ls->key_layer_two |= NFP_FLOWER_LAYER2_GRE;
@@ -835,6 +836,17 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[],
* in `struct nfp_flower_ipv4_gre_tun`
*/
key_ls->key_size -= sizeof(struct nfp_flower_ipv4);
+ } else if (outer_ip6_flag) {
+ key_ls->key_layer_two |= NFP_FLOWER_LAYER2_TUN_IPV6;
+ key_ls->key_size += sizeof(struct nfp_flower_ipv6_gre_tun);
+ /*
+ * The outer l3 layer information is
+ * in `struct nfp_flower_ipv6_gre_tun`
+ */
+ key_ls->key_size -= sizeof(struct nfp_flower_ipv6);
+ } else {
+ PMD_DRV_LOG(ERR, "No outer IP layer for GRE tunnel.");
+ return -1;
}
break;
case RTE_FLOW_ITEM_TYPE_GRE_KEY:
@@ -1562,38 +1574,59 @@ nfp_flow_merge_geneve(struct nfp_app_fw_flower *app_fw_flower,
static int
nfp_flow_merge_gre(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
- __rte_unused struct rte_flow *nfp_flow,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
__rte_unused const struct rte_flow_item *item,
__rte_unused const struct nfp_flow_item_proc *proc,
bool is_mask,
__rte_unused bool is_outer_layer)
{
+ struct nfp_flower_meta_tci *meta_tci;
+ struct nfp_flower_ext_meta *ext_meta;
struct nfp_flower_ipv4_gre_tun *tun4;
+ struct nfp_flower_ipv6_gre_tun *tun6;
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
/* NVGRE is the only supported GRE tunnel type */
- tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
- if (is_mask)
- tun4->ethertype = rte_cpu_to_be_16(~0);
- else
- tun4->ethertype = rte_cpu_to_be_16(0x6558);
+ if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_TUN_IPV6) {
+ tun6 = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off;
+ if (is_mask)
+ tun6->ethertype = rte_cpu_to_be_16(~0);
+ else
+ tun6->ethertype = rte_cpu_to_be_16(0x6558);
+ } else {
+ tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
+ if (is_mask)
+ tun4->ethertype = rte_cpu_to_be_16(~0);
+ else
+ tun4->ethertype = rte_cpu_to_be_16(0x6558);
+ }
return 0;
}
static int
nfp_flow_merge_gre_key(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
- __rte_unused struct rte_flow *nfp_flow,
+ struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
- __rte_unused const struct nfp_flow_item_proc *proc,
+ const struct nfp_flow_item_proc *proc,
bool is_mask,
__rte_unused bool is_outer_layer)
{
rte_be32_t tun_key;
const rte_be32_t *spec;
const rte_be32_t *mask;
+ struct nfp_flower_meta_tci *meta_tci;
+ struct nfp_flower_ext_meta *ext_meta;
struct nfp_flower_ipv4_gre_tun *tun4;
+ struct nfp_flower_ipv6_gre_tun *tun6;
+
+ meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
+ ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
spec = item->spec;
if (spec == NULL) {
@@ -1604,12 +1637,23 @@ nfp_flow_merge_gre_key(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
mask = item->mask ? item->mask : proc->mask_default;
tun_key = is_mask ? *mask : *spec;
- tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
- tun4->tun_key = tun_key;
- tun4->tun_flags = rte_cpu_to_be_16(NFP_FL_GRE_FLAG_KEY);
+ if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_TUN_IPV6) {
+ tun6 = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off;
+ tun6->tun_key = tun_key;
+ tun6->tun_flags = rte_cpu_to_be_16(NFP_FL_GRE_FLAG_KEY);
+ } else {
+ tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
+ tun4->tun_key = tun_key;
+ tun4->tun_flags = rte_cpu_to_be_16(NFP_FL_GRE_FLAG_KEY);
+ }
gre_key_end:
- *mbuf_off += sizeof(struct nfp_flower_ipv4_gre_tun);
+ if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) &
+ NFP_FLOWER_LAYER2_TUN_IPV6)
+ *mbuf_off += sizeof(struct nfp_flower_ipv6_gre_tun);
+ else
+ *mbuf_off += sizeof(struct nfp_flower_ipv4_gre_tun);
return 0;
}
@@ -1675,7 +1719,8 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
[RTE_FLOW_ITEM_TYPE_IPV6] = {
.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_TCP,
RTE_FLOW_ITEM_TYPE_UDP,
- RTE_FLOW_ITEM_TYPE_SCTP),
+ RTE_FLOW_ITEM_TYPE_SCTP,
+ RTE_FLOW_ITEM_TYPE_GRE),
.mask_support = &(const struct rte_flow_item_ipv6){
.hdr = {
.vtc_flow = RTE_BE32(0x0ff00000),
--
2.29.3
^ permalink raw reply [flat|nested] 88+ messages in thread
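The key-layer bookkeeping in the GRE hunk above (clear the plain IPv4/IPv6 bits, set the extended-metadata and GRE layer bits, then swap the plain L3 header size for the size of the combined GRE tunnel struct) can be sketched in isolation. This is a simplified model under assumed flag values and struct sizes, not the driver code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in layer flags mirroring the NFP_FLOWER_LAYER_* style; the
 * actual bit positions are hypothetical. */
#define LAYER_IPV4      (1u << 0)
#define LAYER_IPV6      (1u << 1)
#define LAYER_EXT_META  (1u << 2)
#define LAYER2_GRE      (1u << 0)
#define LAYER2_TUN_IPV6 (1u << 1)

struct key_ls {
	uint32_t key_layer;
	uint32_t key_layer_two;
	size_t key_size;
};

/* Mimic the GRE branch: the outer L3 header information lives inside
 * the GRE tunnel struct, so the plain IPv4/IPv6 contribution to the
 * key size is removed again after the tunnel struct is added. */
static int
gre_update_key_layers(struct key_ls *ls, int outer_ip4, int outer_ip6,
		size_t ip4_sz, size_t ip6_sz, size_t gre4_sz, size_t gre6_sz)
{
	ls->key_layer &= ~(LAYER_IPV4 | LAYER_IPV6);
	ls->key_layer |= LAYER_EXT_META;
	ls->key_layer_two |= LAYER2_GRE;

	if (outer_ip4) {
		ls->key_size += gre4_sz - ip4_sz;
	} else if (outer_ip6) {
		ls->key_layer_two |= LAYER2_TUN_IPV6;
		ls->key_size += gre6_sz - ip6_sz;
	} else {
		return -1; /* no outer IP layer for the GRE tunnel */
	}
	return 0;
}
```

The error branch matches the new `else` arm in the patch: a GRE item without a preceding outer IP item is rejected up front instead of producing an undersized key.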
* [PATCH v3 24/26] net/nfp: support IPv4 NVGRE decap flow action
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
` (22 preceding siblings ...)
2022-10-25 7:59 ` [PATCH v3 23/26] net/nfp: support IPv6 " Chaoyong He
@ 2022-10-25 7:59 ` Chaoyong He
2022-10-25 7:59 ` [PATCH v3 25/26] net/nfp: support IPv6 " Chaoyong He
` (2 subsequent siblings)
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:59 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the offload support for the decap action of the IPv4 NVGRE tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfp_flow.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 78af7bcf0c..d666446edf 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1609,7 +1609,7 @@ nfp_flow_merge_gre(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
}
static int
-nfp_flow_merge_gre_key(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
+nfp_flow_merge_gre_key(struct nfp_app_fw_flower *app_fw_flower,
struct rte_flow *nfp_flow,
char **mbuf_off,
const struct rte_flow_item *item,
@@ -1617,6 +1617,7 @@ nfp_flow_merge_gre_key(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
bool is_mask,
__rte_unused bool is_outer_layer)
{
+ int ret = 0;
rte_be32_t tun_key;
const rte_be32_t *spec;
const rte_be32_t *mask;
@@ -1646,6 +1647,8 @@ nfp_flow_merge_gre_key(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
tun4->tun_key = tun_key;
tun4->tun_flags = rte_cpu_to_be_16(NFP_FL_GRE_FLAG_KEY);
+ if (!is_mask)
+ ret = nfp_tun_add_ipv4_off(app_fw_flower, tun4->ipv4.dst);
}
gre_key_end:
@@ -1655,7 +1658,7 @@ nfp_flow_merge_gre_key(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
else
*mbuf_off += sizeof(struct nfp_flower_ipv4_gre_tun);
- return 0;
+ return ret;
}
const rte_be32_t nfp_flow_item_gre_key = 0xffffffff;
@@ -3831,6 +3834,7 @@ nfp_flow_tunnel_decap_set(__rte_unused struct rte_eth_dev *dev,
*num_of_actions = 1;
break;
case RTE_FLOW_ITEM_TYPE_GENEVE:
+ case RTE_FLOW_ITEM_TYPE_GRE:
nfp_action->type = RTE_FLOW_ACTION_TYPE_RAW_DECAP;
*pmd_actions = nfp_action;
*num_of_actions = 1;
--
2.29.3
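The `gre_key` merge above follows the usual rte_flow pattern: each item is merged twice, once for the unmasked data and once for the mask, and a missing mask falls back to a per-item default. A minimal sketch of that selection logic, with hypothetical values, could look like:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified model of the rte_flow spec/mask merge: on the is_mask pass
 * the mask (or the item's default mask) is written, otherwise the spec
 * value is written. A NULL spec means the item carries no key data. */
static uint32_t
merge_tun_key(const uint32_t *spec, const uint32_t *mask,
		uint32_t mask_default, int is_mask)
{
	if (spec == NULL)
		return 0; /* the real code logs this and skips the field */
	if (is_mask)
		return mask != NULL ? *mask : mask_default;
	return *spec;
}
```

In the driver the default mask for the GRE key is all-ones (`nfp_flow_item_gre_key = 0xffffffff`), so an item without an explicit mask matches the full 32-bit key.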
* [PATCH v3 25/26] net/nfp: support IPv6 NVGRE decap flow action
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
` (23 preceding siblings ...)
2022-10-25 7:59 ` [PATCH v3 24/26] net/nfp: support IPv4 NVGRE decap flow action Chaoyong He
@ 2022-10-25 7:59 ` Chaoyong He
2022-10-25 7:59 ` [PATCH v3 26/26] net/nfp: support new solution for tunnel decap action Chaoyong He
2022-10-25 11:42 ` [PATCH v3 00/26] add the extend rte_flow offload support of nfp PMD Ferruh Yigit
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:59 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
Add the offload support for the decap action of the IPv6 NVGRE tunnel.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/nfp_flow.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index d666446edf..93a9233b8b 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -1643,6 +1643,8 @@ nfp_flow_merge_gre_key(struct nfp_app_fw_flower *app_fw_flower,
tun6 = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off;
tun6->tun_key = tun_key;
tun6->tun_flags = rte_cpu_to_be_16(NFP_FL_GRE_FLAG_KEY);
+ if (!is_mask)
+ ret = nfp_tun_add_ipv6_off(app_fw_flower, tun6->ipv6.ipv6_dst);
} else {
tun4 = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off;
tun4->tun_key = tun_key;
--
2.29.3
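The `nfp_tun_add_ipv4_off()`/`nfp_tun_add_ipv6_off()` calls added by these two decap patches maintain a list of tunnel destination IPs already pushed to the firmware, reference counting duplicates so each address is offloaded only once. A simplified, hypothetical model of that bookkeeping (IPv4 only, cmsg sending omitted):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical stand-in for the driver's per-app tunnel IP list entry;
 * the IPv6 variant would store a 16-byte address instead. */
struct tun_ip_entry {
	uint32_t ipv4_dst;
	int ref_cnt;
	struct tun_ip_entry *next;
};

static int
tun_add_ip_off(struct tun_ip_entry **head, uint32_t dst)
{
	struct tun_ip_entry *e;

	/* Already offloaded: just bump the reference count. */
	for (e = *head; e != NULL; e = e->next) {
		if (e->ipv4_dst == dst) {
			e->ref_cnt++;
			return 0;
		}
	}

	e = calloc(1, sizeof(*e));
	if (e == NULL)
		return -1;
	e->ipv4_dst = dst;
	e->ref_cnt = 1;
	e->next = *head;
	*head = e;
	/* The real code would now send the updated IP list to the firmware. */
	return 0;
}
```

This is why the patches guard the call with `if (!is_mask)`: only the unmasked pass carries a real destination address worth offloading.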
* [PATCH v3 26/26] net/nfp: support new solution for tunnel decap action
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
` (24 preceding siblings ...)
2022-10-25 7:59 ` [PATCH v3 25/26] net/nfp: support IPv6 " Chaoyong He
@ 2022-10-25 7:59 ` Chaoyong He
2022-10-25 11:42 ` [PATCH v3 00/26] add the extend rte_flow offload support of nfp PMD Ferruh Yigit
26 siblings, 0 replies; 88+ messages in thread
From: Chaoyong He @ 2022-10-25 7:59 UTC (permalink / raw)
To: dev; +Cc: oss-drivers, niklas.soderlund, Chaoyong He
The new version of the flower firmware application adds support for
a new tunnel decap action solution.
It changes the structure of the tunnel neighbor table, and uses a
feature flag to indicate which tunnel decap action solution is used.
Add the logic to read the extra features from the firmware, and store
them in the app private structure.
Adjust the data structures and related logic so the PMD supports
both versions of the tunnel decap action solution.
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
---
drivers/net/nfp/flower/nfp_flower.c | 14 +++
drivers/net/nfp/flower/nfp_flower.h | 24 +++++
drivers/net/nfp/flower/nfp_flower_cmsg.c | 4 +
drivers/net/nfp/flower/nfp_flower_cmsg.h | 17 ++++
drivers/net/nfp/nfp_flow.c | 118 +++++++++++++++++++----
5 files changed, 157 insertions(+), 20 deletions(-)
diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 41b0fe2337..aa8199dde2 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -1074,6 +1074,8 @@ int
nfp_init_app_fw_flower(struct nfp_pf_dev *pf_dev)
{
int ret;
+ int err;
+ uint64_t ext_features;
unsigned int numa_node;
struct nfp_net_hw *pf_hw;
struct nfp_net_hw *ctrl_hw;
@@ -1115,6 +1117,18 @@ nfp_init_app_fw_flower(struct nfp_pf_dev *pf_dev)
goto vnic_cleanup;
}
+ /* Read the extra features */
+ ext_features = nfp_rtsym_read_le(pf_dev->sym_tbl, "_abi_flower_extra_features",
+ &err);
+ if (err != 0) {
+ PMD_INIT_LOG(ERR, "Couldn't read extra features from fw");
+ ret = -EIO;
+ goto pf_cpp_area_cleanup;
+ }
+
+ /* Store the extra features */
+ app_fw_flower->ext_features = ext_features;
+
/* Fill in the PF vNIC and populate app struct */
app_fw_flower->pf_hw = pf_hw;
pf_hw->ctrl_bar = pf_dev->ctrl_bar;
diff --git a/drivers/net/nfp/flower/nfp_flower.h b/drivers/net/nfp/flower/nfp_flower.h
index f199741190..c05a761a95 100644
--- a/drivers/net/nfp/flower/nfp_flower.h
+++ b/drivers/net/nfp/flower/nfp_flower.h
@@ -8,6 +8,21 @@
#include "../nfp_common.h"
+/* Extra features bitmap. */
+#define NFP_FL_FEATS_GENEVE RTE_BIT64(0)
+#define NFP_FL_NBI_MTU_SETTING RTE_BIT64(1)
+#define NFP_FL_FEATS_GENEVE_OPT RTE_BIT64(2)
+#define NFP_FL_FEATS_VLAN_PCP RTE_BIT64(3)
+#define NFP_FL_FEATS_VF_RLIM RTE_BIT64(4)
+#define NFP_FL_FEATS_FLOW_MOD RTE_BIT64(5)
+#define NFP_FL_FEATS_PRE_TUN_RULES RTE_BIT64(6)
+#define NFP_FL_FEATS_IPV6_TUN RTE_BIT64(7)
+#define NFP_FL_FEATS_VLAN_QINQ RTE_BIT64(8)
+#define NFP_FL_FEATS_QOS_PPS RTE_BIT64(9)
+#define NFP_FL_FEATS_QOS_METER RTE_BIT64(10)
+#define NFP_FL_FEATS_DECAP_V2 RTE_BIT64(11)
+#define NFP_FL_FEATS_HOST_ACK RTE_BIT64(31)
+
/*
* Flower fallback and ctrl path always adds and removes
* 8 bytes of prepended data. Tx descriptors must point
@@ -57,9 +72,18 @@ struct nfp_app_fw_flower {
/* service id of ctrl vnic service */
uint32_t ctrl_vnic_id;
+ /* Flower extra features */
+ uint64_t ext_features;
+
struct nfp_flow_priv *flow_priv;
};
+static inline bool
+nfp_flower_support_decap_v2(const struct nfp_app_fw_flower *app_fw_flower)
+{
+ return app_fw_flower->ext_features & NFP_FL_FEATS_DECAP_V2;
+}
+
int nfp_init_app_fw_flower(struct nfp_pf_dev *pf_dev);
int nfp_secondary_init_app_fw_flower(struct nfp_cpp *cpp);
uint16_t nfp_flower_pf_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c
index 76815cfe14..babdd8e36b 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.c
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c
@@ -263,6 +263,8 @@ nfp_flower_cmsg_tun_neigh_v4_rule(struct nfp_app_fw_flower *app_fw_flower,
}
msg_len = sizeof(struct nfp_flower_cmsg_tun_neigh_v4);
+ if (!nfp_flower_support_decap_v2(app_fw_flower))
+ msg_len -= sizeof(struct nfp_flower_tun_neigh_ext);
msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_TUN_NEIGH, msg_len);
memcpy(msg, payload, msg_len);
@@ -292,6 +294,8 @@ nfp_flower_cmsg_tun_neigh_v6_rule(struct nfp_app_fw_flower *app_fw_flower,
}
msg_len = sizeof(struct nfp_flower_cmsg_tun_neigh_v6);
+ if (!nfp_flower_support_decap_v2(app_fw_flower))
+ msg_len -= sizeof(struct nfp_flower_tun_neigh_ext);
msg = nfp_flower_cmsg_init(mbuf, NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V6, msg_len);
memcpy(msg, payload, msg_len);
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h
index a48da67222..04601cb0bd 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.h
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h
@@ -135,6 +135,21 @@ struct nfp_flower_tun_neigh {
rte_be32_t port_id;
};
+/*
+ * Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
+ * -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * +---------------------------------------------------------------+
+ * 0 | VLAN_TPID | VLAN_ID |
+ * +---------------------------------------------------------------+
+ * 1 | HOST_CTX |
+ * +---------------------------------------------------------------+
+ */
+struct nfp_flower_tun_neigh_ext {
+ rte_be16_t vlan_tpid;
+ rte_be16_t vlan_tci;
+ rte_be32_t host_ctx;
+};
+
/*
* NFP_FLOWER_CMSG_TYPE_TUN_NEIGH_V4
* Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
@@ -157,6 +172,7 @@ struct nfp_flower_cmsg_tun_neigh_v4 {
rte_be32_t dst_ipv4;
rte_be32_t src_ipv4;
struct nfp_flower_tun_neigh common;
+ struct nfp_flower_tun_neigh_ext ext;
};
/*
@@ -193,6 +209,7 @@ struct nfp_flower_cmsg_tun_neigh_v6 {
uint8_t dst_ipv6[16];
uint8_t src_ipv6[16];
struct nfp_flower_tun_neigh common;
+ struct nfp_flower_tun_neigh_ext ext;
};
#define NFP_TUN_PRE_TUN_RULE_DEL (1 << 0)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 93a9233b8b..af56e7bef2 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -2384,8 +2384,10 @@ nfp_flower_add_tun_neigh_v4_encap(struct nfp_app_fw_flower *app_fw_flower,
static int
nfp_flower_add_tun_neigh_v4_decap(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
struct rte_flow *nfp_flow)
{
+ bool exists = false;
struct nfp_fl_tun *tmp;
struct nfp_fl_tun *tun;
struct nfp_flow_priv *priv;
@@ -2419,11 +2421,17 @@ nfp_flower_add_tun_neigh_v4_decap(struct nfp_app_fw_flower *app_fw_flower,
LIST_FOREACH(tmp, &priv->nn_list, next) {
if (memcmp(&tmp->payload, &tun->payload, sizeof(struct nfp_fl_tun_entry)) == 0) {
tmp->ref_cnt++;
- return 0;
+ exists = true;
+ break;
}
}
- LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+ if (exists) {
+ if (!nfp_flower_support_decap_v2(app_fw_flower))
+ return 0;
+ } else {
+ LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+ }
memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v4));
payload.dst_ipv4 = ipv4->ipv4_src;
@@ -2432,6 +2440,17 @@ nfp_flower_add_tun_neigh_v4_decap(struct nfp_app_fw_flower *app_fw_flower,
memcpy(payload.common.src_mac, eth->mac_dst, RTE_ETHER_ADDR_LEN);
payload.common.port_id = port->in_port;
+ if (nfp_flower_support_decap_v2(app_fw_flower)) {
+ if (meta_tci->tci != 0) {
+ payload.ext.vlan_tci = meta_tci->tci;
+ payload.ext.vlan_tpid = 0x88a8;
+ } else {
+ payload.ext.vlan_tci = 0xffff;
+ payload.ext.vlan_tpid = 0xffff;
+ }
+ payload.ext.host_ctx = nfp_flow_meta->host_ctx_id;
+ }
+
return nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &payload);
}
@@ -2492,8 +2511,10 @@ nfp_flower_add_tun_neigh_v6_encap(struct nfp_app_fw_flower *app_fw_flower,
static int
nfp_flower_add_tun_neigh_v6_decap(struct nfp_app_fw_flower *app_fw_flower,
+ struct nfp_fl_rule_metadata *nfp_flow_meta,
struct rte_flow *nfp_flow)
{
+ bool exists = false;
struct nfp_fl_tun *tmp;
struct nfp_fl_tun *tun;
struct nfp_flow_priv *priv;
@@ -2527,11 +2548,17 @@ nfp_flower_add_tun_neigh_v6_decap(struct nfp_app_fw_flower *app_fw_flower,
LIST_FOREACH(tmp, &priv->nn_list, next) {
if (memcmp(&tmp->payload, &tun->payload, sizeof(struct nfp_fl_tun_entry)) == 0) {
tmp->ref_cnt++;
- return 0;
+ exists = true;
+ break;
}
}
- LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+ if (exists) {
+ if (!nfp_flower_support_decap_v2(app_fw_flower))
+ return 0;
+ } else {
+ LIST_INSERT_HEAD(&priv->nn_list, tun, next);
+ }
memset(&payload, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v6));
memcpy(payload.dst_ipv6, ipv6->ipv6_src, sizeof(payload.dst_ipv6));
@@ -2540,6 +2567,17 @@ nfp_flower_add_tun_neigh_v6_decap(struct nfp_app_fw_flower *app_fw_flower,
memcpy(payload.common.src_mac, eth->mac_dst, RTE_ETHER_ADDR_LEN);
payload.common.port_id = port->in_port;
+ if (nfp_flower_support_decap_v2(app_fw_flower)) {
+ if (meta_tci->tci != 0) {
+ payload.ext.vlan_tci = meta_tci->tci;
+ payload.ext.vlan_tpid = 0x88a8;
+ } else {
+ payload.ext.vlan_tci = 0xffff;
+ payload.ext.vlan_tpid = 0xffff;
+ }
+ payload.ext.host_ctx = nfp_flow_meta->host_ctx_id;
+ }
+
return nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &payload);
}
@@ -2557,12 +2595,14 @@ nfp_flower_del_tun_neigh_v6(struct nfp_app_fw_flower *app_fw_flower,
static int
nfp_flower_del_tun_neigh(struct nfp_app_fw_flower *app_fw_flower,
- struct rte_flow *nfp_flow)
+ struct rte_flow *nfp_flow,
+ bool decap_flag)
{
int ret;
bool flag = false;
struct nfp_fl_tun *tmp;
struct nfp_fl_tun *tun;
+ struct nfp_flower_in_port *port;
tun = &nfp_flow->tun;
LIST_FOREACH(tmp, &app_fw_flower->flow_priv->nn_list, next) {
@@ -2590,6 +2630,40 @@ nfp_flower_del_tun_neigh(struct nfp_app_fw_flower *app_fw_flower,
}
}
+ if (!decap_flag)
+ return 0;
+
+ port = (struct nfp_flower_in_port *)(nfp_flow->payload.unmasked_data +
+ sizeof(struct nfp_fl_rule_metadata) +
+ sizeof(struct nfp_flower_meta_tci));
+
+ if (tmp->payload.v6_flag != 0) {
+ struct nfp_flower_cmsg_tun_neigh_v6 nn_v6;
+ memset(&nn_v6, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v6));
+ memcpy(nn_v6.dst_ipv6, tmp->payload.dst.dst_ipv6, sizeof(nn_v6.dst_ipv6));
+ memcpy(nn_v6.src_ipv6, tmp->payload.src.src_ipv6, sizeof(nn_v6.src_ipv6));
+ memcpy(nn_v6.common.dst_mac, tmp->payload.dst_addr, RTE_ETHER_ADDR_LEN);
+ memcpy(nn_v6.common.src_mac, tmp->payload.src_addr, RTE_ETHER_ADDR_LEN);
+ nn_v6.common.port_id = port->in_port;
+
+ ret = nfp_flower_cmsg_tun_neigh_v6_rule(app_fw_flower, &nn_v6);
+ } else {
+ struct nfp_flower_cmsg_tun_neigh_v4 nn_v4;
+ memset(&nn_v4, 0, sizeof(struct nfp_flower_cmsg_tun_neigh_v4));
+ nn_v4.dst_ipv4 = tmp->payload.dst.dst_ipv4;
+ nn_v4.src_ipv4 = tmp->payload.src.src_ipv4;
+ memcpy(nn_v4.common.dst_mac, tmp->payload.dst_addr, RTE_ETHER_ADDR_LEN);
+ memcpy(nn_v4.common.src_mac, tmp->payload.src_addr, RTE_ETHER_ADDR_LEN);
+ nn_v4.common.port_id = port->in_port;
+
+ ret = nfp_flower_cmsg_tun_neigh_v4_rule(app_fw_flower, &nn_v4);
+ }
+
+ if (ret != 0) {
+ PMD_DRV_LOG(DEBUG, "Failed to send the nn entry");
+ return -EINVAL;
+ }
+
return 0;
}
@@ -2877,12 +2951,14 @@ nfp_pre_tun_table_check_del(struct nfp_flower_representor *repr,
goto free_entry;
}
- ret = nfp_flower_cmsg_pre_tunnel_rule(repr->app_fw_flower, nfp_flow_meta,
- nfp_mac_idx, true);
- if (ret != 0) {
- PMD_DRV_LOG(ERR, "Send pre tunnel rule failed");
- ret = -EINVAL;
- goto free_entry;
+ if (!nfp_flower_support_decap_v2(repr->app_fw_flower)) {
+ ret = nfp_flower_cmsg_pre_tunnel_rule(repr->app_fw_flower, nfp_flow_meta,
+ nfp_mac_idx, true);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Send pre tunnel rule failed");
+ ret = -EINVAL;
+ goto free_entry;
+ }
}
find_entry->ref_cnt = 1U;
@@ -2933,18 +3009,20 @@ nfp_flow_action_tunnel_decap(struct nfp_flower_representor *repr,
return -EINVAL;
}
- ret = nfp_flower_cmsg_pre_tunnel_rule(app_fw_flower, nfp_flow_meta,
- nfp_mac_idx, false);
- if (ret != 0) {
- PMD_DRV_LOG(ERR, "Send pre tunnel rule failed");
- return -EINVAL;
+ if (!nfp_flower_support_decap_v2(app_fw_flower)) {
+ ret = nfp_flower_cmsg_pre_tunnel_rule(app_fw_flower, nfp_flow_meta,
+ nfp_mac_idx, false);
+ if (ret != 0) {
+ PMD_DRV_LOG(ERR, "Send pre tunnel rule failed");
+ return -EINVAL;
+ }
}
meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4)
- return nfp_flower_add_tun_neigh_v4_decap(app_fw_flower, nfp_flow);
+ return nfp_flower_add_tun_neigh_v4_decap(app_fw_flower, nfp_flow_meta, nfp_flow);
else
- return nfp_flower_add_tun_neigh_v6_decap(app_fw_flower, nfp_flow);
+ return nfp_flower_add_tun_neigh_v6_decap(app_fw_flower, nfp_flow_meta, nfp_flow);
}
static int
@@ -3654,11 +3732,11 @@ nfp_flow_destroy(struct rte_eth_dev *dev,
break;
case NFP_FLOW_ENCAP:
/* Delete the entry from nn table */
- ret = nfp_flower_del_tun_neigh(app_fw_flower, nfp_flow);
+ ret = nfp_flower_del_tun_neigh(app_fw_flower, nfp_flow, false);
break;
case NFP_FLOW_DECAP:
/* Delete the entry from nn table */
- ret = nfp_flower_del_tun_neigh(app_fw_flower, nfp_flow);
+ ret = nfp_flower_del_tun_neigh(app_fw_flower, nfp_flow, true);
if (ret != 0)
goto exit;
--
2.29.3
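The feature-flag plumbing in this patch (read `_abi_flower_extra_features` once at init, store it in the app struct, then branch on `NFP_FL_FEATS_DECAP_V2`) boils down to a bitmask test plus a message-length adjustment for firmware that only speaks the old decap solution. A hedged, self-contained sketch, with simplified struct and flag names standing in for the real ones:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Same bit position as NFP_FL_FEATS_DECAP_V2 in the patch; the other
 * feature bits are omitted here. */
#define FL_FEATS_DECAP_V2 (UINT64_C(1) << 11)

struct app_fw_flower {
	uint64_t ext_features; /* read once from the firmware symbol table */
};

static bool
support_decap_v2(const struct app_fw_flower *app)
{
	return (app->ext_features & FL_FEATS_DECAP_V2) != 0;
}

/* The tunnel-neighbor cmsg shrinks by the size of the extension block
 * (VLAN info + host context) when the firmware lacks decap v2. */
static size_t
tun_neigh_msg_len(const struct app_fw_flower *app,
		size_t full_len, size_t ext_len)
{
	if (!support_decap_v2(app))
		return full_len - ext_len;
	return full_len;
}
```

Keeping the check in one inline helper, as the patch does with `nfp_flower_support_decap_v2()`, means every decap-related path (pre-tunnel rules, neighbor add/delete, cmsg sizing) branches on the same single source of truth.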
* Re: [PATCH v2 25/25] net/nfp: support new tunnel solution
2022-10-25 1:44 ` Chaoyong He
@ 2022-10-25 8:18 ` Ferruh Yigit
0 siblings, 0 replies; 88+ messages in thread
From: Ferruh Yigit @ 2022-10-25 8:18 UTC (permalink / raw)
To: Chaoyong He; +Cc: oss-drivers, Niklas Soderlund, dev
On 10/25/2022 2:44 AM, Chaoyong He wrote:
>> On 10/22/2022 9:24 AM, Chaoyong He wrote:
>>> The new version of flower firmware application add the support of a
>>> new tunnel solution.
>>>
>>> It changes the structure of tunnel neighbor, and use a feature flag to
>>> indicate which tunnel solution is used.
>>>
>>> Add the logic of read extra features from firmware, and store it in
>>> the app private structure.
>>>
>>> Adjust the data structure and related logic to make the PMD support
>>> both version of tunnel solutions.
>>>
>>> Signed-off-by: Chaoyong He<chaoyong.he@corigine.com>
>>> Reviewed-by: Niklas Söderlund<niklas.soderlund@corigine.com>
>>> ---
>>> drivers/net/nfp/flower/nfp_flower.c | 14 ++++
>>> drivers/net/nfp/flower/nfp_flower.h | 24 +++++++
>>> drivers/net/nfp/flower/nfp_flower_cmsg.c | 4 ++
>>> drivers/net/nfp/flower/nfp_flower_cmsg.h | 17 +++++
>>> drivers/net/nfp/nfp_flow.c | 118 +++++++++++++++++++++++++-
>> -----
>>> 5 files changed, 157 insertions(+), 20 deletions(-)
>>>
>>> diff --git a/drivers/net/nfp/flower/nfp_flower.c
>>> b/drivers/net/nfp/flower/nfp_flower.c
>>> index 41b0fe2..aa8199d 100644
>>> --- a/drivers/net/nfp/flower/nfp_flower.c
>>> +++ b/drivers/net/nfp/flower/nfp_flower.c
>>> @@ -1074,6 +1074,8 @@
>>> nfp_init_app_fw_flower(struct nfp_pf_dev *pf_dev)
>>> {
>>> int ret;
>>> + int err;
>>> + uint64_t ext_features;
>>> unsigned int numa_node;
>>> struct nfp_net_hw *pf_hw;
>>> struct nfp_net_hw *ctrl_hw;
>>> @@ -1115,6 +1117,18 @@
>>> goto vnic_cleanup;
>>> }
>>>
>>> + /* Read the extra features */
>>> + ext_features = nfp_rtsym_read_le(pf_dev->sym_tbl,
>> "_abi_flower_extra_features",
>>> + &err);
>>> + if (err != 0) {
>>> + PMD_INIT_LOG(ERR, "Couldn't read extra features from fw");
>>> + ret = -EIO;
>>> + goto pf_cpp_area_cleanup;
>>> + }
>>
>> Hi Chaoyong,
>>
>> It looks like there are two flavor of the flower firmware application, one with
>> 'extra_features' other without it.
>> Does this worth documenting in the driver documentation and the release
>> notes?
>
> Actually, it's just two different methods to process the tunnel decap action in the flower
> firmware application.
>
> The old version flower firmware application needs 'tunnel neighbor' and 'pre-tunnel' table
> to get needed information to decap the tunnel packet.
> While the new version flower firmware application extends the 'tunnel neighbor' table and
> does not need 'pre-tunnel' table anymore when decap the tunnel packet.
>
> The app which use the rte_flow know nothing about this difference.
> So, should we still explain this in the documentation and the release notes? I'm not quite sure
> about how details should we expose in these documents.
Thanks for clarification, if this is transparent to user/app may not
need to document.
* Re: [PATCH v3 00/26] add the extend rte_flow offload support of nfp PMD
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
` (25 preceding siblings ...)
2022-10-25 7:59 ` [PATCH v3 26/26] net/nfp: support new solution for tunnel decap action Chaoyong He
@ 2022-10-25 11:42 ` Ferruh Yigit
26 siblings, 0 replies; 88+ messages in thread
From: Ferruh Yigit @ 2022-10-25 11:42 UTC (permalink / raw)
To: Chaoyong He, dev; +Cc: oss-drivers, niklas.soderlund
On 10/25/2022 8:58 AM, Chaoyong He wrote:
> This is the third patch series to add the support of rte_flow offload for
> nfp PMD, includes:
> Add the offload support of decap/encap of VXLAN
> Add the offload support of decap/encap of GENEVE
> Add the offload support of decap/encap of NVGRE
>
> Changes since v2
> - Fix the inconsistency in 'nfp.ini' file.
> - Modify the commit message about the new solution of tunnel decap action.
> - Add a commit which fix the CPP bridge service stuck the DPDK app.
>
> Changes since v1
> - Delete the modificaiton of release note.
> - Modify the commit title.
> - Rebase to the lastest logic.
>
> Chaoyong He (26):
> net/nfp: fix the app stuck by CPP bridge service
> net/nfp: support IPv4 VXLAN flow item
> net/nfp: support IPv6 VXLAN flow item
> net/nfp: prepare for IPv4 tunnel encap flow action
> net/nfp: prepare for IPv6 tunnel encap flow action
> net/nfp: support IPv4 VXLAN encap flow action
> net/nfp: support IPv6 VXLAN encap flow action
> net/nfp: prepare for IPv4 UDP tunnel decap flow action
> net/nfp: prepare for IPv6 UDP tunnel decap flow action
> net/nfp: support IPv4 VXLAN decap flow action
> net/nfp: support IPv6 VXLAN decap flow action
> net/nfp: support IPv4 GENEVE encap flow action
> net/nfp: support IPv6 GENEVE encap flow action
> net/nfp: support IPv4 GENEVE flow item
> net/nfp: support IPv6 GENEVE flow item
> net/nfp: support IPv4 GENEVE decap flow action
> net/nfp: support IPv6 GENEVE decap flow action
> net/nfp: support IPv4 NVGRE encap flow action
> net/nfp: support IPv6 NVGRE encap flow action
> net/nfp: prepare for IPv4 GRE tunnel decap flow action
> net/nfp: prepare for IPv6 GRE tunnel decap flow action
> net/nfp: support IPv4 NVGRE flow item
> net/nfp: support IPv6 NVGRE flow item
> net/nfp: support IPv4 NVGRE decap flow action
> net/nfp: support IPv6 NVGRE decap flow action
> net/nfp: support new solution for tunnel decap action
Series applied to dpdk-next-net/main, thanks.
2022-10-18 3:22 [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Chaoyong He
2022-10-18 3:22 ` [PATCH 01/25] net/nfp: add the offload support of IPv4 VXLAN item Chaoyong He
2022-10-18 3:22 ` [PATCH 02/25] net/nfp: add the offload support of IPv6 " Chaoyong He
2022-10-18 3:22 ` [PATCH 03/25] net/nfp: prepare for the encap action of IPv4 tunnel Chaoyong He
2022-10-18 3:22 ` [PATCH 04/25] net/nfp: prepare for the encap action of IPv6 tunnel Chaoyong He
2022-10-18 3:22 ` [PATCH 05/25] net/nfp: add the offload support of IPv4 VXLAN encap action Chaoyong He
2022-10-18 3:22 ` [PATCH 06/25] net/nfp: add the offload support of IPv6 " Chaoyong He
2022-10-18 3:22 ` [PATCH 07/25] net/nfp: prepare for the decap action of IPv4 UDP tunnel Chaoyong He
2022-10-18 3:22 ` [PATCH 08/25] net/nfp: prepare for the decap action of IPv6 " Chaoyong He
2022-10-18 3:22 ` [PATCH 09/25] net/nfp: add the offload support of IPv4 VXLAN decap action Chaoyong He
2022-10-18 3:22 ` [PATCH 10/25] net/nfp: add the offload support of IPv6 " Chaoyong He
2022-10-18 3:22 ` [PATCH 11/25] net/nfp: add the offload support of IPv4 GENEVE encap action Chaoyong He
2022-10-18 3:22 ` [PATCH 12/25] net/nfp: add the offload support of IPv6 " Chaoyong He
2022-10-18 3:22 ` [PATCH 13/25] net/nfp: add the offload support of IPv4 GENEVE item Chaoyong He
2022-10-18 3:22 ` [PATCH 14/25] net/nfp: add the offload support of IPv6 " Chaoyong He
2022-10-18 3:22 ` [PATCH 15/25] net/nfp: add the offload support of IPv4 GENEVE decap action Chaoyong He
2022-10-18 3:22 ` [PATCH 16/25] net/nfp: add the offload support of IPv6 " Chaoyong He
2022-10-18 3:22 ` [PATCH 17/25] net/nfp: add the offload support of IPv4 NVGRE encap action Chaoyong He
2022-10-18 3:22 ` [PATCH 18/25] net/nfp: add the offload support of IPv6 " Chaoyong He
2022-10-18 3:22 ` [PATCH 19/25] net/nfp: prepare for the decap action of IPv4 GRE tunnel Chaoyong He
2022-10-18 3:22 ` [PATCH 20/25] net/nfp: prepare for the decap action of IPv6 " Chaoyong He
2022-10-18 3:22 ` [PATCH 21/25] net/nfp: add the offload support of IPv4 NVGRE item Chaoyong He
2022-10-18 3:22 ` [PATCH 22/25] net/nfp: add the offload support of IPv6 " Chaoyong He
2022-10-18 3:22 ` [PATCH 23/25] net/nfp: add the offload support of IPv4 NVGRE decap action Chaoyong He
2022-10-18 3:22 ` [PATCH 24/25] net/nfp: add the offload support of IPv6 " Chaoyong He
2022-10-18 3:22 ` [PATCH 25/25] net/nfp: add the support of new tunnel solution Chaoyong He
2022-10-21 13:37 ` [PATCH 00/25] add the extend rte_flow offload support of nfp PMD Ferruh Yigit
2022-10-21 13:39 ` Ferruh Yigit
2022-10-22 8:24 ` [PATCH v2 " Chaoyong He
2022-10-22 8:24 ` [PATCH v2 01/25] net/nfp: support IPv4 VXLAN flow item Chaoyong He
2022-10-22 8:24 ` [PATCH v2 02/25] net/nfp: support IPv6 " Chaoyong He
2022-10-22 8:24 ` [PATCH v2 03/25] net/nfp: prepare for IPv4 tunnel encap flow action Chaoyong He
2022-10-22 8:24 ` [PATCH v2 04/25] net/nfp: prepare for IPv6 " Chaoyong He
2022-10-22 8:24 ` [PATCH v2 05/25] net/nfp: support IPv4 VXLAN " Chaoyong He
2022-10-22 8:24 ` [PATCH v2 06/25] net/nfp: support IPv6 " Chaoyong He
2022-10-22 8:24 ` [PATCH v2 07/25] net/nfp: prepare for IPv4 UDP tunnel decap " Chaoyong He
2022-10-22 8:24 ` [PATCH v2 08/25] net/nfp: prepare for IPv6 " Chaoyong He
2022-10-22 8:24 ` [PATCH v2 09/25] net/nfp: support IPv4 VXLAN " Chaoyong He
2022-10-22 8:24 ` [PATCH v2 10/25] net/nfp: support IPv6 " Chaoyong He
2022-10-22 8:24 ` [PATCH v2 11/25] net/nfp: support IPv4 GENEVE encap " Chaoyong He
2022-10-22 8:24 ` [PATCH v2 12/25] net/nfp: support IPv6 " Chaoyong He
2022-10-22 8:24 ` [PATCH v2 13/25] net/nfp: support IPv4 GENEVE flow item Chaoyong He
2022-10-22 8:24 ` [PATCH v2 14/25] net/nfp: support IPv6 " Chaoyong He
2022-10-22 8:24 ` [PATCH v2 15/25] net/nfp: support IPv4 GENEVE decap flow action Chaoyong He
2022-10-22 8:24 ` [PATCH v2 16/25] net/nfp: support IPv6 " Chaoyong He
2022-10-22 8:24 ` [PATCH v2 17/25] net/nfp: support IPv4 NVGRE encap " Chaoyong He
2022-10-22 8:24 ` [PATCH v2 18/25] net/nfp: support IPv6 " Chaoyong He
2022-10-22 8:24 ` [PATCH v2 19/25] net/nfp: prepare for IPv4 GRE tunnel decap " Chaoyong He
2022-10-22 8:24 ` [PATCH v2 20/25] net/nfp: prepare for IPv6 " Chaoyong He
2022-10-22 8:24 ` [PATCH v2 21/25] net/nfp: support IPv4 NVGRE flow item Chaoyong He
2022-10-22 8:24 ` [PATCH v2 22/25] net/nfp: support IPv6 " Chaoyong He
2022-10-22 8:24 ` [PATCH v2 23/25] net/nfp: support IPv4 NVGRE decap flow action Chaoyong He
2022-10-22 8:24 ` [PATCH v2 24/25] net/nfp: support IPv6 " Chaoyong He
2022-10-22 8:24 ` [PATCH v2 25/25] net/nfp: support new tunnel solution Chaoyong He
2022-10-24 15:09 ` Ferruh Yigit
2022-10-25 1:44 ` Chaoyong He
2022-10-25 8:18 ` Ferruh Yigit
2022-10-24 15:07 ` [PATCH v2 00/25] add the extend rte_flow offload support of nfp PMD Ferruh Yigit
2022-10-25 3:17 ` Chaoyong He
2022-10-25 3:29 ` Chaoyong He
2022-10-25 7:58 ` [PATCH v3 00/26] " Chaoyong He
2022-10-25 7:58 ` [PATCH v3 01/26] net/nfp: fix the app stuck by CPP bridge service Chaoyong He
2022-10-25 7:58 ` [PATCH v3 02/26] net/nfp: support IPv4 VXLAN flow item Chaoyong He
2022-10-25 7:58 ` [PATCH v3 03/26] net/nfp: support IPv6 " Chaoyong He
2022-10-25 7:58 ` [PATCH v3 04/26] net/nfp: prepare for IPv4 tunnel encap flow action Chaoyong He
2022-10-25 7:58 ` [PATCH v3 05/26] net/nfp: prepare for IPv6 " Chaoyong He
2022-10-25 7:58 ` [PATCH v3 06/26] net/nfp: support IPv4 VXLAN " Chaoyong He
2022-10-25 7:58 ` [PATCH v3 07/26] net/nfp: support IPv6 " Chaoyong He
2022-10-25 7:59 ` [PATCH v3 08/26] net/nfp: prepare for IPv4 UDP tunnel decap " Chaoyong He
2022-10-25 7:59 ` [PATCH v3 09/26] net/nfp: prepare for IPv6 " Chaoyong He
2022-10-25 7:59 ` [PATCH v3 10/26] net/nfp: support IPv4 VXLAN " Chaoyong He
2022-10-25 7:59 ` [PATCH v3 11/26] net/nfp: support IPv6 " Chaoyong He
2022-10-25 7:59 ` [PATCH v3 12/26] net/nfp: support IPv4 GENEVE encap " Chaoyong He
2022-10-25 7:59 ` [PATCH v3 13/26] net/nfp: support IPv6 " Chaoyong He
2022-10-25 7:59 ` [PATCH v3 14/26] net/nfp: support IPv4 GENEVE flow item Chaoyong He
2022-10-25 7:59 ` [PATCH v3 15/26] net/nfp: support IPv6 " Chaoyong He
2022-10-25 7:59 ` [PATCH v3 16/26] net/nfp: support IPv4 GENEVE decap flow action Chaoyong He
2022-10-25 7:59 ` [PATCH v3 17/26] net/nfp: support IPv6 " Chaoyong He
2022-10-25 7:59 ` [PATCH v3 18/26] net/nfp: support IPv4 NVGRE encap " Chaoyong He
2022-10-25 7:59 ` [PATCH v3 19/26] net/nfp: support IPv6 " Chaoyong He
2022-10-25 7:59 ` [PATCH v3 20/26] net/nfp: prepare for IPv4 GRE tunnel decap " Chaoyong He
2022-10-25 7:59 ` [PATCH v3 21/26] net/nfp: prepare for IPv6 " Chaoyong He
2022-10-25 7:59 ` [PATCH v3 22/26] net/nfp: support IPv4 NVGRE flow item Chaoyong He
2022-10-25 7:59 ` [PATCH v3 23/26] net/nfp: support IPv6 " Chaoyong He
2022-10-25 7:59 ` [PATCH v3 24/26] net/nfp: support IPv4 NVGRE decap flow action Chaoyong He
2022-10-25 7:59 ` [PATCH v3 25/26] net/nfp: support IPv6 " Chaoyong He
2022-10-25 7:59 ` [PATCH v3 26/26] net/nfp: support new solution for tunnel decap action Chaoyong He
2022-10-25 11:42 ` [PATCH v3 00/26] add the extend rte_flow offload support of nfp PMD Ferruh Yigit