From: Gregory Etelson <getelson@nvidia.com>
To: <dev@dpdk.org>
Cc: <getelson@nvidia.com>, <matan@nvidia.com>, <rasland@nvidia.com>,
<elibr@nvidia.com>, <ozsh@nvidia.com>, <asafp@nvidia.com>,
Ori Kam <orika@nvidia.com>, Wenzhuo Lu <wenzhuo.lu@intel.com>,
Beilei Xing <beilei.xing@intel.com>,
Bernard Iremonger <bernard.iremonger@intel.com>
Subject: [dpdk-dev] [PATCH v8 3/3] app/testpmd: add commands for tunnel offload API
Date: Fri, 16 Oct 2020 15:51:07 +0300
Message-ID: <20201016125108.22997-4-getelson@nvidia.com>
In-Reply-To: <20201016125108.22997-1-getelson@nvidia.com>
The tunnel offload API provides a hardware-independent, unified model
for offloading tunneled traffic. Key model elements are:
- matches are applied to both outer and inner packet headers
during the entire offload procedure;
- the outer header of a partially offloaded packet is restored;
- the model is implemented as a set of helper functions.
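For reference, a minimal sketch of the helper call sequence (mirroring
the config.c changes below; port_id and mbuf are placeholders for the
application's port and received packet, error handling omitted):

struct rte_flow_tunnel tunnel = { .type = RTE_FLOW_ITEM_TYPE_VXLAN };
struct rte_flow_action *pmd_actions;
struct rte_flow_item *pmd_items;
uint32_t n_actions, n_items;
struct rte_flow_error error;
struct rte_flow_restore_info info;

/* Steering rule: get PMD actions to prepend to application actions. */
rte_flow_tunnel_decap_set(port_id, &tunnel, &pmd_actions,
			  &n_actions, &error);
/* Matching rule: get PMD items to prepend to the application pattern. */
rte_flow_tunnel_match(port_id, &tunnel, &pmd_items, &n_items, &error);
/* ... merge the arrays and call rte_flow_create() ... */
/* Release PMD resources once the rules are created. */
rte_flow_tunnel_action_decap_release(port_id, pmd_actions,
				     n_actions, &error);
rte_flow_tunnel_item_release(port_id, pmd_items, n_items, &error);
/* On a flow miss, recover tunnel info of the restored packet. */
rte_flow_get_restore_info(port_id, mbuf, &info, &error);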
Implementation details:
* Create an application tunnel:
flow tunnel create <port> type <tunnel type>
On success, the command creates an application tunnel object and returns
a tunnel descriptor. The tunnel descriptor is used in subsequent flow
creation commands to reference the tunnel.
* Create a tunnel steering flow rule:
the tunnel_set <tunnel descriptor> parameter is used with the steering
rule template.
* Create a tunnel matching flow rule:
the tunnel_match <tunnel descriptor> parameter is used with the matching
rule template.
* If the tunnel steering rule was offloaded, the outer header of a
partially offloaded packet is restored after a miss.
Example:
test packet=
<Ether dst=24:8a:07:8d:ae:d6 src=50:6b:4b:cc:fc:e2 type=IPv4 |
<IP version=4 ihl=5 proto=udp src=1.1.1.1 dst=1.1.1.10 |
<UDP sport=4789 dport=4789 len=58 chksum=0x7f7b |
<VXLAN NextProtocol=Ethernet vni=0x0 |
<Ether dst=24:aa:aa:aa:aa:d6 src=50:bb:bb:bb:bb:e2 type=IPv4 |
<IP version=4 ihl=5 proto=icmp src=2.2.2.2 dst=2.2.2.200 |
<ICMP type=echo-request code=0 chksum=0xf7ff id=0x0 seq=0x0 |>>>>>>>
>>> len(packet)
92
testpmd> flow flush 0
testpmd> port 0/queue 0: received 1 packets
src=50:6B:4B:CC:FC:E2 - dst=24:8A:07:8D:AE:D6 - type=0x0800 -
length=92
testpmd> flow tunnel create 0 type vxlan
port 0: flow tunnel #1 type vxlan
testpmd> flow create 0 ingress group 0 tunnel_set 1
pattern eth /ipv4 / udp dst is 4789 / vxlan / end
actions jump group 0 / end
Flow rule #0 created
testpmd> port 0/queue 0: received 1 packets
tunnel restore info: - vxlan tunnel - outer header present # <--
src=50:6B:4B:CC:FC:E2 - dst=24:8A:07:8D:AE:D6 - type=0x0800 -
length=92
testpmd> flow create 0 ingress group 0 tunnel_match 1
pattern eth / ipv4 / udp dst is 4789 / vxlan / eth / ipv4 /
end
actions set_mac_dst mac_addr 02:CA:FE:CA:FA:80 /
queue index 0 / end
Flow rule #1 created
testpmd> port 0/queue 0: received 1 packets
src=50:BB:BB:BB:BB:E2 - dst=02:CA:FE:CA:FA:80 - type=0x0800 -
length=42
* Destroy a flow tunnel:
flow tunnel destroy <port> id <tunnel id>
* Show existing flow tunnels:
flow tunnel list <port>
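An expected list/destroy session (output composed from the config.c
message formats below, not captured from a live run; tunnel #1 is the
stub created in the example above):
testpmd> flow tunnel list 0
port 0 tunnel #1 type=vxlan
testpmd> flow tunnel destroy 0 id 1
port 0: flow tunnel #1 destroyed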
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
---
v2:
* introduce testpmd support for tunnel offload API
v3:
* update flow tunnel commands
v5:
* rebase to next-net
v7:
* resolve "%lu" format differences between 32-bit and 64-bit Ubuntu builds
v8:
* use PRIu64 for values cast to uint64_t
---
app/test-pmd/cmdline_flow.c | 170 ++++++++++++-
app/test-pmd/config.c | 252 +++++++++++++++++++-
app/test-pmd/testpmd.c | 5 +-
app/test-pmd/testpmd.h | 34 ++-
app/test-pmd/util.c | 35 ++-
doc/guides/testpmd_app_ug/testpmd_funcs.rst | 49 ++++
6 files changed, 532 insertions(+), 13 deletions(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 00c70a144a..b9a1f7178a 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -74,6 +74,14 @@ enum index {
LIST,
AGED,
ISOLATE,
+ TUNNEL,
+
+ /* Tunnel arguments. */
+ TUNNEL_CREATE,
+ TUNNEL_CREATE_TYPE,
+ TUNNEL_LIST,
+ TUNNEL_DESTROY,
+ TUNNEL_DESTROY_ID,
/* Destroy arguments. */
DESTROY_RULE,
@@ -93,6 +101,8 @@ enum index {
INGRESS,
EGRESS,
TRANSFER,
+ TUNNEL_SET,
+ TUNNEL_MATCH,
/* Shared action arguments */
SHARED_ACTION_CREATE,
@@ -713,6 +723,7 @@ struct buffer {
} sa; /* Shared action query arguments */
struct {
struct rte_flow_attr attr;
+ struct tunnel_ops tunnel_ops;
struct rte_flow_item *pattern;
struct rte_flow_action *actions;
uint32_t pattern_n;
@@ -789,10 +800,32 @@ static const enum index next_vc_attr[] = {
INGRESS,
EGRESS,
TRANSFER,
+ TUNNEL_SET,
+ TUNNEL_MATCH,
PATTERN,
ZERO,
};
+static const enum index tunnel_create_attr[] = {
+ TUNNEL_CREATE,
+ TUNNEL_CREATE_TYPE,
+ END,
+ ZERO,
+};
+
+static const enum index tunnel_destroy_attr[] = {
+ TUNNEL_DESTROY,
+ TUNNEL_DESTROY_ID,
+ END,
+ ZERO,
+};
+
+static const enum index tunnel_list_attr[] = {
+ TUNNEL_LIST,
+ END,
+ ZERO,
+};
+
static const enum index next_destroy_attr[] = {
DESTROY_RULE,
END,
@@ -1643,6 +1676,9 @@ static int parse_aged(struct context *, const struct token *,
static int parse_isolate(struct context *, const struct token *,
const char *, unsigned int,
void *, unsigned int);
+static int parse_tunnel(struct context *, const struct token *,
+ const char *, unsigned int,
+ void *, unsigned int);
static int parse_int(struct context *, const struct token *,
const char *, unsigned int,
void *, unsigned int);
@@ -1844,7 +1880,8 @@ static const struct token token_list[] = {
LIST,
AGED,
QUERY,
- ISOLATE)),
+ ISOLATE,
+ TUNNEL)),
.call = parse_init,
},
/* Top-level command. */
@@ -1955,6 +1992,49 @@ static const struct token token_list[] = {
ARGS_ENTRY(struct buffer, port)),
.call = parse_isolate,
},
+ [TUNNEL] = {
+ .name = "tunnel",
+ .help = "new tunnel API",
+ .next = NEXT(NEXT_ENTRY
+ (TUNNEL_CREATE, TUNNEL_LIST, TUNNEL_DESTROY)),
+ .call = parse_tunnel,
+ },
+ /* Tunnel arguments. */
+ [TUNNEL_CREATE] = {
+ .name = "create",
+ .help = "create new tunnel object",
+ .next = NEXT(tunnel_create_attr, NEXT_ENTRY(PORT_ID)),
+ .args = ARGS(ARGS_ENTRY(struct buffer, port)),
+ .call = parse_tunnel,
+ },
+ [TUNNEL_CREATE_TYPE] = {
+ .name = "type",
+ .help = "create new tunnel",
+ .next = NEXT(tunnel_create_attr, NEXT_ENTRY(FILE_PATH)),
+ .args = ARGS(ARGS_ENTRY(struct tunnel_ops, type)),
+ .call = parse_tunnel,
+ },
+ [TUNNEL_DESTROY] = {
+ .name = "destroy",
+ .help = "destroy tunel",
+ .next = NEXT(tunnel_destroy_attr, NEXT_ENTRY(PORT_ID)),
+ .args = ARGS(ARGS_ENTRY(struct buffer, port)),
+ .call = parse_tunnel,
+ },
+ [TUNNEL_DESTROY_ID] = {
+ .name = "id",
+ .help = "tunnel identifier to testroy",
+ .next = NEXT(tunnel_destroy_attr, NEXT_ENTRY(UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY(struct tunnel_ops, id)),
+ .call = parse_tunnel,
+ },
+ [TUNNEL_LIST] = {
+ .name = "list",
+ .help = "list existing tunnels",
+ .next = NEXT(tunnel_list_attr, NEXT_ENTRY(PORT_ID)),
+ .args = ARGS(ARGS_ENTRY(struct buffer, port)),
+ .call = parse_tunnel,
+ },
/* Destroy arguments. */
[DESTROY_RULE] = {
.name = "rule",
@@ -2018,6 +2098,20 @@ static const struct token token_list[] = {
.next = NEXT(next_vc_attr),
.call = parse_vc,
},
+ [TUNNEL_SET] = {
+ .name = "tunnel_set",
+ .help = "tunnel steer rule",
+ .next = NEXT(next_vc_attr, NEXT_ENTRY(UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY(struct tunnel_ops, id)),
+ .call = parse_vc,
+ },
+ [TUNNEL_MATCH] = {
+ .name = "tunnel_match",
+ .help = "tunnel match rule",
+ .next = NEXT(next_vc_attr, NEXT_ENTRY(UNSIGNED)),
+ .args = ARGS(ARGS_ENTRY(struct tunnel_ops, id)),
+ .call = parse_vc,
+ },
/* Validate/create pattern. */
[PATTERN] = {
.name = "pattern",
@@ -4495,12 +4589,28 @@ parse_vc(struct context *ctx, const struct token *token,
return len;
}
ctx->objdata = 0;
- ctx->object = &out->args.vc.attr;
+ switch (ctx->curr) {
+ default:
+ ctx->object = &out->args.vc.attr;
+ break;
+ case TUNNEL_SET:
+ case TUNNEL_MATCH:
+ ctx->object = &out->args.vc.tunnel_ops;
+ break;
+ }
ctx->objmask = NULL;
switch (ctx->curr) {
case GROUP:
case PRIORITY:
return len;
+ case TUNNEL_SET:
+ out->args.vc.tunnel_ops.enabled = 1;
+ out->args.vc.tunnel_ops.actions = 1;
+ return len;
+ case TUNNEL_MATCH:
+ out->args.vc.tunnel_ops.enabled = 1;
+ out->args.vc.tunnel_ops.items = 1;
+ return len;
case INGRESS:
out->args.vc.attr.ingress = 1;
return len;
@@ -6108,6 +6218,47 @@ parse_isolate(struct context *ctx, const struct token *token,
return len;
}
+static int
+parse_tunnel(struct context *ctx, const struct token *token,
+ const char *str, unsigned int len,
+ void *buf, unsigned int size)
+{
+ struct buffer *out = buf;
+
+ /* Token name must match. */
+ if (parse_default(ctx, token, str, len, NULL, 0) < 0)
+ return -1;
+ /* Nothing else to do if there is no buffer. */
+ if (!out)
+ return len;
+ if (!out->command) {
+ if (ctx->curr != TUNNEL)
+ return -1;
+ if (sizeof(*out) > size)
+ return -1;
+ out->command = ctx->curr;
+ ctx->objdata = 0;
+ ctx->object = out;
+ ctx->objmask = NULL;
+ } else {
+ switch (ctx->curr) {
+ default:
+ break;
+ case TUNNEL_CREATE:
+ case TUNNEL_DESTROY:
+ case TUNNEL_LIST:
+ out->command = ctx->curr;
+ break;
+ case TUNNEL_CREATE_TYPE:
+ case TUNNEL_DESTROY_ID:
+ ctx->object = &out->args.vc.tunnel_ops;
+ break;
+ }
+ }
+
+ return len;
+}
+
/**
* Parse signed/unsigned integers 8 to 64-bit long.
*
@@ -7148,11 +7299,13 @@ cmd_flow_parsed(const struct buffer *in)
break;
case VALIDATE:
port_flow_validate(in->port, &in->args.vc.attr,
- in->args.vc.pattern, in->args.vc.actions);
+ in->args.vc.pattern, in->args.vc.actions,
+ &in->args.vc.tunnel_ops);
break;
case CREATE:
port_flow_create(in->port, &in->args.vc.attr,
- in->args.vc.pattern, in->args.vc.actions);
+ in->args.vc.pattern, in->args.vc.actions,
+ &in->args.vc.tunnel_ops);
break;
case DESTROY:
port_flow_destroy(in->port, in->args.destroy.rule_n,
@@ -7178,6 +7331,15 @@ cmd_flow_parsed(const struct buffer *in)
case AGED:
port_flow_aged(in->port, in->args.aged.destroy);
break;
+ case TUNNEL_CREATE:
+ port_flow_tunnel_create(in->port, &in->args.vc.tunnel_ops);
+ break;
+ case TUNNEL_DESTROY:
+ port_flow_tunnel_destroy(in->port, in->args.vc.tunnel_ops.id);
+ break;
+ case TUNNEL_LIST:
+ port_flow_tunnel_list(in->port);
+ break;
default:
break;
}
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 2c00b55440..c9505e5661 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -1521,6 +1521,115 @@ port_mtu_set(portid_t port_id, uint16_t mtu)
/* Generic flow management functions. */
+static struct port_flow_tunnel *
+port_flow_locate_tunnel_id(struct rte_port *port, uint32_t port_tunnel_id)
+{
+ struct port_flow_tunnel *flow_tunnel;
+
+ LIST_FOREACH(flow_tunnel, &port->flow_tunnel_list, chain) {
+ if (flow_tunnel->id == port_tunnel_id)
+ goto out;
+ }
+ flow_tunnel = NULL;
+
+out:
+ return flow_tunnel;
+}
+
+const char *
+port_flow_tunnel_type(struct rte_flow_tunnel *tunnel)
+{
+ const char *type;
+ switch (tunnel->type) {
+ default:
+ type = "unknown";
+ break;
+ case RTE_FLOW_ITEM_TYPE_VXLAN:
+ type = "vxlan";
+ break;
+ }
+
+ return type;
+}
+
+struct port_flow_tunnel *
+port_flow_locate_tunnel(uint16_t port_id, struct rte_flow_tunnel *tun)
+{
+ struct rte_port *port = &ports[port_id];
+ struct port_flow_tunnel *flow_tunnel;
+
+ LIST_FOREACH(flow_tunnel, &port->flow_tunnel_list, chain) {
+ if (!memcmp(&flow_tunnel->tunnel, tun, sizeof(*tun)))
+ goto out;
+ }
+ flow_tunnel = NULL;
+
+out:
+ return flow_tunnel;
+}
+
+void port_flow_tunnel_list(portid_t port_id)
+{
+ struct rte_port *port = &ports[port_id];
+ struct port_flow_tunnel *flt;
+
+ LIST_FOREACH(flt, &port->flow_tunnel_list, chain) {
+ printf("port %u tunnel #%u type=%s",
+ port_id, flt->id, port_flow_tunnel_type(&flt->tunnel));
+ if (flt->tunnel.tun_id)
+ printf(" id=%" PRIu64, flt->tunnel.tun_id);
+ printf("\n");
+ }
+}
+
+void port_flow_tunnel_destroy(portid_t port_id, uint32_t tunnel_id)
+{
+ struct rte_port *port = &ports[port_id];
+ struct port_flow_tunnel *flt;
+
+ LIST_FOREACH(flt, &port->flow_tunnel_list, chain) {
+ if (flt->id == tunnel_id)
+ break;
+ }
+ if (flt) {
+ LIST_REMOVE(flt, chain);
+ free(flt);
+ printf("port %u: flow tunnel #%u destroyed\n",
+ port_id, tunnel_id);
+ }
+}
+
+void port_flow_tunnel_create(portid_t port_id, const struct tunnel_ops *ops)
+{
+ struct rte_port *port = &ports[port_id];
+ enum rte_flow_item_type type;
+ struct port_flow_tunnel *flt;
+
+ if (!strcmp(ops->type, "vxlan"))
+ type = RTE_FLOW_ITEM_TYPE_VXLAN;
+ else {
+ printf("cannot offload \"%s\" tunnel type\n", ops->type);
+ return;
+ }
+ LIST_FOREACH(flt, &port->flow_tunnel_list, chain) {
+ if (flt->tunnel.type == type)
+ break;
+ }
+ if (!flt) {
+ flt = calloc(1, sizeof(*flt));
+ if (!flt) {
+ printf("failed to allocate port flt object\n");
+ return;
+ }
+ flt->tunnel.type = type;
+ flt->id = LIST_EMPTY(&port->flow_tunnel_list) ? 1 :
+ LIST_FIRST(&port->flow_tunnel_list)->id + 1;
+ LIST_INSERT_HEAD(&port->flow_tunnel_list, flt, chain);
+ }
+ printf("port %d: flow tunnel #%u type %s\n",
+ port_id, flt->id, ops->type);
+}
+
/** Generate a port_flow entry from attributes/pattern/actions. */
static struct port_flow *
port_flow_new(const struct rte_flow_attr *attr,
@@ -1860,20 +1969,137 @@ port_shared_action_query(portid_t port_id, uint32_t id)
}
return ret;
}
+static struct port_flow_tunnel *
+port_flow_tunnel_offload_cmd_prep(portid_t port_id,
+ const struct rte_flow_item *pattern,
+ const struct rte_flow_action *actions,
+ const struct tunnel_ops *tunnel_ops)
+{
+ int ret;
+ struct rte_port *port;
+ struct port_flow_tunnel *pft;
+ struct rte_flow_error error;
+
+ port = &ports[port_id];
+ pft = port_flow_locate_tunnel_id(port, tunnel_ops->id);
+ if (!pft) {
+ printf("failed to locate port flow tunnel #%u\n",
+ tunnel_ops->id);
+ return NULL;
+ }
+ if (tunnel_ops->actions) {
+ uint32_t num_actions;
+ const struct rte_flow_action *aptr;
+
+ ret = rte_flow_tunnel_decap_set(port_id, &pft->tunnel,
+ &pft->pmd_actions,
+ &pft->num_pmd_actions,
+ &error);
+ if (ret) {
+ port_flow_complain(&error);
+ return NULL;
+ }
+ for (aptr = actions, num_actions = 1;
+ aptr->type != RTE_FLOW_ACTION_TYPE_END;
+ aptr++, num_actions++);
+ pft->actions = malloc(
+ (num_actions + pft->num_pmd_actions) *
+ sizeof(actions[0]));
+ if (!pft->actions) {
+ rte_flow_tunnel_action_decap_release(
+ port_id, pft->pmd_actions,
+ pft->num_pmd_actions, &error);
+ return NULL;
+ }
+ rte_memcpy(pft->actions, pft->pmd_actions,
+ pft->num_pmd_actions * sizeof(actions[0]));
+ rte_memcpy(pft->actions + pft->num_pmd_actions, actions,
+ num_actions * sizeof(actions[0]));
+ }
+ if (tunnel_ops->items) {
+ uint32_t num_items;
+ const struct rte_flow_item *iptr;
+
+ ret = rte_flow_tunnel_match(port_id, &pft->tunnel,
+ &pft->pmd_items,
+ &pft->num_pmd_items,
+ &error);
+ if (ret) {
+ port_flow_complain(&error);
+ return NULL;
+ }
+ for (iptr = pattern, num_items = 1;
+ iptr->type != RTE_FLOW_ITEM_TYPE_END;
+ iptr++, num_items++);
+ pft->items = malloc((num_items + pft->num_pmd_items) *
+ sizeof(pattern[0]));
+ if (!pft->items) {
+ rte_flow_tunnel_item_release(
+ port_id, pft->pmd_items,
+ pft->num_pmd_items, &error);
+ return NULL;
+ }
+ rte_memcpy(pft->items, pft->pmd_items,
+ pft->num_pmd_items * sizeof(pattern[0]));
+ rte_memcpy(pft->items + pft->num_pmd_items, pattern,
+ num_items * sizeof(pattern[0]));
+ }
+
+ return pft;
+}
+
+static void
+port_flow_tunnel_offload_cmd_release(portid_t port_id,
+ const struct tunnel_ops *tunnel_ops,
+ struct port_flow_tunnel *pft)
+{
+ struct rte_flow_error error;
+
+ if (tunnel_ops->actions) {
+ free(pft->actions);
+ rte_flow_tunnel_action_decap_release(
+ port_id, pft->pmd_actions,
+ pft->num_pmd_actions, &error);
+ pft->actions = NULL;
+ pft->pmd_actions = NULL;
+ }
+ if (tunnel_ops->items) {
+ free(pft->items);
+ rte_flow_tunnel_item_release(port_id, pft->pmd_items,
+ pft->num_pmd_items,
+ &error);
+ pft->items = NULL;
+ pft->pmd_items = NULL;
+ }
+}
/** Validate flow rule. */
int
port_flow_validate(portid_t port_id,
const struct rte_flow_attr *attr,
const struct rte_flow_item *pattern,
- const struct rte_flow_action *actions)
+ const struct rte_flow_action *actions,
+ const struct tunnel_ops *tunnel_ops)
{
struct rte_flow_error error;
+ struct port_flow_tunnel *pft = NULL;
/* Poisoning to make sure PMDs update it in case of error. */
memset(&error, 0x11, sizeof(error));
+ if (tunnel_ops->enabled) {
+ pft = port_flow_tunnel_offload_cmd_prep(port_id, pattern,
+ actions, tunnel_ops);
+ if (!pft)
+ return -ENOENT;
+ if (pft->items)
+ pattern = pft->items;
+ if (pft->actions)
+ actions = pft->actions;
+ }
if (rte_flow_validate(port_id, attr, pattern, actions, &error))
return port_flow_complain(&error);
+ if (tunnel_ops->enabled)
+ port_flow_tunnel_offload_cmd_release(port_id, tunnel_ops, pft);
printf("Flow rule validated\n");
return 0;
}
@@ -1903,13 +2129,15 @@ int
port_flow_create(portid_t port_id,
const struct rte_flow_attr *attr,
const struct rte_flow_item *pattern,
- const struct rte_flow_action *actions)
+ const struct rte_flow_action *actions,
+ const struct tunnel_ops *tunnel_ops)
{
struct rte_flow *flow;
struct rte_port *port;
struct port_flow *pf;
uint32_t id = 0;
struct rte_flow_error error;
+ struct port_flow_tunnel *pft = NULL;
port = &ports[port_id];
if (port->flow_list) {
@@ -1920,6 +2148,16 @@ port_flow_create(portid_t port_id,
}
id = port->flow_list->id + 1;
}
+ if (tunnel_ops->enabled) {
+ pft = port_flow_tunnel_offload_cmd_prep(port_id, pattern,
+ actions, tunnel_ops);
+ if (!pft)
+ return -ENOENT;
+ if (pft->items)
+ pattern = pft->items;
+ if (pft->actions)
+ actions = pft->actions;
+ }
pf = port_flow_new(attr, pattern, actions, &error);
if (!pf)
return port_flow_complain(&error);
@@ -1935,6 +2173,8 @@ port_flow_create(portid_t port_id,
pf->id = id;
pf->flow = flow;
port->flow_list = pf;
+ if (tunnel_ops->enabled)
+ port_flow_tunnel_offload_cmd_release(port_id, tunnel_ops, pft);
printf("Flow rule #%u created\n", pf->id);
return 0;
}
@@ -2244,7 +2484,9 @@ port_flow_list(portid_t port_id, uint32_t n, const uint32_t *group)
pf->rule.attr->egress ? 'e' : '-',
pf->rule.attr->transfer ? 't' : '-');
while (item->type != RTE_FLOW_ITEM_TYPE_END) {
- if (rte_flow_conv(RTE_FLOW_CONV_OP_ITEM_NAME_PTR,
+ if ((uint32_t)item->type > INT_MAX)
+ name = "PMD_INTERNAL";
+ else if (rte_flow_conv(RTE_FLOW_CONV_OP_ITEM_NAME_PTR,
&name, sizeof(name),
(void *)(uintptr_t)item->type,
NULL) <= 0)
@@ -2255,7 +2497,9 @@ port_flow_list(portid_t port_id, uint32_t n, const uint32_t *group)
}
printf("=>");
while (action->type != RTE_FLOW_ACTION_TYPE_END) {
- if (rte_flow_conv(RTE_FLOW_CONV_OP_ACTION_NAME_PTR,
+ if ((uint32_t)action->type > INT_MAX)
+ name = "PMD_INTERNAL";
+ else if (rte_flow_conv(RTE_FLOW_CONV_OP_ACTION_NAME_PTR,
&name, sizeof(name),
(void *)(uintptr_t)action->type,
NULL) <= 0)
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 6caba60988..333904d686 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -3684,6 +3684,8 @@ init_port_dcb_config(portid_t pid,
static void
init_port(void)
{
+ int i;
+
/* Configuration of Ethernet ports. */
ports = rte_zmalloc("testpmd: ports",
sizeof(struct rte_port) * RTE_MAX_ETHPORTS,
@@ -3693,7 +3695,8 @@ init_port(void)
"rte_zmalloc(%d struct rte_port) failed\n",
RTE_MAX_ETHPORTS);
}
-
+ for (i = 0; i < RTE_MAX_ETHPORTS; i++)
+ LIST_INIT(&ports[i].flow_tunnel_list);
/* Initialize ports NUMA structures */
memset(port_numa, NUMA_NO_CONFIG, RTE_MAX_ETHPORTS);
memset(rxring_numa, NUMA_NO_CONFIG, RTE_MAX_ETHPORTS);
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index f8b0a3517d..5238ac3dd5 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -12,6 +12,7 @@
#include <rte_gro.h>
#include <rte_gso.h>
#include <cmdline.h>
+#include <sys/queue.h>
#define RTE_PORT_ALL (~(portid_t)0x0)
@@ -150,6 +151,26 @@ struct port_shared_action {
struct rte_flow_shared_action *action; /**< Shared action handle. */
};
+struct port_flow_tunnel {
+ LIST_ENTRY(port_flow_tunnel) chain;
+ struct rte_flow_action *pmd_actions;
+ struct rte_flow_item *pmd_items;
+ uint32_t id;
+ uint32_t num_pmd_actions;
+ uint32_t num_pmd_items;
+ struct rte_flow_tunnel tunnel;
+ struct rte_flow_action *actions;
+ struct rte_flow_item *items;
+};
+
+struct tunnel_ops {
+ uint32_t id;
+ char type[16];
+ uint32_t enabled:1;
+ uint32_t actions:1;
+ uint32_t items:1;
+};
+
/**
* The data structure associated with each port.
*/
@@ -182,6 +203,7 @@ struct rte_port {
struct port_flow *flow_list; /**< Associated flows. */
struct port_shared_action *actions_list;
/**< Associated shared actions. */
+ LIST_HEAD(, port_flow_tunnel) flow_tunnel_list;
const struct rte_eth_rxtx_callback *rx_dump_cb[RTE_MAX_QUEUES_PER_PORT+1];
const struct rte_eth_rxtx_callback *tx_dump_cb[RTE_MAX_QUEUES_PER_PORT+1];
/**< metadata value to insert in Tx packets. */
@@ -773,11 +795,13 @@ int port_shared_action_update(portid_t port_id, uint32_t id,
int port_flow_validate(portid_t port_id,
const struct rte_flow_attr *attr,
const struct rte_flow_item *pattern,
- const struct rte_flow_action *actions);
+ const struct rte_flow_action *actions,
+ const struct tunnel_ops *tunnel_ops);
int port_flow_create(portid_t port_id,
const struct rte_flow_attr *attr,
const struct rte_flow_item *pattern,
- const struct rte_flow_action *actions);
+ const struct rte_flow_action *actions,
+ const struct tunnel_ops *tunnel_ops);
int port_shared_action_query(portid_t port_id, uint32_t id);
void update_age_action_context(const struct rte_flow_action *actions,
struct port_flow *pf);
@@ -788,6 +812,12 @@ int port_flow_query(portid_t port_id, uint32_t rule,
const struct rte_flow_action *action);
void port_flow_list(portid_t port_id, uint32_t n, const uint32_t *group);
void port_flow_aged(portid_t port_id, uint8_t destroy);
+const char *port_flow_tunnel_type(struct rte_flow_tunnel *tunnel);
+struct port_flow_tunnel *
+port_flow_locate_tunnel(uint16_t port_id, struct rte_flow_tunnel *tun);
+void port_flow_tunnel_list(portid_t port_id);
+void port_flow_tunnel_destroy(portid_t port_id, uint32_t tunnel_id);
+void port_flow_tunnel_create(portid_t port_id, const struct tunnel_ops *ops);
int port_flow_isolate(portid_t port_id, int set);
void rx_ring_desc_display(portid_t port_id, queueid_t rxq_id, uint16_t rxd_id);
diff --git a/app/test-pmd/util.c b/app/test-pmd/util.c
index 8488fa1a8f..781a813759 100644
--- a/app/test-pmd/util.c
+++ b/app/test-pmd/util.c
@@ -48,18 +48,49 @@ dump_pkt_burst(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
is_rx ? "received" : "sent",
(unsigned int) nb_pkts);
for (i = 0; i < nb_pkts; i++) {
+ int ret;
+ struct rte_flow_error error;
+ struct rte_flow_restore_info info = { 0, };
+
mb = pkts[i];
eth_hdr = rte_pktmbuf_read(mb, 0, sizeof(_eth_hdr), &_eth_hdr);
eth_type = RTE_BE_TO_CPU_16(eth_hdr->ether_type);
- ol_flags = mb->ol_flags;
packet_type = mb->packet_type;
is_encapsulation = RTE_ETH_IS_TUNNEL_PKT(packet_type);
-
+ ret = rte_flow_get_restore_info(port_id, mb, &info, &error);
+ if (!ret) {
+ printf("restore info:");
+ if (info.flags & RTE_FLOW_RESTORE_INFO_TUNNEL) {
+ struct port_flow_tunnel *port_tunnel;
+
+ port_tunnel = port_flow_locate_tunnel
+ (port_id, &info.tunnel);
+ printf(" - tunnel");
+ if (port_tunnel)
+ printf(" #%u", port_tunnel->id);
+ else
+ printf(" %s", "-none-");
+ printf(" type %s",
+ port_flow_tunnel_type(&info.tunnel));
+ } else {
+ printf(" - no tunnel info");
+ }
+ if (info.flags & RTE_FLOW_RESTORE_INFO_ENCAPSULATED)
+ printf(" - outer header present");
+ else
+ printf(" - no outer header");
+ if (info.flags & RTE_FLOW_RESTORE_INFO_GROUP_ID)
+ printf(" - miss group %u", info.group_id);
+ else
+ printf(" - no miss group");
+ printf("\n");
+ }
print_ether_addr(" src=", ð_hdr->s_addr);
print_ether_addr(" - dst=", ð_hdr->d_addr);
printf(" - type=0x%04x - length=%u - nb_segs=%d",
eth_type, (unsigned int) mb->pkt_len,
(int)mb->nb_segs);
+ ol_flags = mb->ol_flags;
if (ol_flags & PKT_RX_RSS_HASH) {
printf(" - RSS hash=0x%x", (unsigned int) mb->hash.rss);
printf(" - RSS queue=0x%x", (unsigned int) queue);
diff --git a/doc/guides/testpmd_app_ug/testpmd_funcs.rst b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
index 43c0ea0599..05a4446757 100644
--- a/doc/guides/testpmd_app_ug/testpmd_funcs.rst
+++ b/doc/guides/testpmd_app_ug/testpmd_funcs.rst
@@ -3749,6 +3749,45 @@ following sections.
flow aged {port_id} [destroy]
+- Tunnel offload - create a tunnel stub::
+
+ flow tunnel create {port_id} type {tunnel_type}
+
+- Tunnel offload - destroy a tunnel stub::
+
+ flow tunnel destroy {port_id} id {tunnel_id}
+
+- Tunnel offload - list port tunnel stubs::
+
+ flow tunnel list {port_id}
+
+Creating a tunnel stub for offload
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow tunnel create`` sets up a tunnel stub for tunnel offload flow rules::
+
+ flow tunnel create {port_id} type {tunnel_type}
+
+If successful, it will return a tunnel stub ID usable with other commands::
+
+ port [...]: flow tunnel #[...] type [...]
+
+A tunnel stub ID is relative to its port.
+
+Destroying tunnel offload stub
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow tunnel destroy`` destroys a port tunnel stub::
+
+ flow tunnel destroy {port_id} id {tunnel_id}
+
+Listing tunnel offload stubs
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``flow tunnel list`` lists port tunnel offload stubs::
+
+ flow tunnel list {port_id}
+
Validating flow rules
~~~~~~~~~~~~~~~~~~~~~
@@ -3795,6 +3834,7 @@ to ``rte_flow_create()``::
flow create {port_id}
[group {group_id}] [priority {level}] [ingress] [egress] [transfer]
+ [tunnel_set {tunnel_id}] [tunnel_match {tunnel_id}]
pattern {item} [/ {item} [...]] / end
actions {action} [/ {action} [...]] / end
@@ -3809,6 +3849,7 @@ Otherwise it will show an error message of the form::
Parameters are described in the following order:
- Attributes (*group*, *priority*, *ingress*, *egress*, *transfer* tokens).
+- Tunnel offload specification (tunnel_set, tunnel_match)
- A matching pattern, starting with the *pattern* token and terminated by an
*end* pattern item.
- Actions, starting with the *actions* token and terminated by an *end*
@@ -3852,6 +3893,14 @@ Most rules affect RX therefore contain the ``ingress`` token::
testpmd> flow create 0 ingress pattern [...]
+Tunnel offload
+^^^^^^^^^^^^^^
+
+Indicates the tunnel offload rule type:
+
+- ``tunnel_set {tunnel_id}``: mark rule as tunnel offload decap_set type.
+- ``tunnel_match {tunnel_id}``: mark rule as tunnel offload match type.
+
Matching pattern
^^^^^^^^^^^^^^^^
--
2.28.0