From: Gregory Etelson <getelson@nvidia.com>
To: <dev@dpdk.org>
Cc: <getelson@nvidia.com>, <matan@nvidia.com>, <rasland@nvidia.com>,
	<elibr@nvidia.com>, <ozsh@nvidia.com>,
	<ajit.khaparde@broadcom.com>, <asafp@nvidia.com>,
	Eli Britstein <elibr@mellanox.com>, Ori Kam <orika@nvidia.com>,
	Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
	"Ray Kinsella" <mdr@ashroe.eu>,
	Neil Horman <nhorman@tuxdriver.com>,
	"Thomas Monjalon" <thomas@monjalon.net>,
	Ferruh Yigit <ferruh.yigit@intel.com>,
	Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Subject: [dpdk-dev] [PATCH v7 2/3] ethdev: tunnel offload model
Date: Fri, 16 Oct 2020 13:33:59 +0300
Message-ID: <20201016103400.21311-3-getelson@nvidia.com>
In-Reply-To: <20201016103400.21311-1-getelson@nvidia.com>

From: Eli Britstein <elibr@mellanox.com>

rte_flow API provides the building blocks for vendor-agnostic flow
classification offloads. The rte_flow "patterns" and "actions"
primitives are fine-grained, thus giving DPDK applications the
flexibility to offload network stacks and complex pipelines.
Applications wishing to offload tunneled traffic are required to use
rte_flow primitives, such as group, meta, mark, and tag, to model
their high-level objects.  Designing a hardware model for high-level
software objects is not trivial.  Furthermore, an optimal design is
often vendor-specific.

When hardware offloads tunneled traffic in multi-group logic,
partially offloaded packets may arrive at the application after they
have been modified in hardware. In this case, the application may
need to restore the original packet headers. Consider the following
sequence: the application decaps a packet in one group and jumps to a
second group where it tries to match on a 5-tuple. That match misses,
and the packet is sent to the application. The application thus
receives a modified packet rather than the original one, and it
cannot match on the outer header fields, such as the VXLAN VNI and
the outer 5-tuple.

There are several possible ways to use rte_flow "patterns" and
"actions" to resolve the issues above. For example:
1. Map headers to hardware registers using the
rte_flow_action_mark/rte_flow_action_tag/rte_flow_set_meta objects.
2. Apply the decap action only at the last offload stage, after all
the "patterns" have been matched and the packet is fully offloaded.
Every approach has its pros and cons, and the choice is highly
dependent on the hardware vendor.  For example, some hardware may have
a limited number of registers, while other hardware cannot support
inner actions and must decap before accessing inner headers. Approach
1 is sketched below for illustration.
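
A minimal sketch of approach 1 (illustration only; the mark ID and
exact action layout are assumptions, not part of this patch):

/* Mark tunneled packets in group 0 so software can identify them
 * after a miss in a later group. */
struct rte_flow_action_mark mark = { .id = 0x1234 }; /* assumed mark ID */
struct rte_flow_action_jump jump = { .group = 3 };
struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
	{ .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};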

The tunnel offload model resolves these issues. The model goals are:
1. Provide a unified application API to offload tunneled traffic that
is capable of matching on outer headers after decap.
2. Allow the application to restore the outer header of partially
offloaded packets.

The tunnel offload model does not introduce new elements to the
existing RTE flow model and is implemented as a set of helper
functions.
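
For reference, the helper set added by this patch (full declarations
appear in rte_flow.h further below):

  rte_flow_tunnel_decap_set()            - obtain PMD decap actions
  rte_flow_tunnel_match()                - obtain PMD match items
  rte_flow_get_restore_info()            - restore info for a missed packet
  rte_flow_tunnel_action_decap_release() - release PMD actions
  rte_flow_tunnel_item_release()         - release PMD items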

To work with the tunnel offload API, the application has to adjust
its flow rules for multi-table tunnel offload in the following way:
1. Remove the explicit decap action and replace it with the PMD
actions obtained from the rte_flow_tunnel_decap_set() helper.
2. Add the PMD items obtained from the rte_flow_tunnel_match() helper
to all other rules in the tunnel offload sequence.

VXLAN Code example:

Assume the application needs to do inner NAT on VXLAN packets.
The first rule in group 0:

flow create <port id> ingress group 0
  pattern eth / ipv4 / udp dst is 4789 / vxlan / end
  actions {pmd actions} / jump group 3 / end

The first VXLAN packet that arrives matches the rule in group 0 and
jumps to group 3.  In group 3 the packet misses, since there is no
flow to match, and is sent to the application.  The application then
calls rte_flow_get_restore_info() to get the packet's outer header.

The application inserts a new rule in group 3 to match the outer and
inner headers:

flow create <port id> ingress group 3
  pattern {pmd items} / eth / ipv4 dst is 172.10.10.1 /
          udp dst 4789 / vxlan vni is 10 /
          ipv4 dst is 184.1.2.3 / end
  actions  set_ipv4_dst  186.1.1.1 / queue index 3 / end

As a result, a VXLAN packet with VNI=10, outer IPv4 dst=172.10.10.1
and inner IPv4 dst=184.1.2.3 will be received decapped on queue 3,
with its inner IPv4 destination rewritten to 186.1.1.1.

Note: the packet in group 3 is considered decapped. All actions in
that group are applied to the header that was the inner header before
decap. The application may still specify outer header fields to match
on; it is the PMD's responsibility to translate these items to outer
metadata.

API usage:

/**
 * 1. Initialize an RTE flow tunnel object
 */
struct rte_flow_tunnel tunnel = {
  .type = RTE_FLOW_ITEM_TYPE_VXLAN,
  .tun_id = 10,
};

/**
 * 2. Obtain PMD tunnel actions
 *
 * pmd_actions is an intermediate variable the application uses to
 * build the final actions array
 */
struct rte_flow_action *pmd_actions;
uint32_t num_pmd_actions;
rte_flow_tunnel_decap_set(port_id, &tunnel, &pmd_actions,
                          &num_pmd_actions, &error);
/**
 * 3. Offload the first rule:
 * match VXLAN traffic and jump to group 3
 * (implicitly decaps the packet)
 */
app_actions = jump group 3
rule_items = app_items;  /** eth / ipv4 / udp / vxlan */
rule_actions = { pmd_actions, app_actions };
attr.group = 0;
flow_1 = rte_flow_create(port_id, &attr,
                         rule_items, rule_actions, &error);

/**
 * 4. After flow creation the application does not need to keep the
 * tunnel action resources and can release them.
 */
rte_flow_tunnel_action_decap_release(port_id, pmd_actions,
                                     num_pmd_actions, &error);
/**
 * 5. Handle a miss in group 3: a partially offloaded packet was
 * received because no rule matched there. Retrieve its restore info.
 */
struct rte_flow_restore_info info;
rte_flow_get_restore_info(port_id, mbuf, &info, &error);
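
/**
 * 5a. (sketch, not part of the original example) info.flags
 * indicates which restore fields are valid; check it before use
 */
if (info.flags & RTE_FLOW_RESTORE_INFO_TUNNEL) {
        /* info.tunnel describes the tunnel of the missed packet */
}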

/**
 * 6. Offload NAT rule:
 */
app_items = { eth / ipv4 dst is 172.10.10.1 / udp dst 4789 /
            vxlan vni is 10 / ipv4 dst is 184.1.2.3 }
app_actions = { set_ipv4_dst 186.1.1.1 / queue index 3 }

struct rte_flow_item *pmd_items;
uint32_t num_pmd_items;
rte_flow_tunnel_match(port_id, &info.tunnel, &pmd_items,
                      &num_pmd_items, &error);
rule_items = {pmd_items, app_items};
rule_actions = app_actions;
attr.group = info.group_id;
flow_2 = rte_flow_create(port_id, &attr,
                         rule_items, rule_actions, &error);

/**
 * 7. Release PMD items after rule creation
 */
rte_flow_tunnel_item_release(port_id, pmd_items,
                             num_pmd_items, &error);
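
/**
 * 8. (sketch, assuming standard rte_flow teardown; not part of the
 * original example) destroy the offloaded flows when the tunnel
 * is torn down
 */
rte_flow_destroy(port_id, flow_2, &error);
rte_flow_destroy(port_id, flow_1, &error);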

References
1. https://mails.dpdk.org/archives/dev/2020-June/index.html

Signed-off-by: Eli Britstein <elibr@mellanox.com>
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

---
v5:
* rebase to next-net

v6:
* update the patch comment
* update tunnel offload section in rte_flow.rst
---
 doc/guides/prog_guide/rte_flow.rst       |  78 +++++++++
 doc/guides/rel_notes/release_20_11.rst   |   5 +
 lib/librte_ethdev/rte_ethdev_version.map |   5 +
 lib/librte_ethdev/rte_flow.c             | 112 +++++++++++++
 lib/librte_ethdev/rte_flow.h             | 195 +++++++++++++++++++++++
 lib/librte_ethdev/rte_flow_driver.h      |  32 ++++
 6 files changed, 427 insertions(+)

diff --git a/doc/guides/prog_guide/rte_flow.rst b/doc/guides/prog_guide/rte_flow.rst
index 7fb5ec9059..8dc048c6f4 100644
--- a/doc/guides/prog_guide/rte_flow.rst
+++ b/doc/guides/prog_guide/rte_flow.rst
@@ -3131,6 +3131,84 @@ operations include:
 - Duplication of a complete flow rule description.
 - Pattern item or action name retrieval.
 
+Tunneled traffic offload
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+rte_flow API provides the building blocks for vendor-agnostic flow
+classification offloads. The rte_flow "patterns" and "actions"
+primitives are fine-grained, thus giving DPDK applications the
+flexibility to offload network stacks and complex pipelines.
+Applications wishing to offload tunneled traffic are required to use
+rte_flow primitives, such as group, meta, mark, and tag, to model
+their high-level objects.  Designing a hardware model for high-level
+software objects is not trivial.  Furthermore, an optimal design is
+often vendor-specific.
+
+When hardware offloads tunneled traffic in multi-group logic,
+partially offloaded packets may arrive at the application after they
+have been modified in hardware. In this case, the application may
+need to restore the original packet headers. Consider the following
+sequence: the application decaps a packet in one group and jumps to a
+second group where it tries to match on a 5-tuple. That match misses,
+and the packet is sent to the application. The application thus
+receives a modified packet rather than the original one, and it
+cannot match on the outer header fields, such as the VXLAN VNI and
+the outer 5-tuple.
+
+There are several possible ways to use rte_flow "patterns" and
+"actions" to resolve the issues above. For example:
+
+1. Map headers to hardware registers using the
+rte_flow_action_mark/rte_flow_action_tag/rte_flow_set_meta objects.
+
+2. Apply the decap action only at the last offload stage, after all
+the "patterns" have been matched and the packet is fully offloaded.
+
+Every approach has its pros and cons, and the choice is highly
+dependent on the hardware vendor.  For example, some hardware may have
+a limited number of registers, while other hardware cannot support
+inner actions and must decap before accessing inner headers.
+
+The tunnel offload model resolves these issues. The model goals are:
+
+1. Provide a unified application API to offload tunneled traffic that
+is capable of matching on outer headers after decap.
+
+2. Allow the application to restore the outer header of partially
+offloaded packets.
+
+The tunnel offload model does not introduce new elements to the
+existing RTE flow model and is implemented as a set of helper
+functions.
+
+To work with the tunnel offload API, the application has to adjust
+its flow rules for multi-table tunnel offload in the following
+way:
+
+1. Remove the explicit decap action and replace it with the PMD
+actions obtained from the rte_flow_tunnel_decap_set() helper.
+
+2. Add the PMD items obtained from the rte_flow_tunnel_match() helper
+to all other rules in the tunnel offload sequence.
+
+The model requirements:
+
+The application must initialize the rte_flow_tunnel object with
+tunnel parameters before calling rte_flow_tunnel_decap_set() and
+rte_flow_tunnel_match().
+
+The PMD actions array obtained from rte_flow_tunnel_decap_set() must
+be released by the application with rte_flow_tunnel_action_decap_release().
+
+The PMD items array obtained from rte_flow_tunnel_match() must be
+released by the application with rte_flow_tunnel_item_release().
+The application can release PMD items and actions after the rule is
+created. However, if the application needs to create an additional
+rule for the same tunnel, it must obtain the PMD items again.
+
+The application cannot destroy the rte_flow_tunnel object before it
+releases all PMD actions and PMD items referencing that tunnel.
+
 Caveats
 -------
 
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 9155b468d6..f125ce79dd 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -121,6 +121,11 @@ New Features
   * Flow rule verification was updated to accept private PMD
     items and actions.
 
+* **Added generic API to offload tunneled traffic and restore missed packets.**
+
+  * Added a new hardware-independent helper API to the RTE flow library that
+    offloads tunneled traffic and restores missed packets.
+
 * **Updated Cisco enic driver.**
 
   * Added support for VF representors with single-queue Tx/Rx and flow API
diff --git a/lib/librte_ethdev/rte_ethdev_version.map b/lib/librte_ethdev/rte_ethdev_version.map
index f64c379ac2..8ddda2547f 100644
--- a/lib/librte_ethdev/rte_ethdev_version.map
+++ b/lib/librte_ethdev/rte_ethdev_version.map
@@ -239,6 +239,11 @@ EXPERIMENTAL {
 	rte_flow_shared_action_destroy;
 	rte_flow_shared_action_query;
 	rte_flow_shared_action_update;
+	rte_flow_tunnel_decap_set;
+	rte_flow_tunnel_match;
+	rte_flow_get_restore_info;
+	rte_flow_tunnel_action_decap_release;
+	rte_flow_tunnel_item_release;
 };
 
 INTERNAL {
diff --git a/lib/librte_ethdev/rte_flow.c b/lib/librte_ethdev/rte_flow.c
index b74ea5593a..380c5cae2c 100644
--- a/lib/librte_ethdev/rte_flow.c
+++ b/lib/librte_ethdev/rte_flow.c
@@ -1143,3 +1143,115 @@ rte_flow_shared_action_query(uint16_t port_id,
 				       data, error);
 	return flow_err(port_id, ret, error);
 }
+
+int
+rte_flow_tunnel_decap_set(uint16_t port_id,
+			  struct rte_flow_tunnel *tunnel,
+			  struct rte_flow_action **actions,
+			  uint32_t *num_of_actions,
+			  struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->tunnel_decap_set)) {
+		return flow_err(port_id,
+				ops->tunnel_decap_set(dev, tunnel, actions,
+						      num_of_actions, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_tunnel_match(uint16_t port_id,
+		      struct rte_flow_tunnel *tunnel,
+		      struct rte_flow_item **items,
+		      uint32_t *num_of_items,
+		      struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->tunnel_match)) {
+		return flow_err(port_id,
+				ops->tunnel_match(dev, tunnel, items,
+						  num_of_items, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_get_restore_info(uint16_t port_id,
+			  struct rte_mbuf *m,
+			  struct rte_flow_restore_info *restore_info,
+			  struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->get_restore_info)) {
+		return flow_err(port_id,
+				ops->get_restore_info(dev, m, restore_info,
+						      error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_tunnel_action_decap_release(uint16_t port_id,
+				     struct rte_flow_action *actions,
+				     uint32_t num_of_actions,
+				     struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->action_release)) {
+		return flow_err(port_id,
+				ops->action_release(dev, actions,
+						    num_of_actions, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
+
+int
+rte_flow_tunnel_item_release(uint16_t port_id,
+			     struct rte_flow_item *items,
+			     uint32_t num_of_items,
+			     struct rte_flow_error *error)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+	const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
+
+	if (unlikely(!ops))
+		return -rte_errno;
+	if (likely(!!ops->item_release)) {
+		return flow_err(port_id,
+				ops->item_release(dev, items,
+						  num_of_items, error),
+				error);
+	}
+	return rte_flow_error_set(error, ENOTSUP,
+				  RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				  NULL, rte_strerror(ENOTSUP));
+}
diff --git a/lib/librte_ethdev/rte_flow.h b/lib/librte_ethdev/rte_flow.h
index 48395284b5..a8eac4deb8 100644
--- a/lib/librte_ethdev/rte_flow.h
+++ b/lib/librte_ethdev/rte_flow.h
@@ -3620,6 +3620,201 @@ rte_flow_shared_action_query(uint16_t port_id,
 			     void *data,
 			     struct rte_flow_error *error);
 
+/* Tunnel has a type and the key information. */
+struct rte_flow_tunnel {
+	/**
+	 * Tunnel type, for example RTE_FLOW_ITEM_TYPE_VXLAN,
+	 * RTE_FLOW_ITEM_TYPE_NVGRE etc.
+	 */
+	enum rte_flow_item_type	type;
+	uint64_t tun_id; /**< Tunnel identification. */
+
+	RTE_STD_C11
+	union {
+		struct {
+			rte_be32_t src_addr; /**< IPv4 source address. */
+			rte_be32_t dst_addr; /**< IPv4 destination address. */
+		} ipv4;
+		struct {
+			uint8_t src_addr[16]; /**< IPv6 source address. */
+			uint8_t dst_addr[16]; /**< IPv6 destination address. */
+		} ipv6;
+	};
+	rte_be16_t tp_src; /**< Tunnel source port. */
+	rte_be16_t tp_dst; /**< Tunnel destination port. */
+	uint16_t   tun_flags; /**< Tunnel flags. */
+
+	bool       is_ipv6; /**< True for valid IPv6 fields. Otherwise IPv4. */
+
+	/**
+	 * The following members are required to restore the packet
+	 * after a miss.
+	 */
+	uint8_t    tos; /**< TOS for IPv4, TC for IPv6. */
+	uint8_t    ttl; /**< TTL for IPv4, HL for IPv6. */
+	uint32_t label; /**< Flow Label for IPv6. */
+};
+
+/**
+ * Indicate that the packet has a tunnel.
+ */
+#define RTE_FLOW_RESTORE_INFO_TUNNEL  (1ULL << 0)
+
+/**
+ * Indicate that the packet has a non-decapsulated tunnel header.
+ */
+#define RTE_FLOW_RESTORE_INFO_ENCAPSULATED  (1ULL << 1)
+
+/**
+ * Indicate that the packet has a group_id.
+ */
+#define RTE_FLOW_RESTORE_INFO_GROUP_ID  (1ULL << 2)
+
+/**
+ * Restore information structure to communicate the current packet processing
+ * state when some of the processing pipeline is done in hardware and should
+ * continue in software.
+ */
+struct rte_flow_restore_info {
+	/**
+	 * Bitwise flags (RTE_FLOW_RESTORE_INFO_*) to indicate the validity
+	 * of other fields in struct rte_flow_restore_info.
+	 */
+	uint64_t flags;
+	uint32_t group_id; /**< Group ID where the packet missed. */
+	struct rte_flow_tunnel tunnel; /**< Tunnel information. */
+};
+
+/**
+ * Allocate an array of actions to be used in rte_flow_create, to implement
+ * tunnel-decap-set for the given tunnel.
+ * Sample usage:
+ *   actions vxlan_decap / tunnel-decap-set(tunnel properties) /
+ *            jump group 0 / end
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] tunnel
+ *   Tunnel properties.
+ * @param[out] actions
+ *   Array of actions to be allocated by the PMD. This array should be
+ *   concatenated with the actions array provided to rte_flow_create.
+ * @param[out] num_of_actions
+ *   Number of actions allocated.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_tunnel_decap_set(uint16_t port_id,
+			  struct rte_flow_tunnel *tunnel,
+			  struct rte_flow_action **actions,
+			  uint32_t *num_of_actions,
+			  struct rte_flow_error *error);
+
+/**
+ * Allocate an array of items to be used in rte_flow_create, to implement
+ * tunnel-match for the given tunnel.
+ * Sample usage:
+ *   pattern tunnel-match(tunnel properties) / outer-header-matches /
+ *           inner-header-matches / end
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] tunnel
+ *   Tunnel properties.
+ * @param[out] items
+ *   Array of items to be allocated by the PMD. This array should be
+ *   concatenated with the items array provided to rte_flow_create.
+ * @param[out] num_of_items
+ *   Number of items allocated.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_tunnel_match(uint16_t port_id,
+		      struct rte_flow_tunnel *tunnel,
+		      struct rte_flow_item **items,
+		      uint32_t *num_of_items,
+		      struct rte_flow_error *error);
+
+/**
+ * Populate the current packet processing state, if exists, for the given mbuf.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] m
+ *   Mbuf struct.
+ * @param[out] info
+ *   Restore information. Upon success contains the HW state.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_get_restore_info(uint16_t port_id,
+			  struct rte_mbuf *m,
+			  struct rte_flow_restore_info *info,
+			  struct rte_flow_error *error);
+
+/**
+ * Release the action array as allocated by rte_flow_tunnel_decap_set.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] actions
+ *   Array of actions to be released.
+ * @param[in] num_of_actions
+ *   Number of elements in actions array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_tunnel_action_decap_release(uint16_t port_id,
+				     struct rte_flow_action *actions,
+				     uint32_t num_of_actions,
+				     struct rte_flow_error *error);
+
+/**
+ * Release the item array as allocated by rte_flow_tunnel_match.
+ *
+ * @param port_id
+ *   Port identifier of Ethernet device.
+ * @param[in] items
+ *   Array of items to be released.
+ * @param[in] num_of_items
+ *   Number of elements in item array.
+ * @param[out] error
+ *   Perform verbose error reporting if not NULL. PMDs initialize this
+ *   structure in case of error only.
+ *
+ * @return
+ *   0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+__rte_experimental
+int
+rte_flow_tunnel_item_release(uint16_t port_id,
+			     struct rte_flow_item *items,
+			     uint32_t num_of_items,
+			     struct rte_flow_error *error);
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_ethdev/rte_flow_driver.h b/lib/librte_ethdev/rte_flow_driver.h
index 58f56b0262..bd5ffc0bb1 100644
--- a/lib/librte_ethdev/rte_flow_driver.h
+++ b/lib/librte_ethdev/rte_flow_driver.h
@@ -131,6 +131,38 @@ struct rte_flow_ops {
 		 const struct rte_flow_shared_action *shared_action,
 		 void *data,
 		 struct rte_flow_error *error);
+	/** See rte_flow_tunnel_decap_set() */
+	int (*tunnel_decap_set)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_tunnel *tunnel,
+		 struct rte_flow_action **pmd_actions,
+		 uint32_t *num_of_actions,
+		 struct rte_flow_error *err);
+	/** See rte_flow_tunnel_match() */
+	int (*tunnel_match)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_tunnel *tunnel,
+		 struct rte_flow_item **pmd_items,
+		 uint32_t *num_of_items,
+		 struct rte_flow_error *err);
+	/** See rte_flow_get_restore_info() */
+	int (*get_restore_info)
+		(struct rte_eth_dev *dev,
+		 struct rte_mbuf *m,
+		 struct rte_flow_restore_info *info,
+		 struct rte_flow_error *err);
+	/** See rte_flow_tunnel_action_decap_release() */
+	int (*action_release)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_action *pmd_actions,
+		 uint32_t num_of_actions,
+		 struct rte_flow_error *err);
+	/** See rte_flow_tunnel_item_release() */
+	int (*item_release)
+		(struct rte_eth_dev *dev,
+		 struct rte_flow_item *pmd_items,
+		 uint32_t num_of_items,
+		 struct rte_flow_error *err);
 };
 
 /**
-- 
2.28.0


