From: Gregory Etelson <getelson@mellanox.com>
To: "Kinsella, Ray" <mdr@ashroe.eu>
Cc: Matan Azrad <matan@mellanox.com>,
Raslan Darawsheh <rasland@mellanox.com>,
Ori Kam <orika@mellanox.com>,
John McNamara <john.mcnamara@intel.com>,
Marko Kovacevic <marko.kovacevic@intel.com>,
Neil Horman <nhorman@tuxdriver.com>,
Thomas Monjalon <thomas@monjalon.net>,
Ferruh Yigit <ferruh.yigit@intel.com>,
Andrew Rybchenko <arybchenko@solarflare.com>,
Ajit Khaparde <ajit.khaparde@broadcom.com>,
"sriharsha.basavapatna@broadcom.com"
<sriharsha.basavapatna@broadcom.com>,
"hemal.shah@broadcom.com" <hemal.shah@broadcom.com>,
Eli Britstein <elibr@mellanox.com>, Oz Shlomo <ozsh@mellanox.com>,
"dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH 2/2] ethdev: tunnel offload model
Date: Wed, 1 Jul 2020 06:52:55 +0000 [thread overview]
Message-ID: <DB8PR05MB6761313272973BF937D953BFA86C0@DB8PR05MB6761.eurprd05.prod.outlook.com> (raw)
In-Reply-To: <38d3513f-1261-0fbc-7c56-f83ced61f97a@ashroe.eu>
> -----Original Message-----
> From: Kinsella, Ray <mdr@ashroe.eu>
> Sent: Tuesday, June 30, 2020 14:30
> To: Gregory Etelson <getelson@mellanox.com>
> Cc: Matan Azrad <matan@mellanox.com>; Raslan Darawsheh
> <rasland@mellanox.com>; Ori Kam <orika@mellanox.com>; John McNamara
> <john.mcnamara@intel.com>; Marko Kovacevic
> <marko.kovacevic@intel.com>; Neil Horman <nhorman@tuxdriver.com>;
> Thomas Monjalon <thomas@monjalon.net>; Ferruh Yigit
> <ferruh.yigit@intel.com>; Andrew Rybchenko
> <arybchenko@solarflare.com>; Ajit Khaparde
> <ajit.khaparde@broadcom.com>; sriharsha.basavapatna@broadcom.com;
> hemal.shah@broadcom.com; Eli Britstein <elibr@mellanox.com>; Oz Shlomo
> <ozsh@mellanox.com>
> Subject: Re: [PATCH 2/2] ethdev: tunnel offload model
>
>
>
> On 30/06/2020 10:05, Gregory Etelson wrote:
> >
> > + maintainers
> >
> > -----Original Message-----
> > From: Gregory Etelson <getelson@mellanox.com>
> > Sent: Thursday, June 25, 2020 19:04
> > To: dev@dpdk.org
> > Cc: Gregory Etelson <getelson@mellanox.com>; Matan Azrad
> > <matan@mellanox.com>; Raslan Darawsheh <rasland@mellanox.com>; Eli
> > Britstein <elibr@mellanox.com>; Ori Kam <orika@mellanox.com>
> > Subject: [PATCH 2/2] ethdev: tunnel offload model
> >
> > From: Eli Britstein <elibr@mellanox.com>
> >
> > Hardware vendors implement tunneled traffic offload techniques
> > differently. Although the RTE flow API provides tools capable of
> > offloading all sorts of network stacks, a software application must
> > account for these hardware differences when compiling flow rules. As a
> > result, tunneled-traffic flow rules that utilize hardware capabilities
> > can differ for the same traffic.
> >
> > Tunnel port offload proposed in [1] provides a software application
> > with a unified flow rules model for tunneled traffic regardless of the
> > underlying hardware.
> > - The model introduces a concept of a virtual tunnel port (VTP).
> > - The model uses VTP to offload ingress tunneled network traffic
> > with RTE flow rules.
> > - The model is implemented as a set of helper functions. Each PMD
> >   implements VTP offload according to the underlying hardware offload
> >   capabilities. Applications must query the PMD for VTP flow
> >   items / actions before using them to create a VTP flow rule.
> >
> > The model components:
> > - Virtual Tunnel Port (VTP) is a stateless software object that
> > describes tunneled network traffic. VTP object usually contains
> > descriptions of outer headers, tunnel headers and inner headers.
> > - Tunnel Steering flow Rule (TSR) detects tunneled packets and
> > delegates them to tunnel processing infrastructure, implemented
> > in PMD for optimal hardware utilization, for further processing.
> > - Tunnel Matching flow Rule (TMR) verifies packet configuration and
> > runs offload actions in case of a match.
> >
> > Application actions:
> > 1 Initialize VTP object according to tunnel
> > network parameters.
> > 2 Create TSR flow rule:
> > 2.1 Query PMD for VTP actions: application can query for VTP actions
> > more than once
> > int
> > rte_flow_tunnel_decap_set(uint16_t port_id,
> > struct rte_flow_tunnel *tunnel,
> > struct rte_flow_action **pmd_actions,
> > uint32_t *num_of_pmd_actions,
> > struct rte_flow_error *error);
> >
> > 2.2 Integrate PMD actions into TSR actions list.
> > 2.3 Create TSR flow rule:
> > flow create <port> group 0
> > match {tunnel items} / end
> > actions {PMD actions} / {App actions} / end
> >
> > 3 Create TMR flow rule:
> > 3.1 Query PMD for VTP items: application can query for VTP items
> > more than once
> > int
> > rte_flow_tunnel_match(uint16_t port_id,
> > struct rte_flow_tunnel *tunnel,
> > struct rte_flow_item **pmd_items,
> > uint32_t *num_of_pmd_items,
> > struct rte_flow_error *error);
> >
> > 3.2 Integrate PMD items into TMR items list:
> > 3.3 Create TMR flow rule
> > flow create <port> group 0
> > match {PMD items} / {APP items} / end
> > actions {offload actions} / end
> >
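[Editorial note] Steps 2.2 and 3.2 above both reduce to concatenating the PMD-provided array with the application's own array before passing the result to rte_flow_create. A minimal sketch of that concatenation — the types below are simplified stand-ins for illustration, not the real rte_flow structures:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified stand-ins for the rte_flow action types; the real
 * definitions live in rte_flow.h, where an actions array is
 * terminated by an END entry. */
enum flow_action_type {
	ACTION_TYPE_END = 0,
	ACTION_TYPE_JUMP,
	ACTION_TYPE_DECAP,
};

struct flow_action {
	enum flow_action_type type;
	const void *conf;
};

/* Concatenate the PMD-provided actions (step 2.1) with the
 * application's own actions into one END-terminated list suitable for
 * rte_flow_create (step 2.2). 'out' must have room for
 * num_pmd + num_app + 1 entries. Returns the total entry count. */
static size_t
tsr_actions_concat(struct flow_action *out,
		   const struct flow_action *pmd, size_t num_pmd,
		   const struct flow_action *app, size_t num_app)
{
	memcpy(out, pmd, num_pmd * sizeof(*out));
	memcpy(out + num_pmd, app, num_app * sizeof(*out));
	out[num_pmd + num_app].type = ACTION_TYPE_END;
	return num_pmd + num_app + 1;
}
```

The same shape applies to items in step 3.2, with the pattern array terminated by an END item.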
> > The model provides a helper function call to restore packets that miss
> > tunnel TMR rules to their original state:
> > int
> > rte_flow_get_restore_info(uint16_t port_id,
> > struct rte_mbuf *mbuf,
> > struct rte_flow_restore_info *info,
> > struct rte_flow_error *error);
> >
> > The rte_tunnel object filled by the call, inside the
> > rte_flow_restore_info *info parameter, can be used by the application
> > to create a new TMR rule for that tunnel.
> >
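[Editorial note] On the RX path, the restore info is interpreted through the flags bitmask the patch defines (the RTE_FLOW_RESTORE_INFO_* macros in rte_flow.h). A sketch of the check an application might perform on a miss packet — the macro values below are copied from the patch, while the helper itself is hypothetical:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Flag values copied from the patch's rte_flow.h additions. */
#define RESTORE_INFO_TUNNEL       (1ULL << 0)
#define RESTORE_INFO_ENCAPSULATED (1ULL << 1)
#define RESTORE_INFO_GROUP_ID     (1ULL << 2)

/* A packet that missed its TMR rule needs software tunnel processing
 * when the PMD reports tunnel state; if the outer headers were not
 * decapsulated in hardware, the application must also strip them
 * itself before continuing the pipeline in software. */
static bool
needs_sw_decap(uint64_t flags)
{
	return (flags & RESTORE_INFO_TUNNEL) &&
	       (flags & RESTORE_INFO_ENCAPSULATED);
}
```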
> > The model requirements:
> > The software application must initialize the
> > rte_tunnel object with tunnel parameters before calling
> > rte_flow_tunnel_decap_set() & rte_flow_tunnel_match().
> >
> > The PMD actions array obtained from rte_flow_tunnel_decap_set() must
> > be released by the application with the
> > rte_flow_tunnel_action_decap_release() call. The application can
> > release the actions after the TSR rule was created.
> >
> > The PMD items array obtained with rte_flow_tunnel_match() must be
> > released by the application with the rte_flow_tunnel_item_release()
> > call. The application can release the items after the rule was
> > created. However, if the application needs to create an additional TMR
> > rule for the same tunnel, it will need to obtain the PMD items again.
> >
> > The application cannot destroy an rte_tunnel object before it releases
> > all PMD actions and PMD items referencing that tunnel.
> >
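[Editorial note] The lifetime rules above amount to simple reference counting: the rte_tunnel object must outlive every PMD actions/items array obtained for it. A toy tracker, not part of the proposed API, purely to illustrate the ordering constraint:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical tracker: counts the PMD actions/items arrays currently
 * outstanding for one rte_tunnel object. A decap_set()/match() call
 * would increment it; the matching release call would decrement it. */
struct tunnel_ref {
	int pmd_arrays; /* arrays obtained but not yet released */
};

static void
pmd_array_get(struct tunnel_ref *t)
{
	t->pmd_arrays++;
}

static void
pmd_array_put(struct tunnel_ref *t)
{
	t->pmd_arrays--;
}

/* The tunnel object may be destroyed only once no PMD array refers
 * to it, mirroring the requirement stated above. */
static bool
tunnel_can_destroy(const struct tunnel_ref *t)
{
	return t->pmd_arrays == 0;
}
```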
> > [1] https://mails.dpdk.org/archives/dev/2020-June/169656.html
> >
> > Signed-off-by: Eli Britstein <elibr@mellanox.com>
> > Acked-by: Ori Kam <orika@mellanox.com>
> > ---
> > doc/guides/prog_guide/rte_flow.rst | 105 ++++++++++++
> > lib/librte_ethdev/rte_ethdev_version.map | 5 +
> > lib/librte_ethdev/rte_flow.c | 112 +++++++++++++
> > lib/librte_ethdev/rte_flow.h | 196 +++++++++++++++++++++++
> > lib/librte_ethdev/rte_flow_driver.h | 32 ++++
> > 5 files changed, 450 insertions(+)
> >
> > diff --git a/doc/guides/prog_guide/rte_flow.rst
> > b/doc/guides/prog_guide/rte_flow.rst
> > index d5dd18ce99..cfd98c2e7d 100644
> > --- a/doc/guides/prog_guide/rte_flow.rst
> > +++ b/doc/guides/prog_guide/rte_flow.rst
> > @@ -3010,6 +3010,111 @@ operations include:
> > - Duplication of a complete flow rule description.
> > - Pattern item or action name retrieval.
> >
> > +Tunneled traffic offload
> > +~~~~~~~~~~~~~~~~~~~~~~~~
> > +
> > +Provide a software application with a unified flow rules model for
> > +tunneled traffic regardless of the underlying hardware.
> > +
> > + - The model introduces a concept of a virtual tunnel port (VTP).
> > + - The model uses VTP to offload ingress tunneled network traffic
> > + with RTE flow rules.
> > + - The model is implemented as a set of helper functions. Each PMD
> > +   implements VTP offload according to the underlying hardware offload
> > +   capabilities. Applications must query the PMD for VTP flow
> > +   items / actions before using them to create a VTP flow rule.
> > +
> > +The model components:
> > +
> > +- Virtual Tunnel Port (VTP) is a stateless software object that
> > + describes tunneled network traffic. VTP object usually contains
> > + descriptions of outer headers, tunnel headers and inner headers.
> > +- Tunnel Steering flow Rule (TSR) detects tunneled packets and
> > + delegates them to tunnel processing infrastructure, implemented
> > + in PMD for optimal hardware utilization, for further processing.
> > +- Tunnel Matching flow Rule (TMR) verifies packet configuration and
> > + runs offload actions in case of a match.
> > +
> > +Application actions:
> > +
> > +1 Initialize VTP object according to tunnel network parameters.
> > +
> > +2 Create TSR flow rule.
> > +
> > +2.1 Query PMD for VTP actions. Application can query for VTP actions
> more than once.
> > +
> > + .. code-block:: c
> > +
> > + int
> > + rte_flow_tunnel_decap_set(uint16_t port_id,
> > + struct rte_flow_tunnel *tunnel,
> > + struct rte_flow_action **pmd_actions,
> > + uint32_t *num_of_pmd_actions,
> > + struct rte_flow_error *error);
> > +
> > +2.2 Integrate PMD actions into TSR actions list.
> > +
> > +2.3 Create TSR flow rule.
> > +
> > + .. code-block:: console
> > +
> > + flow create <port> group 0 match {tunnel items} / end actions
> > + {PMD actions} / {App actions} / end
> > +
> > +3 Create TMR flow rule.
> > +
> > +3.1 Query PMD for VTP items. Application can query for VTP items more
> than once.
> > +
> > + .. code-block:: c
> > +
> > + int
> > + rte_flow_tunnel_match(uint16_t port_id,
> > + struct rte_flow_tunnel *tunnel,
> > + struct rte_flow_item **pmd_items,
> > + uint32_t *num_of_pmd_items,
> > + struct rte_flow_error *error);
> > +
> > +3.2 Integrate PMD items into TMR items list.
> > +
> > +3.3 Create TMR flow rule.
> > +
> > + .. code-block:: console
> > +
> > + flow create <port> group 0 match {PMD items} / {APP items} /
> > + end actions {offload actions} / end
> > +
> > +The model provides a helper function call to restore packets that miss
> > +tunnel TMR rules to their original state:
> > +
> > +.. code-block:: c
> > +
> > + int
> > + rte_flow_get_restore_info(uint16_t port_id,
> > + struct rte_mbuf *mbuf,
> > + struct rte_flow_restore_info *info,
> > + struct rte_flow_error *error);
> > +
> > +The rte_tunnel object filled by the call, inside the
> > +``rte_flow_restore_info *info`` parameter, can be used by the
> > +application to create a new TMR rule for that tunnel.
> > +
> > +The model requirements:
> > +
> > +The software application must initialize the
> > +rte_tunnel object with tunnel parameters before calling
> > +rte_flow_tunnel_decap_set() & rte_flow_tunnel_match().
> > +
> > +The PMD actions array obtained from rte_flow_tunnel_decap_set() must
> > +be released by the application with the
> > +rte_flow_tunnel_action_decap_release() call.
> > +The application can release the actions after the TSR rule was created.
> > +
> > +The PMD items array obtained with rte_flow_tunnel_match() must be
> > +released by the application with the rte_flow_tunnel_item_release()
> > +call. The application can release the items after the rule was
> > +created. However, if the application needs to create an additional TMR
> > +rule for the same tunnel, it will need to obtain the PMD items again.
> > +
> > +The application cannot destroy an rte_tunnel object before it
> > +releases all PMD actions and PMD items referencing that tunnel.
> > +
> > Caveats
> > -------
> >
> > diff --git a/lib/librte_ethdev/rte_ethdev_version.map
> > b/lib/librte_ethdev/rte_ethdev_version.map
> > index 7155056045..63800811df 100644
> > --- a/lib/librte_ethdev/rte_ethdev_version.map
> > +++ b/lib/librte_ethdev/rte_ethdev_version.map
> > @@ -241,4 +241,9 @@ EXPERIMENTAL {
> > __rte_ethdev_trace_rx_burst;
> > __rte_ethdev_trace_tx_burst;
> > rte_flow_get_aged_flows;
> > + rte_flow_tunnel_decap_set;
> > + rte_flow_tunnel_match;
> > + rte_flow_tunnel_get_restore_info;
> > + rte_flow_tunnel_action_decap_release;
> > + rte_flow_tunnel_item_release;
> > };
> > diff --git a/lib/librte_ethdev/rte_flow.c
> > b/lib/librte_ethdev/rte_flow.c index c19d25649f..2dc5bfbb3f 100644
> > --- a/lib/librte_ethdev/rte_flow.c
> > +++ b/lib/librte_ethdev/rte_flow.c
> > @@ -1268,3 +1268,115 @@ rte_flow_get_aged_flows(uint16_t port_id,
> void **contexts,
> > RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > NULL, rte_strerror(ENOTSUP));
> > }
> > +
> > +int
> > +rte_flow_tunnel_decap_set(uint16_t port_id,
> > + struct rte_flow_tunnel *tunnel,
> > + struct rte_flow_action **actions,
> > + uint32_t *num_of_actions,
> > + struct rte_flow_error *error)
> > +{
> > + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +
> > + if (unlikely(!ops))
> > + return -rte_errno;
> > + if (likely(!!ops->tunnel_decap_set)) {
> > + return flow_err(port_id,
> > + ops->tunnel_decap_set(dev, tunnel, actions,
> > + num_of_actions, error),
> > + error);
> > + }
> > + return rte_flow_error_set(error, ENOTSUP,
> > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > + NULL, rte_strerror(ENOTSUP));
> > +}
> > +
> > +int
> > +rte_flow_tunnel_match(uint16_t port_id,
> > + struct rte_flow_tunnel *tunnel,
> > + struct rte_flow_item **items,
> > + uint32_t *num_of_items,
> > + struct rte_flow_error *error)
> > +{
> > + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +
> > + if (unlikely(!ops))
> > + return -rte_errno;
> > + if (likely(!!ops->tunnel_match)) {
> > + return flow_err(port_id,
> > + ops->tunnel_match(dev, tunnel, items,
> > + num_of_items, error),
> > + error);
> > + }
> > + return rte_flow_error_set(error, ENOTSUP,
> > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > + NULL, rte_strerror(ENOTSUP));
> > +}
> > +
> > +int
> > +rte_flow_tunnel_get_restore_info(uint16_t port_id,
> > + struct rte_mbuf *m,
> > + struct rte_flow_restore_info *restore_info,
> > + struct rte_flow_error *error)
> > +{
> > + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +
> > + if (unlikely(!ops))
> > + return -rte_errno;
> > + if (likely(!!ops->get_restore_info)) {
> > + return flow_err(port_id,
> > + ops->get_restore_info(dev, m, restore_info,
> > + error),
> > + error);
> > + }
> > + return rte_flow_error_set(error, ENOTSUP,
> > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > + NULL, rte_strerror(ENOTSUP));
> > +}
> > +
> > +int
> > +rte_flow_tunnel_action_decap_release(uint16_t port_id,
> > + struct rte_flow_action *actions,
> > + uint32_t num_of_actions,
> > + struct rte_flow_error *error)
> > +{
> > + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +
> > + if (unlikely(!ops))
> > + return -rte_errno;
> > + if (likely(!!ops->action_release)) {
> > + return flow_err(port_id,
> > + ops->action_release(dev, actions,
> > + num_of_actions, error),
> > + error);
> > + }
> > + return rte_flow_error_set(error, ENOTSUP,
> > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > + NULL, rte_strerror(ENOTSUP));
> > +}
> > +
> > +int
> > +rte_flow_tunnel_item_release(uint16_t port_id,
> > + struct rte_flow_item *items,
> > + uint32_t num_of_items,
> > + struct rte_flow_error *error)
> > +{
> > + struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> > + const struct rte_flow_ops *ops = rte_flow_ops_get(port_id, error);
> > +
> > + if (unlikely(!ops))
> > + return -rte_errno;
> > + if (likely(!!ops->item_release)) {
> > + return flow_err(port_id,
> > + ops->item_release(dev, items,
> > + num_of_items, error),
> > + error);
> > + }
> > + return rte_flow_error_set(error, ENOTSUP,
> > + RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
> > + NULL, rte_strerror(ENOTSUP));
> > +}
> > diff --git a/lib/librte_ethdev/rte_flow.h
> > b/lib/librte_ethdev/rte_flow.h index b0e4199192..1374b6e5a7 100644
> > --- a/lib/librte_ethdev/rte_flow.h
> > +++ b/lib/librte_ethdev/rte_flow.h
> > @@ -3324,6 +3324,202 @@ int
> > rte_flow_get_aged_flows(uint16_t port_id, void **contexts,
> > uint32_t nb_contexts, struct rte_flow_error *error);
> >
> > +/* Tunnel information. */
> > +__rte_experimental
>
> __rte_experimental is not required AFAIK on structure definitions, structure
> definitions are not symbols, just on exported functions and variables.
>
> Did you get a specific warning, that made you add this?
[Gregory Etelson] The attribute is not required in structure definitions.
It is removed in the v2 patch.
>
> > +struct rte_flow_ip_tunnel_key {
> > + rte_be64_t tun_id; /**< Tunnel identification. */
> > + union {
> > + struct {
> > + rte_be32_t src_addr; /**< IPv4 source address. */
> > + rte_be32_t dst_addr; /**< IPv4 destination address.
> */
> > + } ipv4;
> > + struct {
> > + uint8_t src_addr[16]; /**< IPv6 source address. */
> > + uint8_t dst_addr[16]; /**< IPv6 destination address.
> */
> > + } ipv6;
> > + } u;
> > + bool is_ipv6; /**< True for valid IPv6 fields. Otherwise IPv4. */
> > + rte_be16_t tun_flags; /**< Tunnel flags. */
> > + uint8_t tos; /**< TOS for IPv4, TC for IPv6. */
> > + uint8_t ttl; /**< TTL for IPv4, HL for IPv6. */
> > + rte_be32_t label; /**< Flow Label for IPv6. */
> > + rte_be16_t tp_src; /**< Tunnel port source. */
> > + rte_be16_t tp_dst; /**< Tunnel port destination. */
> > +};
> > +
> > +
> > +/* Tunnel has a type and the key information. */
> > +__rte_experimental
> > +struct rte_flow_tunnel {
> > + /**
> > + * Tunnel type, for example RTE_FLOW_ITEM_TYPE_VXLAN,
> > + * RTE_FLOW_ITEM_TYPE_NVGRE etc.
> > + */
> > + enum rte_flow_item_type type;
> > + struct rte_flow_ip_tunnel_key tun_info; /**< Tunnel key info. */
> > +};
> > +
> > +/**
> > + * Indicate that the packet has a tunnel.
> > + */
> > +#define RTE_FLOW_RESTORE_INFO_TUNNEL (1ULL << 0)
> > +
> > +/**
> > + * Indicate that the packet has a non decapsulated tunnel header.
> > + */
> > +#define RTE_FLOW_RESTORE_INFO_ENCAPSULATED (1ULL << 1)
> > +
> > +/**
> > + * Indicate that the packet has a group_id.
> > + */
> > +#define RTE_FLOW_RESTORE_INFO_GROUP_ID (1ULL << 2)
> > +
> > +/**
> > + * Restore information structure to communicate the current packet
> > +processing
> > + * state when some of the processing pipeline is done in hardware and
> > +should
> > + * continue in software.
> > + */
> > +__rte_experimental
> > +struct rte_flow_restore_info {
> > + /**
> > + * Bitwise flags (RTE_FLOW_RESTORE_INFO_*) to indicate validation
> of
> > + * other fields in struct rte_flow_restore_info.
> > + */
> > + uint64_t flags;
> > + uint32_t group_id; /**< Group ID. */
> > + struct rte_flow_tunnel tunnel; /**< Tunnel information. */
> > +};
> > +
> > +/**
> > + * Allocate an array of actions to be used in rte_flow_create, to
> > +implement
> > + * tunnel-decap-set for the given tunnel.
> > + * Sample usage:
> > + * actions vxlan_decap / tunnel-decap-set(tunnel properties) /
> > + * jump group 0 / end
> > + *
> > + * @param port_id
> > + * Port identifier of Ethernet device.
> > + * @param[in] tunnel
> > + * Tunnel properties.
> > + * @param[out] actions
> > + * Array of actions to be allocated by the PMD. This array should be
> > + * concatenated with the actions array provided to rte_flow_create.
> > + * @param[out] num_of_actions
> > + * Number of actions allocated.
> > + * @param[out] error
> > + * Perform verbose error reporting if not NULL. PMDs initialize this
> > + * structure in case of error only.
> > + *
> > + * @return
> > + * 0 on success, a negative errno value otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_tunnel_decap_set(uint16_t port_id,
> > + struct rte_flow_tunnel *tunnel,
> > + struct rte_flow_action **actions,
> > + uint32_t *num_of_actions,
> > + struct rte_flow_error *error);
> > +
> > +/**
> > + * Allocate an array of items to be used in rte_flow_create, to
> > +implement
> > + * tunnel-match for the given tunnel.
> > + * Sample usage:
> > + * pattern tunnel-match(tunnel properties) / outer-header-matches /
> > + * inner-header-matches / end
> > + *
> > + * @param port_id
> > + * Port identifier of Ethernet device.
> > + * @param[in] tunnel
> > + * Tunnel properties.
> > + * @param[out] items
> > + * Array of items to be allocated by the PMD. This array should be
> > + * concatenated with the items array provided to rte_flow_create.
> > + * @param[out] num_of_items
> > + * Number of items allocated.
> > + * @param[out] error
> > + * Perform verbose error reporting if not NULL. PMDs initialize this
> > + * structure in case of error only.
> > + *
> > + * @return
> > + * 0 on success, a negative errno value otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_tunnel_match(uint16_t port_id,
> > + struct rte_flow_tunnel *tunnel,
> > + struct rte_flow_item **items,
> > + uint32_t *num_of_items,
> > + struct rte_flow_error *error);
> > +
> > +/**
> > + * Populate the current packet processing state, if exists, for the given
> mbuf.
> > + *
> > + * @param port_id
> > + * Port identifier of Ethernet device.
> > + * @param[in] m
> > + * Mbuf struct.
> > + * @param[out] info
> > + * Restore information. Upon success contains the HW state.
> > + * @param[out] error
> > + * Perform verbose error reporting if not NULL. PMDs initialize this
> > + * structure in case of error only.
> > + *
> > + * @return
> > + * 0 on success, a negative errno value otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_tunnel_get_restore_info(uint16_t port_id,
> > + struct rte_mbuf *m,
> > + struct rte_flow_restore_info *info,
> > + struct rte_flow_error *error);
> > +
> > +/**
> > + * Release the action array as allocated by rte_flow_tunnel_decap_set.
> > + *
> > + * @param port_id
> > + * Port identifier of Ethernet device.
> > + * @param[in] actions
> > + * Array of actions to be released.
> > + * @param[in] num_of_actions
> > + * Number of elements in actions array.
> > + * @param[out] error
> > + * Perform verbose error reporting if not NULL. PMDs initialize this
> > + * structure in case of error only.
> > + *
> > + * @return
> > + * 0 on success, a negative errno value otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_tunnel_action_decap_release(uint16_t port_id,
> > + struct rte_flow_action *actions,
> > + uint32_t num_of_actions,
> > + struct rte_flow_error *error);
> > +
> > +/**
> > + * Release the item array as allocated by rte_flow_tunnel_match.
> > + *
> > + * @param port_id
> > + * Port identifier of Ethernet device.
> > + * @param[in] items
> > + * Array of items to be released.
> > + * @param[in] num_of_items
> > + * Number of elements in item array.
> > + * @param[out] error
> > + * Perform verbose error reporting if not NULL. PMDs initialize this
> > + * structure in case of error only.
> > + *
> > + * @return
> > + * 0 on success, a negative errno value otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_tunnel_item_release(uint16_t port_id,
> > + struct rte_flow_item *items,
> > + uint32_t num_of_items,
> > + struct rte_flow_error *error);
> > #ifdef __cplusplus
> > }
> > #endif
> > diff --git a/lib/librte_ethdev/rte_flow_driver.h
> > b/lib/librte_ethdev/rte_flow_driver.h
> > index 881cc469b7..ad1d7a2cdc 100644
> > --- a/lib/librte_ethdev/rte_flow_driver.h
> > +++ b/lib/librte_ethdev/rte_flow_driver.h
> > @@ -107,6 +107,38 @@ struct rte_flow_ops {
> > void **context,
> > uint32_t nb_contexts,
> > struct rte_flow_error *err);
> > + /** See rte_flow_tunnel_decap_set() */
> > + int (*tunnel_decap_set)
> > + (struct rte_eth_dev *dev,
> > + struct rte_flow_tunnel *tunnel,
> > + struct rte_flow_action **pmd_actions,
> > + uint32_t *num_of_actions,
> > + struct rte_flow_error *err);
> > + /** See rte_flow_tunnel_match() */
> > + int (*tunnel_match)
> > + (struct rte_eth_dev *dev,
> > + struct rte_flow_tunnel *tunnel,
> > + struct rte_flow_item **pmd_items,
> > + uint32_t *num_of_items,
> > + struct rte_flow_error *err);
> > + /** See rte_flow_tunnel_get_restore_info() */
> > + int (*get_restore_info)
> > + (struct rte_eth_dev *dev,
> > + struct rte_mbuf *m,
> > + struct rte_flow_restore_info *info,
> > + struct rte_flow_error *err);
> > + /** See rte_flow_tunnel_action_decap_release() */
> > + int (*action_release)
> > + (struct rte_eth_dev *dev,
> > + struct rte_flow_action *pmd_actions,
> > + uint32_t num_of_actions,
> > + struct rte_flow_error *err);
> > + /** See rte_flow_tunnel_item_release() */
> > + int (*item_release)
> > + (struct rte_eth_dev *dev,
> > + struct rte_flow_item *pmd_items,
> > + uint32_t num_of_items,
> > + struct rte_flow_error *err);
> > };
> >
> > /**
> > --
> > 2.25.1
> >